Enabling multitouch in input-evdev

Benjamin Tissoires tissoire at cena.fr
Mon Jan 18 01:15:53 PST 2010


Peter Hutterer wrote:
> On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
>   
>> Why do you think it would require protocol changes? For the new
>> event type? If I understand it correctly, events for devices can
>> contain any number of valuators... is it possible to have x1,y1
>> x2,y2 x3,y3 and so-on?
>>     
>
> correct, there's a valuator limit of 36 but even that should be fine for a
> single device. with axis labelling it's now even possible to simply claim:
> hey, here's 24 axes but they represent 12 different touchpoints.
> I hadn't really thought about this approach yet because IMO touch is more
> than just simply pairs of coordinates and that's what I'd eventually like to
> get in. As an intermediate option your approach would definitely work, it'd
> be easy to integrate and should hold up with the current system.
>
> bonus point is also that the core emulation automatically works on the first
> touchpoint only, without any extra help.
>
> And whether more is needed (i.e. complex touch events) is something that can
> be decided later when we have a bit more experience on what apps may need.
> Stephane, do you have opinions on this?
>
>   
I agree we can have multiple XI2 valuators to keep the touches packed. 
However, how can we tell the toolkit that a touch started, ended, or was 
aborted (I think that is the behavior on MacOS, but I'm not sure)? I 
don't think we can rely on button events, as the other layers would 
receive button up/down events that have no meaning.

Maybe a solution would be to have a third valuator for each touch 
carrying the tracking id, for instance:
  -1 means no value
  -2 means error (or aborted)
 > 0 means started and active
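
To make the idea more concrete, here is a minimal sketch (not my actual 
patch) of how an evdev driver could pack each touch as an (x, y, tracking 
id) triple of XI2 valuators and post them in one motion event, following 
the convention above. MAX_TOUCHES, the EvdevMT struct and the constant 
names are only assumptions for the example, and error handling is omitted:

/* hypothetical sketch, not the real evdev code */
#include <xorg-server.h>
#include <xf86Xinput.h>
#include <linux/input.h>

#define MAX_TOUCHES      5
#define VALS_PER_TOUCH   3          /* x, y, tracking id */
#define TRACKID_NONE   (-1)         /* no value */
#define TRACKID_ABORT  (-2)         /* error / aborted */

typedef struct {
    int vals[MAX_TOUCHES * VALS_PER_TOUCH];
    int cur;                        /* touch currently being filled */
} EvdevMT;

static void
EvdevMTProcessEvent(InputInfoPtr pInfo, EvdevMT *mt,
                    const struct input_event *ev)
{
    if (ev->type == EV_ABS && mt->cur < MAX_TOUCHES) {
        int *slot = &mt->vals[mt->cur * VALS_PER_TOUCH];

        switch (ev->code) {
        case ABS_MT_POSITION_X:  slot[0] = ev->value; break;
        case ABS_MT_POSITION_Y:  slot[1] = ev->value; break;
        case ABS_MT_TRACKING_ID: slot[2] = ev->value; break;
        }
    } else if (ev->type == EV_SYN && ev->code == SYN_MT_REPORT) {
        mt->cur++;                  /* next touch within this frame */
    } else if (ev->type == EV_SYN && ev->code == SYN_REPORT) {
        int i;

        /* mark the unused slots as "no value" */
        for (i = mt->cur; i < MAX_TOUCHES; i++)
            mt->vals[i * VALS_PER_TOUCH + 2] = TRACKID_NONE;

        /* a single motion event carrying all touchpoints */
        xf86PostMotionEventP(pInfo->dev, TRUE /* absolute */,
                             0, MAX_TOUCHES * VALS_PER_TOUCH, mt->vals);
        mt->cur = 0;
    }
}

The nice side effect is that core emulation only ever sees the first 
triple, so the first touchpoint keeps working for legacy clients.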

>> From what I can tell, there are a number of challenges that would
>> require driver and server changes. In particular, being able to do
>> collaborative work on a large multi-touch surface requires the
>> driver (or server, not sure which makes most sense) to be able to
>> somehow split the touch points between multiple windows. This is
>> something that we had to do in Qt at least.
>>     
>
> packing all coordinates into one event essentially makes it a single
> point with auxiliary touchpoints. This is useful for a number of things
> (most gestures that we see today like pinch and rotate should work in this
> approach) but not useful once you have truly independent data from multiple
> hands or users. that's when you have to do more flexible picking in the
> server and that requires more work.
>
> given the easy case of a single user interacting with a surface:
> with Qt as it is now, if you get all the coordinates in a single XI2 event,
> would that work for you? or is there extra information you need?
>
> Cheers,
>   Peter
For the question of multiple users, if we consider that the toolkit has 
to do the job for the gestures, it can also control a new cursor thanks 
to XTest. However, that would introduce some lag in the loop... Maybe we 
want an intermediate solution: the toolkit specifies which track has to be 
split off from the rest of the events into a new device (in the same way my 
patch works). It's just an idea.
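
To illustrate what I mean by controlling a new cursor with XTest (again 
just a sketch, not working code): the toolkit could create a second master 
pointer through the XI2 hierarchy calls and replay the split-off track on 
it. The device name "user2" and the lookup are only examples, and error 
handling is omitted:

#include <string.h>
#include <X11/extensions/XInput.h>
#include <X11/extensions/XInput2.h>
#include <X11/extensions/XTest.h>

static XDevice *
create_second_pointer(Display *dpy)
{
    XIAddMasterInfo add;
    XIDeviceInfo *info;
    int i, ndevices;
    XDevice *dev = NULL;

    /* create a new master pointer/keyboard pair named "user2" */
    add.type = XIAddMaster;
    add.name = "user2";
    add.send_core = True;
    add.enable = True;
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *) &add, 1);

    /* the server names the new master "<name> pointer" */
    info = XIQueryDevice(dpy, XIAllMasterDevices, &ndevices);
    for (i = 0; i < ndevices; i++)
        if (info[i].use == XIMasterPointer &&
            strcmp(info[i].name, "user2 pointer") == 0)
            dev = XOpenDevice(dpy, info[i].deviceid);
    XIFreeDeviceInfo(info);
    return dev;
}

/* called by the toolkit for each update of the track it split off */
static void
replay_touch(Display *dpy, XDevice *dev, int x, int y)
{
    int axes[2] = { x, y };

    XTestFakeDeviceMotionEvent(dpy, dev, False /* absolute */, 0, axes, 2, 0);
    XFlush(dpy);
}

The round trip through the client and back into the server is exactly 
where the lag I mentioned would come from, which is why an in-server split 
(the intermediate solution) might be preferable.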