Enabling multitouch in input-evdev

Bradley T. Hughes bradley.hughes at nokia.com
Mon Jan 18 23:42:47 PST 2010


On 01/18/2010 04:33 AM, ext Peter Hutterer wrote:
> On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
>> On 01/12/2010 12:03 PM, ext Peter Hutterer wrote:
>>>> So, first question: is my behavior the good one? (not being
>>>> compliant with Windows or MacOS)
>>>
>>> Short answer - no. Long answer - sort-of.
>>>
>>> Multitouch in X is currently limited by the lack of multitouch events in the
>>> protocol. What you put into evdev is a way around it to get multitouch-like
>>> features through a multipointer system. As Bradley said, it is likely better
>>> for the client-side to include the lot in a single event.  Since X
>>> essentially exists to make GUI applications easier (this may come as a
>>> surprise to many), I'd go with his stance.
>>>
>>> However, this is the harder bit and would require changing the driver, parts
>>> of the X server's input system, the protocol and the libraries. It'd be
>>> about as wide-reaching as MPX, though I hope that there is significantly less
>>> rework needed in the input subsystem now.
>>
>> Why do you think it would require protocol changes? For the new
>> event type? If I understand it correctly, events for devices can
>> contain any number of valuators... is it possible to have x1,y1
>> x2,y2 x3,y3 and so-on?
>
> correct, there's a valuator limit of 36 but even that should be fine for a
> single device. with axis labelling it's now even possible to simply claim:
> hey, here's 24 axes but they represent 12 different touchpoints.
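
A client could then unpack such an event along these lines. This is a rough,
untested sketch: it assumes the axes are labelled in x/y pairs in touchpoint
order, and MAX_TOUCHPOINTS is just a name I made up for illustration.

  #include <X11/extensions/XInput2.h>

  #define MAX_TOUCHPOINTS 12

  struct touchpoint {
      double x, y;
      int present;
  };

  /* Collect per-touch coordinates from a single XI2 device event.
   * Assumption: axes 0/1 belong to touchpoint 0, axes 2/3 to
   * touchpoint 1, and so on. The caller zero-initializes pts. */
  static void unpack_touchpoints(XIDeviceEvent *ev,
                                 struct touchpoint pts[MAX_TOUCHPOINTS])
  {
      double *val = ev->valuators.values;
      int i;

      for (i = 0; i < MAX_TOUCHPOINTS * 2 &&
                  i < ev->valuators.mask_len * 8; i++) {
          if (!XIMaskIsSet(ev->valuators.mask, i))
              continue; /* axis not present in this event */
          if (i % 2 == 0)
              pts[i / 2].x = *val;
          else
              pts[i / 2].y = *val;
          pts[i / 2].present = 1;
          val++; /* values[] only holds entries for set mask bits */
      }
  }

Since values[] only carries the axes whose mask bit is set, partial updates
(only some touchpoints moving between events) fall out of this naturally.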

> I hadn't really thought about this approach yet because IMO touch is more
> than just simply pairs of coordinates and that's what I'd eventually like to
> get in.

Understood.

> As an intermediate option your approach would definitely work, it'd
> be easy to integrate and should hold up with the current system.

That was my thinking as well.

> bonus point is also that the core emulation automatically works on the first
> touchpoint only, without any extra help.

This was part of my thinking as well, along with not having to wrap my head
around how to deal with multiple implicit grabs in the presence of multiple
events.

> And whether more is needed (i.e. complex touch events) is something that can
> be decided later when we have a bit more experience on what apps may need.
> Stephane, do you have opinions on this?

I agree. I do know of people who have built their own multi-touch tables
and are interested in multi-user interactions, so I suspect that we will see
interest in this eventually.

>> From what I can tell, there are a number of challenges that would
>> require driver and server changes. In particular, being able to do
>> collaborative work on a large multi-touch surface requires the
>> driver (or server, not sure which makes most sense) to be able to
>> somehow split the touch points between multiple windows. This is
>> something that we had to do in Qt at least.
>
> packing all coordinates into one event essentially makes it a single
> point with auxiliary touchpoints. This is useful for a number of things
> (most gestures that we see today like pinch and rotate should work in this
> approach) but not useful once you have truly independent data from multiple
> hands or users. that's when you have to do more flexible picking in the
> server and that requires more work.
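
That matches our experience. Just to illustrate what I mean by splitting the
touch points, the routing boils down to something like the sketch below.
Every type and helper here is invented for illustration; nothing like them
exists in the server today.

  #include <stddef.h>

  /* Hypothetical stand-ins for server-side machinery. */
  struct touch { double x, y; int id; };
  struct window;
  struct window *pick_window_at(double x, double y);           /* assumed */
  void deliver_touch_event(struct window *w, struct touch *t); /* assumed */

  /* Route each touchpoint to the window under it, instead of
   * delivering every point to wherever the first touch landed. */
  static void route_touches(struct touch *touches, size_t n)
  {
      size_t i;
      for (i = 0; i < n; i++) {
          struct window *w = pick_window_at(touches[i].x, touches[i].y);
          if (w)
              deliver_touch_event(w, &touches[i]);
      }
  }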
>
> given the easy case of a single user interacting with a surface:
> with Qt as it is now, if you get all the coordinates in a single XI2 event,
> would that work for you? or is there extra information you need?

That should work. Ideally I would like to also get some kind of indication
from the device that it is a touch device, and what kind of touch device it
is (is it a touchscreen or a touchpad, for example? We treat them slightly
differently).
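
If such a hint were ever exposed as, say, a device property, the client-side
check could look roughly like this. This is entirely hypothetical; the
"Touch Device Type" property name is made up for illustration and no such
property is standardized.

  #include <string.h>
  #include <X11/Xlib.h>
  #include <X11/Xatom.h>
  #include <X11/extensions/XInput2.h>

  /* Ask the (hypothetical) "Touch Device Type" property whether the
   * device is a touchscreen. Returns 0 if the property is absent. */
  static int is_touchscreen(Display *dpy, int deviceid)
  {
      Atom prop = XInternAtom(dpy, "Touch Device Type", True);
      Atom type;
      int format, result = 0;
      unsigned long nitems, bytes_after;
      unsigned char *data = NULL;

      if (prop == None)
          return 0;

      if (XIGetProperty(dpy, deviceid, prop, 0, 16, False, XA_STRING,
                        &type, &format, &nitems, &bytes_after,
                        &data) == Success && data) {
          result = (type == XA_STRING && format == 8 &&
                    nitems == 11 && /* 11 == strlen("touchscreen") */
                    memcmp(data, "touchscreen", 11) == 0);
          XFree(data);
      }
      return result;
  }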

-- 
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway

