Another approach to multitouch handling

Peter Hutterer peter.hutterer at who-t.net
Sun Jun 6 21:26:35 PDT 2010


On Wed, Jun 02, 2010 at 04:40:34PM +0200, Carlos Garnacho wrote:
> I've been discussing with Peter Hutterer the suitability of the
> "touchpoints as multiple valuators" approach, and how it could (IMHO)
> delay adoption of anything related to multitouch in the short/mid term.
 
[...]

> =The proposal=
> 
>         The multitouch-capable hw device would have a main device
>         created, which is able to send core events and be attached to an
>         MD. The evdev driver would also create several floating devices
>         (one for each touchpoint), unable to send core events or to be
>         attached to an MD (I've disabled XI86_POINTER_CAPABLE for these,
>         but the server doesn't seem to honor that).
>         
>         The only purpose of the main device would be routing events from
>         one of the floating touchpoints. Whenever a new touch happens
>         and the main device isn't already routing events from another
>         touch, the events that such a touchpoint generates would be sent
>         through the main device instead.
>         
>         This means that there would be N+1 devices for N touchpoints, so
>         at least one of these devices wouldn't be sending events. This
>         makes touchpoints somewhat anonymous for multitouch purposes,
>         but the routed touchpoint would remain constant as long as it's
>         operating on the device (press -> ... -> release). This also
>         provides sane backwards compatibility: non-XI2 clients would
>         just see core events from the main device.
>         
>         I've been experimenting with this concept, and together with a
>         ~200 LOC patch to GTK+ master (master is already XI2 capable)
>         I've got things working out of the box, including hotplugging.
> 
> =The code=
> 
>         http://cgit.freedesktop.org/~carlosg/xf86-input-evdev/log/?h=multitouch-subdevs
>         
>         I've started from Benjamin's multitouch-subdevs branch for this
>         proof of concept.
> 
> Ideas? comments?

some more background here for others, because some of the discussion
happened in private email:

I've continuously failed to get a multitouch proposal together where touch
points act like pointers to clients. It always runs into the same walls, the
biggest one being that the transient nature of a touchpoint is quite
incompatible with many of the core protocol's assumptions.

For example, the only way the X server can legally break a grab is by
unmapping the window. That, combined with the race conditions exposed by a
client belatedly grabbing a pointer that isn't even there anymore, makes it
rather hard.
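
Concretely, a client that reacts to an event by grabbing the device the
event came from is inherently racy once devices are as short-lived as a
touchpoint. A rough XI2 sketch of that pattern (error handling and details
omitted, names illustrative):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

static void try_grab(Display *dpy, Window win, XIDeviceEvent *ev)
{
    XIEventMask mask;
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
    Status status;

    mask.deviceid = ev->deviceid;   /* the (possibly gone) touch device */
    mask.mask_len = sizeof(bits);
    mask.mask = bits;
    XISetMask(bits, XI_Motion);
    XISetMask(bits, XI_ButtonRelease);

    /* Racy: the device id was valid when the event was generated,
     * not necessarily when this request reaches the server. */
    status = XIGrabDevice(dpy, ev->deviceid, win, CurrentTime, None,
                          GrabModeAsync, GrabModeAsync, False, &mask);
    if (status != GrabSuccess) {
        /* the touch may have ended and the device may be gone already */
    }
}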

One of the reasons the current approach of stuffing MT data into valuators
was picked is that it is implementable right now and at least had some
positive reception.
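
For comparison, with the valuator approach a client gets one event per
device carrying all touchpoints as extra axes, roughly like the sketch
below. The axis labelling and ordering mentioned in the comments are
assumptions about how a driver would export the extra axes, not a settled
spec:

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

static void dump_mt_valuators(XIDeviceEvent *ev)
{
    double *value = ev->valuators.values;
    int i;

    for (i = 0; i < ev->valuators.mask_len * 8; i++) {
        if (!XIMaskIsSet(ev->valuators.mask, i))
            continue;
        /* Which touchpoint and axis valuator i represents has to be
         * looked up via the device's valuator labels
         * (XIQueryDevice + XIValuatorClassInfo.label), e.g. a pair of
         * "Abs MT Position X"/"Abs MT Position Y" axes per touch. */
        printf("valuator %d: %f\n", i, *value++);
    }
}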
 
Recently, I changed my requirements and figured that we may not need to have
MT core event support in the protocol, but can rather leave this up to the
toolkits. So instead of having core events from MT devices, we send MT
events down the wire and the MT-aware toolkit converts those into the
required callbacks.
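
Roughly, that toolkit-side conversion could look like the sketch below,
with each touchpoint showing up as its own (floating) device as in the
proposal quoted above. The callbacks and the subdevice check are
hypothetical toolkit internals:

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

/* hypothetical toolkit internals */
extern int  is_touch_subdevice(int deviceid);
extern void touch_begin(int id, double x, double y);
extern void touch_update(int id, double x, double y);
extern void touch_end(int id);

static void handle_xi2_event(XGenericEventCookie *cookie)
{
    XIDeviceEvent *ev = cookie->data;

    if (!is_touch_subdevice(ev->deviceid))
        return;   /* ordinary pointer/keyboard event, handled elsewhere */

    switch (cookie->evtype) {
    case XI_ButtonPress:
        touch_begin(ev->deviceid, ev->root_x, ev->root_y);
        break;
    case XI_Motion:
        touch_update(ev->deviceid, ev->root_x, ev->root_y);
        break;
    case XI_ButtonRelease:
        touch_end(ev->deviceid);
        break;
    }
}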

When I asked Carlos about this, he had already started the work above, which
overlaps to a large degree (though his implementation is different for
technical reasons).

The main concept that I think we'll eventually need here is twofold:
- A new device type (let's call it "Direct Input Device", DID) that does not
  require the abstraction between physical and virtual input device that we
  have with the MD/SD hierarchy. Unlike with a mouse, the spot where you
  interact on the physical device is where you want the interaction to
  happen.

- DIDs are _not_ core devices and thus only send XI2 events. This allows
  them to be transient, with the protocol crafted around their requirements.

The first DID could act like an MD and thus send core events, leaving
rudimentary single-touch capabilities. Because core otherwise falls away, we
can sidestep the grab handling on the whole lot.
What's not sorted out yet is sane keyboard handling. It most likely requires
the introduction of touch groups that share input focus between multiple
DIDs, and of course keyboards would then need to be attached to DIDs instead
of SDs, making it interesting for XI 2.0 clients.
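
If DIDs ever materialise, an XI 2.0 client would presumably see them as yet
another value of XIDeviceInfo.use next to masters and slaves. Nothing like
this exists today; XIDirectDevice below is a made-up placeholder purely to
show what clients would have to start coping with:

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

#define XIDirectDevice 6   /* hypothetical, not part of XI2 */

static void classify_devices(Display *dpy)
{
    int i, ndevices;
    XIDeviceInfo *info = XIQueryDevice(dpy, XIAllDevices, &ndevices);

    for (i = 0; i < ndevices; i++) {
        switch (info[i].use) {
        case XIMasterPointer:
        case XIMasterKeyboard:
        case XISlavePointer:
        case XISlaveKeyboard:
        case XIFloatingSlave:
            /* the existing MD/SD hierarchy */
            break;
        case XIDirectDevice:
            /* transient, XI2-only, no core events; clients would have
             * to cope with these appearing and disappearing per touch */
            break;
        }
    }
    XIFreeDeviceInfo(info);
}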

Carlos' implementation already gets us similar effects by using slave
devices instead of a new device type, but especially once we start thinking
about new event types, I believe DIDs might be the long-term solution.
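
For illustration, the routing in Carlos' proposal would look roughly like
this on the driver side; all names here (EvdevTouch, routed_touch, ...) are
made up for the sketch and are not the actual multitouch-subdevs code:

#include <xf86Xinput.h>

typedef struct {
    InputInfoPtr subdev;         /* floating per-touch device */
    Bool         active;         /* between press and release */
} EvdevTouch;

static EvdevTouch *routed_touch; /* touch currently owning the main device */

static void post_touch_motion(InputInfoPtr main_dev, EvdevTouch *touch,
                              int x, int y)
{
    /* a new touch claims the main device if it is free */
    if (!routed_touch && touch->active)
        routed_touch = touch;

    if (routed_touch == touch)
        /* routed touch: send through the main (core-capable) device */
        xf86PostMotionEvent(main_dev->dev, TRUE, 0, 2, x, y);
    else
        /* any other touch: send through its floating subdevice */
        xf86PostMotionEvent(touch->subdev->dev, TRUE, 0, 2, x, y);

    /* on release the main device becomes available again; the routing
     * stays fixed for the whole press -> ... -> release sequence */
    if (!touch->active && routed_touch == touch)
        routed_touch = NULL;
}

The point of the stickiness is exactly what the proposal describes: the
routed touchpoint stays constant for its lifetime, so core clients see a
consistent pointer.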

So yeah, I'm rather optimistic about this approach, though there are some
issues yet to be solved. If you want to chime in, please do so.
(Also, no code exists for the DID part yet; this is just hot air from my
side so far.)

Cheers,
  Peter

