multitouch
Matthew Ayres
solar.granulation at gmail.com
Mon Mar 1 04:09:41 PST 2010
On Mon, Mar 1, 2010 at 11:22 AM, Daniel Stone <daniel at fooishbar.org> wrote:
>
> On Mon, Mar 01, 2010 at 11:50:49AM +0100, Bradley T. Hughes wrote:
> > On 02/27/2010 02:25 PM, ext Matthew Ayres wrote:
> >> The impression I get from reading through this thread is that the
> >> simplest (and therefore possibly best) approach to grouping touch
> >> events is to group them according to which X window they intersect.
> >> That is, where a second touch event takes place in the same window as
> >> the first, it is part of the same master device; where it takes place
> >> elsewhere, it is another master device. I'm not sure why this would
> >> not be a useful assumption.
> >
> > I like this idea (and this is similar to what I did in Qt when trying to
> > determine context for a touch-point); the only concern is that Peter and
> > others have commented on how expensive it is to add/remove master devices.
>
> Not to mention the deeply unpleasant races -- unless you grab
> XGrabServer, which is prohibitively expensive and extremely anti-social.
I'm not sure the term 'race' is being used here in the sense I
understand it. My interpretation of the term does not, as far as I can
see, apply here. If someone could point me to documentation that
explains this type of race, I would appreciate it.
[snip]
>
> I still think a multi-level device hierarchy would be helpful, thus
> giving us 'subdevice'-alike behaviour. So if we were able to go:
> MD 1 ->
>     Touchpad 1 ->
>         Finger 1
>         Finger 2
>     Wacom 1 ->
>         Pen 1
>         Eraser 1
> MD 2 ->
>     Touchpad 2 ->
>         Finger 1
>
> and so on, and so forth ... would this be useful enough to let you take
> multi-device rather than some unpredictable hybrid?
This is roughly the kind of hierarchy I had intended to imply, but
there is a caveat. Touchpads and Wacom devices are clear cases of
single-user input, but a touch screen must be expected to support more
than one simultaneous user. This requires splitting its inputs
somehow.
> (What happens in the hybrid system when I get an event from finger 1,
> decide I like it, take out a grab, and then finger 2 presses on another
> window. Do I respect the event and give the app the finger 2 press it
> likely doesn't want, or break the grab and deliver it to another client?
> Neither answer is pleasant.)
This is another possible use of the sub-device hand-off I described
before (yes, I really meant sub-device rather than slave). Once again
it would be up to the application to decide whether or not it wants
this input and, if it does not, it can request that it be moved to
another device.
Advantage: this would enable gestures on small controls, such as
existing taskbar volume controls: touch the icon, then swipe a finger
nearby to adjust the volume?
Disadvantage: it creates latency, at best, if the new touch event (on
a screen, rather than on one of the above-mentioned devices) is not
intended as part of the same 'gesture'. At worst it creates
conceptually erroneous behaviour.
A related point: I've read, and assume it is still the case, that MPX
supports hotplugging. If that is so, is there really much difference
between that and creating a new master device when/if a new touch
event is determined to be a separate point of interaction? Would it
not be the case that the server 'hotplugs' a new device and routes the
input through it?
If this is too expensive, that just calls for attempts to streamline
the process.
> >> Here comes my arrogant proposal: Suppose that the client application
> >> determines, from a given gesture, that actually the new slave/whatever
> >> is trying to act as a separate master. I think it would be useful to
> >> provide a mechanism for the application to tell the server that, and to
> >> request that the slave be detached and made into a new master. Some
> >> negotiation would be needed of course, but it would be useful (for
> >> instance) if it turns out to be a second user trying to drag something
> >> from another user's window. So what I imagine would go something like
> >> this:
> >>
> >> Touch1 in WindowA (ApplicationX) = MD1 + SD1.
> >> Touch2 in WindowA (ApplicationX) = MD1 + SD2.
> >> ApplicationX determines that Touch2 wants to do something of its own.
> >> ApplicationX tells Xserver to make Touch2 into MD2 + SD1.
> >
> > This is probably possible just by using the techniques described by Peter
> > at http://who-t.blogspot.com/2009/06/xi2-recipies-part-2.html
>
> Rather.
Promising.
> >> So my apologies for butting in like that, but I felt I might as well
> >> say something.
> >
> > There's no need to apologize, is there? Discussions in the open like this
> > are done for exactly this reason, to invite input from others.
>
> Indeed. :)
Thanks :)