multitouch
Matthew Ayres
solar.granulation at gmail.com
Mon Mar 1 05:52:30 PST 2010
On Mon, Mar 1, 2010 at 12:55 PM, Daniel Stone <daniel at fooishbar.org> wrote:
> Hi,
>
> On Mon, Mar 01, 2010 at 12:09:41PM +0000, Matthew Ayres wrote:
>> On Mon, Mar 1, 2010 at 11:22 AM, Daniel Stone <daniel at fooishbar.org> wrote:
>> > Not to mention the deeply unpleasant races -- unless you grab
>> > XGrabServer, which is prohibitively expensive and extremely anti-social.
>>
>> I'm not sure I understand a race to mean what it is being used to mean
>> here. My interpretation of the term would not, as far as I can see,
>> apply here. If someone could point me to documentation that would
>> explain this type of race, I would appreciate it.
>
> See below for a concrete example ...
Thank you, I understand now.
>> This is roughly the kind of hierarchy I had intended to imply, but
>> there is a caveat. Touchpads and Wacom devices are clear cases of
>> single-user input, but a touch screen must be expected to support more
>> than one simultaneous user. This requires splitting its inputs
>> somehow.
>
> I'm not sure what you mean here?
What I now realise is that what I'm describing is a race condition.
Because we can't know how many distinct simultaneous tasks may be
intended on a multitouch screen (as opposed to more traditional
devices), we can't determine at what level to allow grabs (assuming
the multi-layered model).
>> This is another possible use of the sub-device hand-off I described
>> before (yes, I really meant sub-device rather than slave). Once again
>> it would be up to the application to decide whether or not it wants
>> this input and, if it does not, it can request that it be moved to
>> another device.
>>
>> Advantage: This would enable gestures on small controls, such as
>> existing taskbar volume controls: touch the icon, swipe a finger
>> nearby and that controls the volume?
>>
>> Disadvantage: It creates latency, at best, if the new touch event (on
>> a screen, rather than one of the above-mentioned devices) is not
>> intended as part of the same 'gesture'. At worst it creates
>> conceptually erroneous behaviour.
>
> Right. So now imagine the following happens:
> * first finger pressed over window A
> * server delivers event to client C
> * client C: 'ooh hey this could trigger gesture events, give me
> everything'
> * server: okay, cool!
> * second finger pressed over window B
> * but the server notifies client C (window A's client) due to the grab
>
> Now imagine this scenario:
> * first finger pressed over window A
> * server delivers event to client C
> * (client is scheduled out or otherwise busy ...)
> * second finger pressed over window B
> * server delivers event to client D
> * client C: 'oooh hey, that first finger could trigger gesture events,
> give me everything!'
> * meanwhile the volume isn't changed and you've just clicked something
> wholly unrelated on another window; hopefully it's not destructive
>
> Adding additional layers of complexity, uncertainty and unpredictability
> makes this much worse than it needs to be.
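Daniel's second timeline can be modelled in a few lines. The sketch below is a
hypothetical simulation (class and method names are illustrative, nothing here
is real X11 API): once a grab is in place every event goes to the grabber, but
a grab that arrives late cannot recall events the server has already routed
elsewhere.

```python
# Hypothetical model of the "late grab" race. Not real X server code:
# Server, touch() and grab() are invented names for illustration only.

class Server:
    def __init__(self):
        self.grab_owner = None   # client currently holding the device grab
        self.log = []            # record of (event, client) deliveries

    def touch(self, event, window_client):
        # With a grab in place, every event goes to the grabbing client,
        # regardless of which window is under the touch.
        target = self.grab_owner or window_client
        self.log.append((event, target))

    def grab(self, client):
        self.grab_owner = client

# Client C is scheduled out, so its grab request arrives too late:
srv = Server()
srv.touch("finger1", "C")   # first finger over window A -> client C
srv.touch("finger2", "D")   # second finger over window B -> client D
srv.grab("C")               # C finally asks for everything
print(srv.log)              # finger2 has already gone to client D
```

The grab changes routing only for events that arrive after it; the second
finger's click on window B has already happened, which is exactly the
"hopefully it's not destructive" case.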
Yes, I see what you mean and it's certainly a problem. The only
solution that I can see, without sacrificing flexibility, introduces
the latency that I alluded to previously. Assuming that fingers 1 and
2 are both child-devices of the same parent, the solution I see is to
enact a pause. Wait for client C to respond (with a timeout) before
continuing to process any input. I don't like the idea, but it would
at least reduce the risk of races.
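The pause idea can be sketched roughly as follows, under the assumption that
the server can queue further touches while waiting for client C's answer. All
names here are hypothetical; this models the policy being proposed, not any
real server code.

```python
# Sketch of the proposed "pause": hold queued events until client C
# answers (grab / no-grab) or a timeout expires. Purely illustrative.

import queue
import threading

def route_with_hold(pending, decision, timeout=0.05):
    """Hold queued (event, window_client) pairs until `decision` is set,
    then route them all.

    decision: threading.Event set by client C's (hypothetical) reply.
    If the timeout expires first, fall back to per-window delivery.
    """
    grabbed = decision.wait(timeout)   # the latency cost is paid here
    delivered = []
    while not pending.empty():
        event, window_client = pending.get()
        target = "C" if grabbed else window_client
        delivered.append((event, target))
    return delivered

pending = queue.Queue()
pending.put(("finger2", "D"))          # second touch arrives meanwhile
decision = threading.Event()

# Client C replies "grab" before the timeout, so finger2 goes to C:
threading.Timer(0.01, decision.set).start()
print(route_with_hold(pending, decision))
```

The timeout bounds the damage when client C never answers, but every touch in
the hold window pays the full wait, which is the latency objection above.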
If client C responds and doesn't want to grab the parent/master (if
that's even feasible) then the second event could be forwarded to
client D. But I really don't see that we can reliably move device
grabs to sub-devices, because it looks like a risk to existing
software. Master devices have always been the targets of grabs. This
is why I proposed handing off to hotplugged MDs.
If the master device is grabbed by a legacy application, it breaks
functionality of later applications that understand the multi-layered
model.
[snip]
> Can you guys (Bradley, Peter, Matthew) think of any specific problems
> with the multi-layered model? Usecases as above would be great, bonus
> points for diagrams. :)
My above remarks are regarding the multi-layered model ;) I really
don't see how it would resolve (what I now realise to be) race
conditions, especially in the presence of legacy software.
I may have to put together some diagrams, exploring the multi-layered
model (along with its problems with backward compatibility) and my own
previous proposal. Unfortunately my laptop hates UML.
>> A related point: I've read, and assume it is still the case, that
>> MPX supports hotplugging. Now if this is the case, is there really
>> much difference between that and creating a new master device
>> when/if a new touch event is determined to be a separate point of
>> interaction? Would it not be the case that the server 'hotplugs' a
>> new device and routes the input through it?
>
> It's pretty much exactly the same, yeah.
>
>> If this is too expensive, it just calls for attempts to streamline the process.
>
> Well, there's just not a lot we can do to streamline it. We could beef
> up some of the events and eliminate roundtrips, but fundamentally the
> problem is that it requires the client to grab the device after it's
> created, and the latency here can be entirely arbitrary. When you're
> targeting a few milliseconds _at most_ for event delivery from kernel
> to client, this becomes impossible.
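The arbitrary-latency window Daniel describes can be sketched as follows
(illustrative names only; no real X11 calls): every event that arrives before
the client's grab on the newly hotplugged master lands is routed by the
ordinary per-window rules, so the misrouting window is exactly the client's
scheduling latency.

```python
# Rough model of the hotplug-then-grab problem: events keep flowing
# while the client's grab request on the new master device is still
# in flight. Names are invented for illustration.

def deliver(events, grab_arrives_at):
    """Route each (timestamp, event). Before the grab lands, events on
    the new master fall through to whatever window is under them."""
    routed = []
    for t, ev in events:
        target = "gesture-client" if t >= grab_arrives_at else "floor"
        routed.append((ev, target))
    return routed

# Kernel-to-client budget is a few ms; the grab can take arbitrarily long.
events = [(1, "touch-begin"), (3, "touch-update"), (9, "touch-update")]
print(deliver(events, grab_arrives_at=5))
# Everything before t=5 went to the wrong place.
```

Streamlining roundtrips shrinks `grab_arrives_at` but cannot bound it, since a
busy client can be scheduled out indefinitely.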
Good point.