multitouch
Simon Thum
simon.thum at gmx.de
Mon Feb 8 03:25:28 PST 2010
>> A gesture recogniser instance will be mandatory. However, a client that
>> modifies the list of input devices on demand and quite frequently hopefully
>> won't. Benjamin's approach puts quite a load on the server and on all
>> clients (presence events are sent to every client), IMO unnecessarily.
>
> why should one be at the xi2 event level? i'm dubious of this. i've thought it
> through a lot - you want gesture recognition happening higher up in the toolkit
> or app. you need context - does that gesture make sense. if one gesture was
> started but it ended in a way that the gesture changed, you need to cancel the
> previous action etc. imho multitouch etc. should stick to delivering as much
> info as the HW provides as cleanly and simply as possible via xi2 with
> minimal interruption of existing app functionality.
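The cancel problem described above is worth making concrete. A minimal sketch of a toolkit-level recognizer that fires a provisional action and then has to undo it when later events reinterpret the gesture (all names here are invented for illustration; this is not XI2 API):

```c
/* Hypothetical toolkit-side gesture recognizer. A tap action is fired
 * optimistically on touch begin; if movement exceeds a slop threshold
 * the gesture turns out to be a drag, and the provisional tap must be
 * cancelled. Only the toolkit/app has the context to undo it. */
#include <assert.h>

typedef enum { G_NONE, G_TAP_PENDING, G_DRAG } GState;

typedef struct {
    GState state;
    double x0, y0;      /* position at touch begin */
    double slop;        /* movement threshold: tap vs. drag */
    int tap_fired;      /* provisional tap action currently active */
    int tap_cancelled;  /* provisional action had to be undone */
} Recognizer;

static void touch_begin(Recognizer *r, double x, double y) {
    r->state = G_TAP_PENDING;
    r->x0 = x; r->y0 = y;
    r->tap_fired = 1;   /* optimistic: give immediate feedback */
}

static void touch_update(Recognizer *r, double x, double y) {
    double dx = x - r->x0, dy = y - r->y0;
    if (r->state != G_TAP_PENDING)
        return;
    if (dx * dx + dy * dy > r->slop * r->slop) {
        /* The gesture changed meaning: cancel the provisional tap
         * and reinterpret as a drag. */
        r->tap_fired = 0;
        r->tap_cancelled = 1;
        r->state = G_DRAG;
    }
}

static void touch_end(Recognizer *r) {
    r->state = G_NONE;
}
```

A server-side recognizer could not do the cancel step meaningfully, since the provisional action (selection, button press feedback, etc.) lives in the client.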
FWIW, when I last thought this through, I also leaned towards an
approach that would roughly resemble the composite extension, but for
input. As with other hook-based things, it would let best practices
emerge over time.
> but we have a problem now... we only have master and slave. we need N levels. i
> need on a collaborative table:
>
>              person
>             /      \
>         hand        hand
>        / | \       / | \
>  finger finger  finger finger
>      finger        finger
>
> in the end... n levels is likely going to be needed. we can flatten this sure,
> but in the end you will not be able to anymore. :(
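The quoted N-level hierarchy could be sketched as a recursive device tree, where today's master/slave model is just the special case of depth 2 (all names invented; not XI2 API):

```c
/* Hypothetical N-level input device hierarchy:
 * person -> hand -> finger, as in the collaborative-table example. */
#include <assert.h>
#include <stddef.h>

#define MAX_CHILDREN 8  /* fixed bound, for sketch purposes only */

typedef struct Device {
    const char *name;
    struct Device *parent;                 /* NULL for the root */
    struct Device *children[MAX_CHILDREN];
    int n_children;
} Device;

static void attach(Device *parent, Device *child) {
    child->parent = parent;
    parent->children[parent->n_children++] = child;
}

/* Depth of a node: 0 for the root ("master"), 2 for a finger
 * under a hand under a person. */
static int depth(const Device *d) {
    int n = 0;
    while (d->parent) { d = d->parent; n++; }
    return n;
}

/* Count leaves (the actual touchpoints) under a node. */
static int leaf_count(const Device *d) {
    if (d->n_children == 0)
        return 1;
    int total = 0;
    for (int i = 0; i < d->n_children; i++)
        total += leaf_count(d->children[i]);
    return total;
}
```

Flattening this to two levels loses exactly the information `depth()` carries: which fingers belong to which hand, and which hands to which person.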
+1
But you need a specific level corresponding to a master device/virtual
input. BTW, having special group start/end events in the stream is one
possible implementation ;)
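To illustrate the group start/end idea: the server could keep the event stream flat and bracket related devices with group markers, letting clients recover arbitrary nesting without N device levels in the protocol (event names invented; not XI2):

```c
/* Hypothetical flat event stream with group brackets. A stream like
 *   GROUP_START(person) GROUP_START(hand) TOUCH TOUCH GROUP_END
 *   GROUP_START(hand) TOUCH GROUP_END GROUP_END
 * encodes the person/hand/finger hierarchy implicitly. */
#include <assert.h>

typedef enum { EV_GROUP_START, EV_TOUCH, EV_GROUP_END } EvType;

typedef struct { EvType type; int touch_id; } Event;

/* Maximum nesting depth of groups in the stream,
 * or -1 if the brackets are unbalanced. */
static int max_group_depth(const Event *ev, int n) {
    int depth = 0, max = 0;
    for (int i = 0; i < n; i++) {
        if (ev[i].type == EV_GROUP_START) {
            if (++depth > max) max = depth;
        } else if (ev[i].type == EV_GROUP_END) {
            if (--depth < 0) return -1;  /* END without START */
        }
    }
    return depth == 0 ? max : -1;        /* unclosed group */
}
```

The trade-off versus explicit N-level devices: the wire format stays simple, but clients that want the hierarchy must track bracket state themselves.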
(Feel free to ignore me anyway; the project I did this for never really
got started, so I'm just trying not to let it die in vain. It was also
quite a different use case, and I no longer have a real interest.)
Cheers,
Simon