Multitouch followup: gesture recognition?
Florian Echtler
floe at butterbrot.org
Fri Apr 2 01:52:28 PDT 2010
> >>> Just specifying what gestures a specific window would be interested in
> >>> wouldn't usually be "live", would it? That's something defined at
> >>> creation time and maybe changed occasionally over the lifetime, but not
> >>> constantly.
> >> Which is why a declarative approach is OK for that. It's the dynamics
> >> that make it harder. More specifically, the dynamic part of your
> >> formalism likely needs tailored requests.
> > The reason for this being that the special client won't be notified of
> > property changes on other client windows, correct?
> Not quite, the sgc could probably register for prop changes. By
> 'dynamics' I was referring to cancelling a gesture or other gesture
> state feedback a client may want to send. Props aren't good for that,
> but requests are.
> In requests, you're free to define semantics, whereas props are limited
> and quite racy.
OK, I see. I'll stay with properties for the first attempt (the protocol
used in my userspace lib doesn't require any such realtime callbacks
right now), and I'll blatantly ignore _any_ performance-related issues
in the prototype, just to get a general feel for the problem.
> >> If you want to try a special client, it's therefore sensible to divide
> >> your requests and events into route-through (client -> special gesture
> >> client or sgc -> client) and server-processed (server->sgc or sgc->
> >> server), if possible.
> > As far as I understand the architecture, everything except the plain
> > input events would be just routed through the server between the two
> > clients. In fact, I guess that after defining some custom events in
> Yes, part of the idea is that the server provides only the
> infrastructure. Routing, simple state tracking, somesuch.
Good - seems I've finally understood that part :-)
> > inputproto, it should be possible to send them through
> > XSend{Extension}Event?
> At first glance it looks suitable, but I'm not convinced it is
> appropriate. You'll want the server to select which clients get events,
> as is done with Xi event masks. This way, the gesture client doesn't
> need to know about all the windows out there.
> Also, I recall Xi2 and Xi1 (XSendExtensionEvent) shouldn't be mixed.
I've had a brief look at the code in libXi, and AFAICT there's nothing
to prevent this from working with any custom event, as long as
_XiEventToWire is adapted, too. Peter, maybe you could comment on this?
> > // select motion and button events for the entire screen
> > XIEventMask mask;
> > mask.deviceid = XIAllDevices;
> > mask.mask_len = XIMaskLen( XI_LASTEVENT );
> > mask.mask = (unsigned char*)calloc( mask.mask_len, sizeof(char) );
> >
> > XISetMask( mask.mask, XI_Motion );
> > XISetMask( mask.mask, XI_ButtonPress );
> > XISetMask( mask.mask, XI_ButtonRelease );
> >
> > XISelectEvents( display, DefaultRootWindow(display), &mask, 1 );
> > free( mask.mask );
> >
> > to capture all XInput events, however, I believe that's also quite
> > flawed. What other options exist?
> To me it seems sane.
> This replication of all input is one of the reasons for the 'special' in
> 'special gesture client'. Whatever it shall be, it should probably be
> part of Xi2. What leads you to think the above is flawed?
The main reason why this code isn't yet sufficient IMHO is that I
haven't yet found out how to get some additional data from the received
events, particularly
a) which client window the event is actually targeted at and
b) what the position in window-relative coordinates is.
These are probably related; can you give me a hint on how to retrieve
this information?
Florian
--
0666 - Filemode of the Beast
More information about the xorg-devel
mailing list