<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html;charset=ISO-8859-1" http-equiv="Content-Type">
<title></title>
</head>
<body bgcolor="#ffffff" text="#000000">
Peter Hutterer wrote:
<blockquote cite="mid:20100118033336.GJ2439@barra.conf.lca2010.org.nz"
type="cite">
<pre wrap="">On Thu, Jan 14, 2010 at 11:23:23AM +0100, Bradley T. Hughes wrote:
</pre>
<blockquote type="cite">
<pre wrap="">Why do you think it would require protocol changes? For the new
event type? If I understand it correctly, events for devices can
contain any number of valuators... is it possible to have x1,y1
x2,y2 x3,y3 and so-on?
</pre>
</blockquote>
<!---->">
<pre wrap="">
correct, there's a valuator limit of 36 but even that should be fine for a
single device. with axis labelling it's now even possible to simply claim:
hey, here's 24 axes but they represent 12 different touchpoints.
I hadn't really thought about this approach yet because IMO touch is more
than just simply pairs of coordinates and that's what I'd eventually like to
get in. As an intermediate option your approach would definitely work, it'd
be easy to integrate and should hold up with the current system.
bonus point is also that the core emulation automatically works on the first
touchpoint only, without any extra help.
And whether more is needed (i.e. complex touch events) is something that can
be decided later when we have a bit more experience on what apps may need.
Stephane, do you have opinions on this?
</pre>
</blockquote>
I agree we can have multiple XI2 valuators to keep the touches packed.
However, how can we tell the toolkit that a touch started, ended, or was
aborted (I think this is the behaviour on Mac OS X, but I'm not sure)? I
don't think we can rely on button events, as the other layers would
receive button up/down events that have no meaning. <br>
<br>
Maybe a solution would be to have a third valuator for each touch
carrying its tracking id, for instance:<br>
-1 means no value<br>
-2 means error (or aborted)<br>
&gt; 0 means started and active<br>
<br>
<blockquote cite="mid:20100118033336.GJ2439@barra.conf.lca2010.org.nz"
type="cite">
</pre>">
<blockquote type="cite">
<pre wrap="">From what I can tell, there are a number of challenges that would
require driver and server changes. In particular, being able to do
collaborative work on a large multi-touch surface requires the
driver (or server, not sure which makes most sense) to be able to
somehow split the touch points between multiple windows. This is
something that we had to do in Qt at least.
</pre>
</blockquote>
<!---->">
<pre wrap="">
packing all coordinates into one event essentially makes it a single
point with auxiliary touchpoints. This is useful for a number of things
(most gestures that we see today like pinch and rotate should work in this
approach) but not useful once you have truly independent data from multiple
hands or users. that's when you have to do more flexible picking in the
server and that requires more work.
given the easy case of a single user interacting with a surface:
with Qt as it is now, if you get all the coordinates in a single XI2 event,
would that work for you? or is there extra information you need?
Cheers,
Peter
_______________________________________________
xorg-devel mailing list
<a class="moz-txt-link-abbreviated" href="mailto:xorg-devel@lists.x.org">xorg-devel@lists.x.org</a>
<a class="moz-txt-link-freetext" href="http://lists.x.org/mailman/listinfo/xorg-devel">http://lists.x.org/mailman/listinfo/xorg-devel</a>
</pre>
</blockquote>
On the question of multiple users: if we consider that the toolkit has
to do the gesture handling itself, it could also control a new cursor
via XTest. However, that would introduce some lag into the loop. Maybe
we want an intermediate solution: the toolkit specifies which track has
to be split off from the rest of the events into a new device (in the
same way my patch works). It's just an idea.<br>
</body>
</html>