<br><br><div class="gmail_quote">On Mon, Mar 1, 2010 at 3:26 PM, Matthew Ayres <span dir="ltr"><<a href="mailto:solar.granulation@gmail.com">solar.granulation@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div class="gmail_quote"><div class="im">On Mon, Mar 1, 2010 at 3:05 PM, Bradley T. Hughes <span dir="ltr"><<a href="mailto:bradley.hughes@nokia.com" target="_blank">bradley.hughes@nokia.com</a>></span> wrote:<br></div>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div><div class="im">On 03/01/2010 03:34 PM, ext Daniel Stone wrote:<br>
</div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;"><div class="im">
Hi,<br>
<br>
On Mon, Mar 01, 2010 at 02:56:57PM +0100, Bradley T. Hughes wrote:<br>
</div><div class="im"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<br>
This is where the context confusion comes in. How do we know what the<br>
user(s) is/are trying to do solely based on a set of x/y/z/w/h<br>
coordinates? In some cases, a single device with multiple axes is enough,<br>
but in other cases it is not.<br>
</blockquote>
<br>
Sure. But in this case you don't get any extra information from having<br>
multiple separate devices vs. a single device. The only difference --<br>
aside from being able to direct events to multiple windows -- is the<br>
representation.<br>
</div></blockquote>
<br></div><div class="im">
Correct. However, I think that being able to direct events to multiple windows is the main reason we're having this particular discussion. How do we do it, given the current state of the art?<br></div></blockquote><div>
<br>
This question made me feel like I was at an ice cream stall, trying to pick a flavour I like that doesn't have too many bugs in it :P <br><br><br></div><div class="im"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div><div><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
If the hardware is intelligent enough to be able to pick out different<br>
fingers, then cool, we can split it all out into separate focii and it's<br>
quite easy.<br>
</blockquote>
<br></div></div><br>
I don't think hardware is that intelligent... yet. I forget the name of the program (not CCV as far as I know), but there does exist a program that implements the TUIO protocol WITH support for object IDs. It can do object recognition under special circumstances by looking for and identifying infrared reflectors placed on the table's surface (these reflectors are often attached to an object). Programs could then map these object IDs to something meaningful (object ID 5, mapped to "Brad's phone", could sync my email, for example). I don't know of anything that tries to identify individual fingers, though.</blockquote>
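The object-ID-to-action mapping described above can be sketched as a simple dispatch table. This is purely illustrative: the ID, the device name, and the `sync_email` handler are hypothetical examples, not part of TUIO or any real driver API.

```python
# Hypothetical sketch: mapping TUIO object IDs (as reported for
# recognized fiducial markers) to application-level actions.

def sync_email():
    # Stand-in for a real action; here it just reports what it did.
    return "syncing email"

# Object ID 5 -> "Brad's phone" -> sync email, as in the example above.
OBJECT_ACTIONS = {
    5: ("Brad's phone", sync_email),
}

def on_tuio_object(object_id):
    """Dispatch a recognized object ID to its mapped action."""
    entry = OBJECT_ACTIONS.get(object_id)
    if entry is None:
        return None  # unrecognized marker: ignore it
    name, action = entry
    return name, action()
```

A caller would invoke `on_tuio_object(5)` when the tracker reports that marker, and ignore IDs with no mapping.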
<div> </div></div><div>reacTIVision. My very involvement here is a result of wanting to use reacTIVision's fiducial markers in MPX. I consider the availability of fiducial tracking vital, and I imagine each registered fiducial being slaved to a unique master device (MD).<br>
<br>I have high hopes for Ryan Huffman's xf86-input-tuio driver and am looking forward to the inclusion of certain features to ease this behaviour.<br><br></div><div class="im"><blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
<div>
<br>
<blockquote class="gmail_quote" style="margin: 0pt 0pt 0pt 0.8ex; border-left: 1px solid rgb(204, 204, 204); padding-left: 1ex;">
Failing that, how are we supposed to do it? Say two people have a<br>
logical button press active (mouse button, finger down, pen down,<br>
whatever) at once. Now a third button press comes along ... what do we<br>
do? Is it a gesture related to one of the two down? If so, which one<br>
(and which order do we ask them in, etc). A couple of years ago we<br>
still could've guessed, but as Qt and GTK are now doing client-side<br>
windows, it's really hard to even make a _guess_ in the server.<br>
</blockquote>
<br></div>
Right, and this was Peter's point... the X server can't know it and shouldn't try to guess. What I did in Qt was to deliver the 3rd touch point together with its closest neighbor (if the 3rd touch point was not over a widget explicitly asking for touch events, that is).</blockquote>
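The "deliver with its closest neighbor" heuristic mentioned above can be sketched as a nearest-neighbor search over the active touch points. The tuple layout and function name are assumptions for illustration, not Qt's actual internal code.

```python
import math

# Hedged sketch: when a new touch point arrives over a widget that did
# not explicitly ask for touch events, group it with the nearest
# already-active touch point. Points are (id, x, y) tuples.

def closest_neighbor(new_point, active_points):
    """Return the active touch point nearest to new_point, or None."""
    nx, ny = new_point[1], new_point[2]
    best, best_dist = None, float("inf")
    for point in active_points:
        d = math.hypot(point[1] - nx, point[2] - ny)
        if d < best_dist:
            best, best_dist = point, d
    return best

active = [(1, 100.0, 100.0), (2, 500.0, 400.0)]
third = (3, 120.0, 90.0)
print(closest_neighbor(third, active))  # (1, 100.0, 100.0)
```

The third touch point would then be delivered to whichever window/widget owns the returned neighbor.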
</div><div><br>To me this almost sounds like saying that touch events should be handled no differently from mouse events, but that doesn't seem right. A mouse is always present; it always has a position. A touch-sensitive slave/physical device may always be attached, but unless something is touching it, isn't it essentially absent?<br>
</div></div>
</blockquote></div><br>