multitouch

Bradley T. Hughes bradley.hughes at nokia.com
Tue Jan 19 00:08:47 PST 2010


On 01/18/2010 11:54 PM, ext Carsten Haitzler (The Rasterman) wrote:
> hey guys (sorry for starting a new thread - i only just subscribed - lurking on
> xorg as opposed to xorg-devel).
>
> interesting that this topic comes up now... multitouch. i'm here @ samsung and
> got multi-touch capable hardware - supports up to 10 touchpoints, so need
> support.
>
> now... i read the thread. i'm curious. brad - why do u think a single event (vs
> multiple) means less context switches (and thus less power consumption, cpu
> used etc.)?

Even though the events may be buffered (like you mention), there's no 
guarantee that they will fit nicely into the buffer. I'm not saying that this 
will always be the case, but I can foresee the need to write code that scans 
the existing event queue, possibly flushes and rereads, scans again, etc. to 
ensure that the client did actually get all of the events that it was 
interested in.
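
The kind of scanning I have in mind would look roughly like the sketch below 
(untested, assuming an open Display *dpy; handle_event() is just a stand-in 
for whatever dispatch the client actually does):

/* Drain everything Xlib has already read (or can read without blocking)
 * before going back to sleep.  XPending() is equivalent to
 * XEventsQueued(dpy, QueuedAfterFlush): it flushes our requests and pulls
 * in whatever is waiting on the socket, but it never blocks, so there is
 * still no guarantee that every touch point of one "frame" has arrived. */
while (XPending(dpy) > 0) {
    XEvent ev;
    XNextEvent(dpy, &ev);
    handle_event(&ev);   /* stand-in for the client's own dispatch code */
}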

There's also the fact that the current approach that Benjamin suggested 
requires an extra client to manage the slave devices.

I don't have any raw data or anything to back up my claims of course, I'm 
just making observations.

> as such your event is delivered along with possibly many others in a buffer - x
> protocol is buffered and thus read will read as much data as it can into the
> buffer and process it. this means your 2, 3, 4, 5 or more touch events should
> get read (and written from the server side) pretty much all at once and get put
> into a single buffer, then XNextEvent will just walk the buffer processing the
> events. even if by some accident they dont end up in the same read and buffer
> and you do context switch, you wont save battery as the cpu will have never
> gone idle enough to go into any low power mode.

 > but as such you should be
> seeing all these events alongside other events (core mousepress/release/motion
> etc. etc. etc.). so i think the argument for all of it in 1 event from a
> power/cpu side i think is a bit specious.

The power savings are probably going to be minimal, I agree. My argument has 
mostly been about the convenience of having a single blob (based on 
experiments we did early last year).

> but... do you have actual data to
> show that such events actually dont get buffered in x protocol as they should
> be and dont end up getting read all at once? (i know that my main loop will
> very often read several events from a single select wakeup before going back to
> sleep, as long as the events come in faster than they can be acted on as they
> also get processed and batched into yet another queue before any rendering
> happens at the end of that queue processing).

I don't have any data related to X11 on this, no.

> but - i do see that if osx and windows deliver events as a single blob for
> multiple touches, then if we do something different, we are just creating work
> for developers to adapt to something different. i also see the argument for
> wanting multiple valuators deliver the coords of multiple fingers for things
> like pinch, zoom, etc. etc. BUT this doesnt work for other uses - eg virtual
> keyboard where i am typing with 2 thumbs - my presses are actually independent
> presses like 2 core pointers in mpx.
 >
> so... i think the multiple valuators vs multiple devices for mt events is moot
> as you can argue it both ways and i dont think either side has specifically a
> stronger case... except doing multiple events from multiple devices works
> better with mpx-aware apps/toolkits, and it works better for the more complex
> touch devices that deliver not just x,y but x, y, width, height, angle,
> pressure, etc. etc. per point (so each point may have a dozen or more valuators
> attached to it), and thus delivering a compact set of points in a single event
> makes life harder for getting all the extra data for the separate touch events.

Indeed. There are cases where one is more convenient than the other and vice 
versa. This is what we struggled with for a while when doing the Qt API for 
multi-touch. In the end, we went with the single-blob approach and tagged each 
point in the blob with pressed/moved/released state (so that it's possible 
to cover both use cases).
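
For reference, a simplified sketch of what that looks like with the Qt 4.6 
API (TouchArea here is just an example widget, not something from Qt itself):

#include <QtGui>

class TouchArea : public QWidget
{
public:
    TouchArea() { setAttribute(Qt::WA_AcceptTouchEvents); }

protected:
    bool event(QEvent *event)
    {
        switch (event->type()) {
        case QEvent::TouchBegin:
        case QEvent::TouchUpdate:
        case QEvent::TouchEnd: {
            QTouchEvent *touch = static_cast<QTouchEvent *>(event);
            /* one event, one blob: every active point is in touchPoints() */
            foreach (const QTouchEvent::TouchPoint &point, touch->touchPoints()) {
                switch (point.state()) {
                case Qt::TouchPointPressed:  /* finger went down */ break;
                case Qt::TouchPointMoved:    /* finger moved */     break;
                case Qt::TouchPointReleased: /* finger lifted */    break;
                default: break;              /* Qt::TouchPointStationary */
                }
                /* per-point data: point.pos(), point.rect(), point.pressure(), ... */
            }
            return true;
        }
        default:
            return QWidget::event(event);
        }
    }
};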

The only thing that concerns me with the idea of sending each touch point as 
a separate device is that it could confuse existing clients that aren't 
multi-touch or MPX aware (more on that below).

> so i'd vote for how tissoires did it as it allows for more information per
> touch point to be sanely delivered. as such thats how we have it working right
> now. yes - the hw can deliver all points at once but we produce n events. but
> what i'm wondering is.. should we....
>
> 1. have 1, 2, 3, 4 or more (10) core devices, each one is a touch point.
> 2. have 1 core with 9 slave devices (core is first touch and core pointer)
> 3. have 1 core for first touch and 9 floating devices for the other touches.
>
> they have their respective issues. right now we do #3, but #2 seems very
> logical. #1 seems a bit extreme.

I agree, #1 sounds a bit extreme. An approach like 2 or 3 is also doable.
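
Just to make the options concrete: whichever hierarchy we pick, clients can 
inspect it through XI2. A rough sketch (assuming an XI2-capable server and an 
open Display *dpy; version negotiation and error handling omitted):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

int ndevices;
XIDeviceInfo *devices = XIQueryDevice(dpy, XIAllDevices, &ndevices);
for (int i = 0; i < ndevices; ++i) {
    switch (devices[i].use) {
    case XIMasterPointer:
        /* a master pointer - the core pointer in #2 and #3, one per touch in #1 */
        break;
    case XISlavePointer:
        /* attached slave, e.g. one per touch point under #2;
         * devices[i].attachment is the master it reports through */
        break;
    case XIFloatingSlave:
        /* unattached device - the extra touches under #3 */
        break;
    default:
        /* master/slave keyboards */
        break;
    }
}
XIFreeDeviceInfo(devices);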

> remember - need to keep compatibility with single touch (mouse only) events and
> apps as well as expand to be able to get the multi-touch events if wanted.

Exactly. Do #2 and #3 keep that compatibility? My understanding is that if 
we did #2, then the master pointer would still deliver events for all slaves 
(with DeviceChanged events mixed in between). Couldn't this confuse 
non-multi-touch and/or non-MPX-aware clients?
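
To illustrate what I mean, a client that listens on the master with something 
like the (untested) sketch below would see XI_DeviceChanged notifications 
interleaved with the motion/button events every time the server switches 
which slave is driving the master:

unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
XIEventMask mask;
mask.deviceid = XIAllMasterDevices;   /* i.e. follow the core pointer(s) */
mask.mask_len = sizeof(bits);
mask.mask = bits;
XISetMask(bits, XI_ButtonPress);
XISetMask(bits, XI_Motion);
XISetMask(bits, XI_DeviceChanged);    /* sent when the driving slave changes */
XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
XFlush(dpy);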

-- 
Bradley T. Hughes (Nokia-D-Qt/Oslo), bradley.hughes at nokia.com
Sandakervn. 116, P.O. Box 4332 Nydalen, 0402 Oslo, Norway

