[PATCH inputproto xi 2.1] Updates for pointer emulation and more touch device modes
chase.douglas at canonical.com
Thu Mar 10 12:39:18 PST 2011
I'll be sending an update to the protocol soon. I've picked out just a
few comments here to reply to; the rest should be covered by the updated
protocol.
On 03/08/2011 11:59 PM, Peter Hutterer wrote:
> On Tue, Mar 08, 2011 at 10:24:42AM -0500, Chase Douglas wrote:
>> On 03/08/2011 12:41 AM, Peter Hutterer wrote:
>>> On Wed, Mar 02, 2011 at 11:35:41AM -0500, Chase Douglas wrote:
>>>> On 03/02/2011 05:58 AM, Daniel Stone wrote:
>>>>> On Tue, Feb 22, 2011 at 10:06:37AM -0500, Chase Douglas wrote:
>> My implementation sets the timestamp of the touch events as they are
>> sent to the client, so the timestamp of replayed events will not match
>> the timestamp of the original events as sent to the grabbing clients. I
>> don't see this as a problem because X timestamps just don't work for
>> multitouch events. Henrik Rydberg implemented a Kalman filter for
>> velocity estimation and compensation in utouch-frame, a library for
>> extracting touch events into frames for easier consumption by the
>> client. The library can work on top of mtdev or XI 2.1. When mtdev is
>> used, the evdev timestamps are used and the filter works well. When XI
>> 2.1 is used we have to disable the filter because the X timestamps are
>> so wildly inaccurate. The correct solution, imo, is to add a valuator
>> axis to the devices whose value represents "device" time. On Linux, this
>> would be set to the timestamps from evdev. The valuator values of the
>> device events are copied into the ring buffer, so when they are replayed
>> the values would be representative of the original events.
> valuators are _not_ fields we can dump random values in just because we
> can't fix them elsewhere. especially for this, we already have an event
> time. if that is "wildly inaccurate" then it's mostl likely a bug. what's
> time. if that is "wildly inaccurate" then it's most likely a bug. what's
> the cause for the inaccuracy?
I've addressed the timestamp meaning issue in the protocol update.
However, I do want to make a comment about this. While it is useful to
have event timestamps be representative of the input event and to have
them be relative to X server time, it will never be accurate enough for
all use cases. I gave the Kalman filter example above, which shows that
the touch event timestamps aren't accurate enough for velocity estimation.
I don't think it's a bug either. The X server runs as a process. The
latency between when an event physically occurs and when the server
handles the event includes an irq context switch, kernel/userspace
context switch, scheduling delays, and then signal context switch
(hopefully this is eliminated with the threaded I/O work). The average
latency is often small, but I suspect its variance is non-trivial
relative to the mean. I think it is unreasonable to expect the X server
timestamps to be useful for things like fine-grained velocity estimation.
In contrast, the Linux input system sets the evdev event timestamp to
the current kernel time when the event is generated in irq context. The
average latency here is not only less, but the variance is much less
relative to the mean. We also have devices, like the Apple Magic Mouse,
that provide their own hardware timestamps. The evdev protocol doesn't
support these right now, but I can see someone wanting access to them
eventually.
In summary, I think a device timestamp valuator axis would be
beneficial. It's something Henrik Rydberg and I have talked about on
multiple occasions. I hope to send a patch to add an X server property
label for it sometime soon.
>>>>>> + These devices may report touch events that correlate to the two opposite
>>>>>> + corners of the bounding box of all touches. The number of active touch
>>>>>> + sequences represents the number of touches on the device, and the position
>>>>>> + of any given touch event will be equal to either of the two corners of the
>>>>>> + bounding box. However, the physical location of the touches is unknown.
>>>>>> + SemiMultitouch devices are a subset of DependentTouch devices. Although
>>>>>> + DirectTouch and IndependentPointer devices may also be SemiMultitouch
>>>>>> + devices, such devices are not allowed through this protocol.
>>>>> Hmmm. The bounding box being based on corners of separate pointers
>>>>> seems kind of a hack to me. I'd much rather have the touches all be
>>>>> positioned at the midpoint, with the bounding box exposed through
>>>>> separate axes.
>>>> I think the question that highlights our differences is: "Should we
>>>> attempt to handle these devices in the XI 2.1 touch protocol, or fit
>>>> them into the pointer protocol?" In Linux, it's been determined that
>>>> these devices will be handled as multitouch devices. The evdev client
>>>> sees a device with two touch points that are located at the corners of
>>>> the bounding box. The normal synaptics-style event codes for describing
>>>> the number of fingers are used to denote how many touches are active in
>>>> the bounding box.
>>>> I'm of the mindset that these devices should be handled as described in
>>>> XI 2.1. However, I could be persuaded to handle these devices by
>>>> treating them as traditional pointing devices + 5 valuators for
>>>> describing the bounding box and how many touches are active.
>>>>> The last sentence also makes me slightly nervous; it seems like we want
>>>>> SemiMultitouch to actually be an independent property, whereby a device
>>>>> is Direct, Dependent or Independent, and then also optionally
>>>>> semi-multitouch. (Possibly just exposing the bounding box axes would be
>>>>> enough to qualify as semi-multitouch.) In fact, IndependentPointer
>>>>> could be similarly be a property of some DependentTouch devices as well.
>>>> I thought about this, but there's a few reasons I did it this way:
>>>> 1. If you want to make it an independent property, then we should change
>>>> the mode field to a bitmask. The field is only 8 bits right now, so we
>>>> could run out of bits very quickly. However, treating the field as an
>>>> integer as it is today allows for 255 variations. We can always revisit
>>>> and add in semi-mt + independent pointer as a new mode later on.
>>>> 2. Combining semi-mt and direct touch doesn't make sense. You don't know where
>>>> touches are, so you don't know which window to direct events to if the
>>>> bounding box spans multiple windows.
>>>> 3. I believe semi-mt is a dead technology now. I've only ever seen it in
>>>> touchpads, and I don't think they'll ever expand beyond that scope. We
>>>> can always add another device mode if needed.
>> I'm going to assume by the lack of comment here that you're satisfied
>> with this mode?
> tbh. I don't know yet. I obviously can't make these devices go away but i'm
> not sure on the handling for them yet.
I realized that this method won't work very well. For semi-multitouch
devices to be handled properly, a client must know of changes in the
number of touches and bounding box limits in the same event. For
example, two touches may currently define a bounding box. You place a
third touch down that extends the bounding box. The client needs to know
that a third touch began when the bounding box is extended or else it
may think it's a zoom gesture from the original two touches.
I've proposed an alternative in the protocol update I'll be sending out.
>>> - we need to decide if pointer emulation happens if the client selects for
>>> pointer + touch events or if we trust the client to handle this situation
>>>> There's nothing that prevents one client from selecting for touches
>>>> while another client selects for pointer events on the same window.
>>>> However, there is a clear distinction: the pointer selecting client
>>>> knows that it may not be the only receiver of events, while the touch
>>>> selecting client knows it has exclusive right to the touch events.
>>>> Also, delivering an emulated pointer and its associated touch event
>>>> isn't pointless. It's how Windows handles things today, so toolkits like
>>>> Qt are set up to deal with this situation. One could argue that Qt
>>>> could/should be handling things differently for XI 2.1, but I don't have
>>>> a good argument why we should force them to.
>>> what do they do with the emulated pointer event? do they process it or
>>> discard it anyway?
>> It all depends on the widget that events propagate to. My understanding
>> is that widgets in Qt select for touch and pointer events independently,
>> just as in X. The widget will receive both types of events if it
>> subscribes to both. If a widget and its parents don't handle an event,
>> the event is discarded.
>> I'm hoping Denis will correct me if I'm mistaken :).
I wanted to add some of my thoughts here. It was pointed out by Peter in
another thread (which I can't seem to find now :) that although any
number of clients can select for pointer motion and receive events, only
one client will get the events when a button has been pressed. It makes
sense to continue this for direct touch device pointer emulation: if one
client selects for touch events while other clients select for pointer
events, only the touch events will be sent. I've updated the protocol
with this change.
This means toolkits will need to emulate pointer events when they
receive touch events from a direct touch device.
>>>>>> @@ -866,6 +949,9 @@ are required to be 0.
>>>>>> The new master device to attach this slave device to.
>>>>>> + If any clients are selecting for touch events from the slave device, their
>>>>>> + selection will be canceled.
>>>>> Does that mean the selection will be removed completely, and the
>>>>> selection will no longer be present if the SD is removed, and all
>>>>> clients are required to re-select every time the hierachy changes, or?
>>>> If the SD is removed, then all event selections are already canceled
>>>> aren't they? If not, that seems like a broken protocol. Device IDs are
>>>> reused, so you might end up selecting for events from a different device
>>>> than you meant to.
>>>> Clients are only required to re-select when the specific slave device
>>>> they care about is attached, not on every hierarchy change.
>>> I guess daniel meant s/removed/reattached/, not as in "unplugged". But you
>>> answered the question, a client registering for touch events must re-select
>>> for touch events on every hierarchy change that affects the SD (including
>>> the race conditions this implies).
>>> What is the reason for this again? If we already require clients to track
>>> the SDs, can we assume that they want the events from the device as
>>> selected, even if reattached?
>> We enforce one touch client selection per physical device per window at
>> selection request time. Let's say on the same window you have client A
>> selecting on detached slave device S, and client B selecting on
>> XIAllMasterDevices. When you attach device S to a master device, you now
>> have two competing selections. Do you send touch events to client A or
>> client B? I feel that client B has priority and client A's selection
>> should be cancelled. If you inverted the priority, you would break X
>> core and XI 1.x clients by removing their selections without them knowing.
> can you even select for XIAllMasterDevices for touch events? master devices
> don't send touch events so you can't really select for them. Not sure how
> that situation would then happen.
> if you can, I need an extra blurb to see the semantics for
> XIAllMasterDevices on XISelectEvents.
I'm not sure where the confusion lies. In Ubuntu, Qt selects on
XIAllMasterDevices for touch events and things work fine. I guess I'm
not sure what needs to be clarified.