Enabling multitouch in input-evdev
Benjamin Tissoires
tissoire at cena.fr
Fri Jan 8 03:55:21 PST 2010
Dear all,
as I mentioned in my previous mail, I managed to patch the input-evdev
driver to handle multitouch. In this mail, I will try to summarize the
approach, the pros and cons, and the discussions I had with kernel
developers (mostly Stéphane Chatty) and toolkit developers (from Nokia).
So, the first part: my approach.
I patched the evdev input driver on the assumption that one day or
another, toolkits will (or should) have to take the MPX part into
account. That's why I chose to implement multitouch with XInput2. I
took the idea from Ryan Huffman and his TUIO input driver: the driver
creates virtual subdevices to send the different touch events.
At first, I tried to dynamically create/destroy such subdevices.
However, this very often caused freezes in the X server due to the
mallocs involved. The solution to this problem is to statically
allocate the subdevices. That leaves the problem of signaling the
start/end events of a track. I arbitrarily chose to implement it
through the "Evdev Tracking ID" property: when the ID is -1, there is
no touch event, and when it's > 0, there is an active track.
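To make this concrete, here is a minimal client-side sketch of how one
could read that property to detect the start/end of a track. The
property name comes from the patch; the polling approach and all the
surrounding code are only illustrative (a real client would rather
listen for XI property events):

#include <stdint.h>
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

/* Read the "Evdev Tracking ID" property of one subdevice.
 * Returns -1 when no touch is active, > 0 for an active track
 * (per the patch described above). */
static int32_t get_tracking_id(Display *dpy, int deviceid)
{
    Atom prop = XInternAtom(dpy, "Evdev Tracking ID", True);
    Atom type;
    int format;
    unsigned long nitems, bytes_after;
    unsigned char *data = NULL;
    int32_t id = -1;

    if (prop == None)
        return -1;

    if (XIGetProperty(dpy, deviceid, prop, 0, 1, False,
                      AnyPropertyType, &type, &format,
                      &nitems, &bytes_after, &data) == Success) {
        if (data && nitems >= 1 && format == 32)
            id = *(int32_t *)data;  /* XI2 32-bit data is packed */
        XFree(data);
    }
    return id;
}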
That leaves us with two problems: how can we handle gestures, and
should we create master devices in the server part? The commonly
accepted solution is to put it all in the client (or toolkit) part.
That's why, for the demo, I wrote a small tool that handles the master
devices, their grabbing, and their attachments. I did not have the
time to implement gestures, but I think the subdevices give us enough
information to do the job.
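For reference, the master-device handling in such a tool boils down to
plain XInput2 hierarchy changes. A rough sketch (device names and IDs
are illustrative; this is not the demo code itself):

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>

/* Create a new master pointer/keyboard pair; the server appends
 * " pointer" and " keyboard" to the given name. */
static void add_master(Display *dpy, char *name)
{
    XIAddMasterInfo add;

    add.type = XIAddMaster;
    add.name = name;
    add.send_core = True;
    add.enable = True;
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *)&add, 1);
    XFlush(dpy);
}

/* Attach one of the evdev subdevices to a master pointer, so that
 * each touch drives its own cursor. */
static void attach_slave(Display *dpy, int slave_id, int master_id)
{
    XIAttachSlaveInfo attach;

    attach.type = XIAttachSlave;
    attach.deviceid = slave_id;
    attach.new_master = master_id;
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *)&attach, 1);
    XFlush(dpy);
}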
Finally, let's talk about the internal detection and behavior of multitouch.
The multitouch-enabled kernel evdev interface (since 2.6.30) provides
some more events (SYN_MT_REPORT, ABS_MT_TOUCH_MAJOR,
ABS_MT_TOUCH_MINOR, ABS_MT_WIDTH_MAJOR, ABS_MT_WIDTH_MINOR,
ABS_MT_ORIENTATION, ABS_MT_POSITION_X, ABS_MT_POSITION_Y,
ABS_MT_TOOL_TYPE, ABS_MT_BLOB_ID and ABS_MT_TRACKING_ID). The only
event a multitouch device is required to send (correct me if I'm
wrong) is SYN_MT_REPORT, so I activate multitouch only when I receive
this event.
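As an illustration, this is roughly what the kernel side looks like
from user space: contacts arrive as runs of ABS_MT_* events separated
by SYN_MT_REPORT, and SYN_REPORT closes the whole frame. The device
node is of course illustrative:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int x = 0, y = 0, contact = 0;
    int fd = open("/dev/input/event5", O_RDONLY); /* illustrative */

    if (fd < 0)
        return 1;

    while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
        if (ev.type == EV_ABS && ev.code == ABS_MT_POSITION_X)
            x = ev.value;
        else if (ev.type == EV_ABS && ev.code == ABS_MT_POSITION_Y)
            y = ev.value;
        else if (ev.type == EV_SYN && ev.code == SYN_MT_REPORT)
            /* end of one contact; seeing this event is what tells
             * us the device is multitouch-capable */
            printf("contact %d: %d,%d\n", contact++, x, y);
        else if (ev.type == EV_SYN && ev.code == SYN_REPORT)
            contact = 0; /* end of frame: all contacts were sent */
    }
    close(fd);
    return 0;
}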
This brings up the problem of valuator detection on the Xorg side.
From my understanding, valuators 0 and 1 are necessarily x and y. But
the evdev driver bases its detection of axes on the order given by the
kernel evdev interface. If a device does not deliver ABS_X and ABS_Y
but only MT events, the first two axes will be ABS_MT_TOUCH_MAJOR and
ABS_MT_TOUCH_MINOR (the dimensions of the touch), which is
incompatible with Xorg. So I asked Stéphane Chatty to keep ABS_X and
ABS_Y in the Stantum driver, and it seems this has been accepted by
the other kernel developers.
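The sketch below shows where the ordering comes from: a driver
discovers its axes by walking the ABS capability bits in kernel order,
so when ABS_X (0x00) and ABS_Y (0x01) are absent, valuators 0 and 1
end up mapped to ABS_MT_TOUCH_MAJOR (0x30) and ABS_MT_TOUCH_MINOR
(0x31):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/input.h>

#define LONG_BITS (sizeof(long) * 8)
#define NLONGS(x) (((x) + LONG_BITS - 1) / LONG_BITS)
#define TEST_BIT(bit, array) \
    ((array)[(bit) / LONG_BITS] & (1UL << ((bit) % LONG_BITS)))

static void list_abs_axes(int fd)
{
    unsigned long absbits[NLONGS(ABS_MAX + 1)];
    int code, valuator = 0;

    memset(absbits, 0, sizeof(absbits));
    ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(absbits)), absbits);

    /* Axes map to valuators in bit order: on an MT-only device
     * the first two set bits are the ABS_MT touch dimensions. */
    for (code = 0; code <= ABS_MAX; code++)
        if (TEST_BIT(code, absbits))
            printf("valuator %d <- ABS code 0x%02x\n",
                   valuator++, code);
}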
So now, with multitouch-enabled devices, we should have both the
touchscreen emulation (ABS_X, ABS_Y) and the multitouch part. This
allows us (on the Xorg side) to choose between these two modes. By
default, I enabled touchscreen emulation, i.e. the driver does not do
anything more than before. If we (the client part) want to use
multitouch, we can set the property "Evdev Multitouch" to the number
of touches we want recognized. I limited it to 5 (MAX_VALUATORS_MT) in
the patch, as creating the master devices and the subdevices otherwise
produces too many devices by default (the server is limited to 40 in
the current trunk). We can still revert to touchscreen emulation by
setting "Evdev Multitouch" to 0.
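From a client, switching modes is then a simple property write. A
sketch (the property name is from the patch; I am assuming an 8-bit
integer format here, so check the patch for the actual one):

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/XInput2.h>

/* num_touches: 0 reverts to touchscreen emulation,
 * 1..5 enables that many simultaneous touches. */
static void set_multitouch(Display *dpy, int deviceid,
                           unsigned char num_touches)
{
    Atom prop = XInternAtom(dpy, "Evdev Multitouch", False);

    XIChangeProperty(dpy, deviceid, prop, XA_INTEGER, 8,
                     PropModeReplace, &num_touches, 1);
    XFlush(dpy);
}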
I think that summarizes my patch; now let's talk about the discussions.
Here I quote Bradley T. Hughes:
(...)
Multiple cursors is definitely a nice feedback to the user, but from an
application and toolkit perspective, it's much nicer to receive all
information in one event (which the Linux kernel does as well). I wonder
if there is a way to get both. Applications and gesture recognizers are
much easier to write if the event contains information about all touch
points (instead of every app having to write a state machine to track
all touch points). It's more CPU and power efficient too. The number of
(potential) process context switches needed to wake up an application
for each touch point vs. a single context switch (from Xorg to the app)
for any kind of multi-touch event is a strong argument here at Nokia
:) Fewer context switches mean better battery life, and lower feedback
latency for the user.
For what it's worth, gesture recognition can be done many different
ways, and we have added support for gestures in Qt as well (built upon
our QTouchEvent, which can deliver multiple touch points in each
event).
I don't want to sound like I'm saying that "my way" is the Right Way. I
just want to make the argument based on the experiences that we had
while adding multi-touch support to Qt. I hope you can understand them
and are willing to investigate the possibility. I am willing to help as
well... I have a Dell Latitude XT laptop with an N-Trig screen and can
help out where needed. I've never done X driver development, though,
so it may take me a little time to get up to speed.
(...)
I don't suppose you have considered trying to find a way of adding all
the touch point information into a single event using multiple axes
(this is certainly more efficient and easier to handle on the Qt side
of things)? Windows 7 and Mac OS X do this, as well as Qt.
(...)
<end of quotation>
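To make Bradley's suggestion concrete, here is a rough sketch of what
a packed event carrying a whole frame could look like. All the names
are made up for illustration; no such X event exists today:

#define MAX_TOUCHES 5

struct touch_point {
    int id;           /* tracking ID, stable across the gesture */
    int x, y;         /* position */
    int touch_major;  /* contact geometry, as in the ABS_MT_* axes */
    int touch_minor;
};

struct packed_mt_event {
    unsigned long time;
    int num_touches;  /* how many entries below are valid */
    struct touch_point touches[MAX_TOUCHES]; /* whole frame at once */
};

With such an event, the client wakes up once per frame instead of once
per touch point, which is the context-switch argument made above.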
So, first question: is my approach the right one? (i.e. not being
compatible with Windows or Mac OS X)
To begin the answer, here are the pros and cons of my way:
Pros:
* MPX related, so anyone can simulate gestures with two mice...
* More fun, as we can easily have one cursor per touch (we could do
that with the other solution, but only by sending XTest events)
* Easier to develop as I already made the patch ;-)
Cons:
* More complicated for the toolkits, as they will have different
behavior across the different systems
* More costly, as the client part has to reassemble the touch points
to detect/process gestures
The second problem concerns trackpads: how can we handle modern
multitouch trackpads (Broadcom 5974, DiamondTouch)? We excluded
synaptics trackpads, as they don't send ABS_MT events but special tool
events (325, 330, 333, 334 for BTN_TOOL_FINGER, BTN_TOUCH,
BTN_TOOL_DOUBLETAP, BTN_TOOL_TRIPLETAP).
From Stéphane's point of view, they should be transparent to my
patched version of input-evdev, as they send much the same events as
multitouch screens.
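For what it's worth, telling the two families apart from user space is
straightforward by looking at the capability bits; a sketch, using the
kernel codes quoted above:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/input.h>

#define LONG_BITS (sizeof(long) * 8)
#define NLONGS(x) (((x) + LONG_BITS - 1) / LONG_BITS)
#define TEST_BIT(bit, array) \
    ((array)[(bit) / LONG_BITS] & (1UL << ((bit) % LONG_BITS)))

/* Returns nonzero for devices sending real ABS_MT events; the
 * synaptics-style pads instead advertise BTN_TOOL_DOUBLETAP (333)
 * and friends in their EV_KEY bits. */
static int reports_mt_axes(int fd)
{
    unsigned long absbits[NLONGS(ABS_MAX + 1)];

    memset(absbits, 0, sizeof(absbits));
    ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(absbits)), absbits);

    return TEST_BIT(ABS_MT_POSITION_X, absbits) != 0;
}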
Finally, I was wondering what your position is concerning the use of
XI properties to notify the client of the start/end of touch events.
I hope I did not bother you with all my questions, but I think we
could have an interesting discussion here.
Cheers,
Benjamin