Fixing effective touch point position inside the touched area

Chase Douglas chase.douglas at canonical.com
Tue Sep 7 07:35:25 PDT 2010


On Tue, 2010-09-07 at 14:05 +0200, Denis Dzyubenko wrote:
> Hello guys,

Hi Denis,

> While looking through the drafts of the XInput 2.1 specification
> published by Daniel Stone and Peter Hutterer, I've noticed one thing
> that might be missing there.
> 
> Touching a screen with a finger usually produces quite a large contact
> area, and the problem that arises is where the effective touch point
> inside that area should be. As far as I can tell it is supposed to be
> somewhere in the top part of the touched area: when a user touches the
> screen with a thumb, they usually mean to interact with an item a bit
> above it. However, that position depends on the screen orientation. So
> _someone_ is supposed to add an offset to the touch area that is
> reported to the windowing system, otherwise it will be really hard for
> the user to interact with items on the screen - for example, touching
> the top part of a window will be almost impossible.

It's up to the hardware to produce a set of coordinates for a touch.
Some hardware also provides details such as the size of a touch, either
in a general form or as the two axes of an ellipse. I believe the
hardware makers calibrate their touchscreens so that the touch
coordinates are appropriate for finger interaction, so I don't think
this is a huge issue for the X layer to deal with. X does have some
extra calibration mechanisms if needed, though.
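
For what it's worth, here is a rough sketch of what that looks like at
the evdev level. The device node and the presence of the size axes are
assumptions for illustration; not every touchscreen reports
ABS_MT_TOUCH_MAJOR/ABS_MT_TOUCH_MINOR:

/* Sketch: read a contact's position and ellipse axes from an evdev
 * node.  The path is hypothetical; real code would enumerate devices. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <linux/input.h>

int main(void)
{
    struct input_event ev;
    int fd = open("/dev/input/event2", O_RDONLY);

    if (fd < 0)
        return 1;

    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        if (ev.type != EV_ABS)
            continue;
        switch (ev.code) {
        case ABS_MT_POSITION_X:  printf("x = %d\n", ev.value); break;
        case ABS_MT_POSITION_Y:  printf("y = %d\n", ev.value); break;
        case ABS_MT_TOUCH_MAJOR: printf("major = %d\n", ev.value); break;
        case ABS_MT_TOUCH_MINOR: printf("minor = %d\n", ev.value); break;
        }
    }

    close(fd);
    return 0;
}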

> This is also a problem since we want to support legacy applications
> that don't know about touch (XInput 2.1) events and only handle the
> core pointer events that result from them.
> However, the driver can only specify which one of the touch sequences
> should be emulated as a pointer event; it cannot specify the
> effective touch position, because the screen might be rotated and I
> assume the driver doesn't have that information.
> 
> The easiest solution that I can see is to add additional offsetX and
> offsetY axes that X would fill in depending on the screen resolution
> and DPI. Those values could be used both by X, to find the target
> window within the touched area, and by clients, as the interaction
> point if required.

The X input module passes raw events on to the X server, where the
events should be translated to the proper location depending on the
screen resolution and orientation.
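
Roughly speaking, that translation amounts to scaling the device axis
range onto the screen and, when the screen is rotated, swapping and
mirroring the axes. A toy sketch, where the device axis ranges and the
screen geometry are made up for illustration (the server works from the
ranges the driver advertises and the current RandR configuration):

/* Toy sketch: map a raw touch coordinate to screen pixels, optionally
 * applying a 90-degree rotation. */
#include <stdio.h>

#define DEV_MAX_X 4095   /* assumed device axis range */
#define DEV_MAX_Y 4095

static void map_touch(int raw_x, int raw_y, int rotated,
                      int screen_w, int screen_h, int *sx, int *sy)
{
    if (!rotated) {
        /* plain scaling from the device range to the screen */
        *sx = raw_x * screen_w / DEV_MAX_X;
        *sy = raw_y * screen_h / DEV_MAX_Y;
    } else {
        /* 90-degree rotation: device Y runs along screen X (mirrored),
         * device X runs along screen Y */
        *sx = (DEV_MAX_Y - raw_y) * screen_w / DEV_MAX_Y;
        *sy = raw_x * screen_h / DEV_MAX_X;
    }
}

int main(void)
{
    int sx, sy;

    map_touch(2048, 1024, 1, 800, 1280, &sx, &sy);
    printf("screen position: %d,%d\n", sx, sy);
    return 0;
}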

> The only problem left is if a legacy client only handles core events
> and the client is transformed - for example, I know that MeeGo doesn't
> use xrandr for screen rotation (for various reasons); instead,
> individual applications receive a "screen orientation change" event
> and decide whether the app should be rotated or not. In that case the
> core pointer event will be delivered to the wrong location. But I am
> not sure if we should handle that at all.

That does sound like a MeeGo-specific construct. I'm guessing they are
handling all of the translation inside the Qt toolkit. We shouldn't
modify X in this case, because Qt will expect X to behave as if it is
unaware of the orientation change.
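
If a toolkit really does rotate its content behind X's back, the
equivalent mapping would simply live in the client. Purely as an
illustration (the rotation flag stands in for whatever orientation
change notification MeeGo delivers):

/* Hypothetical sketch: translate a core-pointer position into the
 * application's own rotated coordinate space. */
static void event_to_app(int ev_x, int ev_y, int rotated,
                         int win_w, int *ax, int *ay)
{
    if (!rotated) {
        *ax = ev_x;
        *ay = ev_y;
    } else {
        /* undo a 90-degree rotation of the application's content */
        *ax = ev_y;
        *ay = win_w - ev_x;
    }
}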

Thanks,

-- Chase


