[RFC XI 2.1 - inputproto] Various fixes in response to Peter Hutterer's review

Peter Hutterer peter.hutterer at who-t.net
Mon Nov 29 20:03:32 PST 2010


On Mon, Nov 29, 2010 at 04:07:24PM -0500, Chase Douglas wrote:
> On 11/29/2010 02:08 AM, Peter Hutterer wrote:
> > On Tue, Nov 23, 2010 at 09:27:53AM -0500, Chase Douglas wrote:
> >> On 11/23/2010 01:07 AM, Peter Hutterer wrote:
> >>> On Fri, Nov 19, 2010 at 01:52:39PM -0500, Chase Douglas wrote:
> >> If we send events through the master device, we have to handle DCEs
> >> (DeviceChangedEvents) as well. Two separate dependent touch devices
> >> may be attached to the same master device. I'm trying real hard not
> >> to have to deal with DCEs for MT devices :). Not only are they more
> >> protocol surface to implement, they present more opportunity for
> >> implementation or protocol bugs.
> >>
> >> Part of the purpose of master devices is to coalesce pointer motion from
> >> multiple devices into one cursor on screen. The cursor on screen has the
> >> same boundaries and behavior across all attached devices. There's no MT
> >> analog to relative devices so I'll leave those aside. Absolute devices
> >> are transformed from device coordinates to screen coordinates. I don't
> >> believe dependent touch devices should be mapped to screen coordinates;
> >> if you want such behavior, make the device behave as a direct device. So
> >> if dependent touch devices don't move the cursor by themselves, and they
> >> have different properties such as resolution and limits, what do we gain
> >> by sending them through the same master device?
> > 
> > a few comments here:
> > x/y is mapped to screen coordinates for direct devices but the
> > original value is still available to clients. for dependent touch, you still
> > need to provide the focus point (i.e. x/y of the cursor) in screen
> > coordinates as well.
> 
> My implementation does this. For both modes of devices, the root and
> event coordinates of the DeviceEvent are given in screen coordinates.
> The X and Y touch valuators are given in device coordinates. Direct
> touch device root and event coordinates are derived from the X and Y
> touch values. Dependent touch device root and event coordinates are
> copied from the attached master pointer position.
> 
> I believe this meets all needs; does it break anything if the event is
> built this way?

that's the correct data, yes.
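
for reference, a client would read both coordinate spaces out of the same
event roughly like below. this is only a sketch, using the struct and
macro names as they later shipped in XI 2.2/libXi, and treating valuators
0/1 as the touch x/y axes is an assumption about the axis layout:

    #include <stdio.h>
    #include <X11/extensions/XInput2.h>

    /* Sketch: pull both coordinate spaces out of one touch event.
     * Valuators 0/1 as touch x/y is an assumption; real code must
     * match the axis labels instead. */
    static void print_touch_coords(XIDeviceEvent *ev)
    {
        double dev_x = 0, dev_y = 0;
        int i, n = 0;

        /* root/event coordinates are screen coordinates for both
         * direct and dependent devices */
        printf("screen: %.2f/%.2f\n", ev->root_x, ev->root_y);

        /* the valuators carry the untransformed device coordinates */
        for (i = 0; i < ev->valuators.mask_len * 8; i++) {
            if (!XIMaskIsSet(ev->valuators.mask, i))
                continue;
            if (i == 0)
                dev_x = ev->valuators.values[n];
            else if (i == 1)
                dev_y = ev->valuators.values[n];
            n++;
        }
        printf("device: %.2f/%.2f\n", dev_x, dev_y);
    }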

> > master devices are multiplexers, you're right here. what they also
> > provide is a defined pairing of pointer and keyboard devices. take the
> > example of a desktop with a built-in touchscreen. whether I use my finger
> > to click somewhere or the mouse shouldn't matter: the pointer follows
> > both as a cursor and thus controls keyboard foci as well.
> > of course, in a multi-user setup, the need for a defined pairing is even
> > higher. so we have to attach any device to an MD pointer anyway, at which
> > point maintaining the hierarchy in the events not only provides
> > consistency, but also an ordering of how the events occurred if multiple
> > slave devices are in use at the same time.
> 
> This seems to be an argument for having touch devices participate in the
> device hierarchy. I have no issues with this. This doesn't require
> sending touch events through the master pointing device though.

once you have the device in the hierarchy, it'll be hard _not_ to send them
through the master device. we're sending single-touch events from
traditional touchscreens through the master device, though you could argue
that for these events (or absolute events in general) the abstraction is not
needed.
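
fwiw, the pairing is already visible to any client that walks the
hierarchy. a sketch, using only the standard XI2 calls (nothing
touch-specific):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Sketch: print each slave device together with the master it is
     * attached to, i.e. the pairing discussed above. */
    static void print_hierarchy(Display *dpy)
    {
        int i, ndevices;
        XIDeviceInfo *info = XIQueryDevice(dpy, XIAllDevices, &ndevices);

        for (i = 0; i < ndevices; i++) {
            XIDeviceInfo *d = &info[i];

            if (d->use == XISlavePointer || d->use == XISlaveKeyboard)
                printf("slave %d (%s) -> master %d\n",
                       d->deviceid, d->name, d->attachment);
        }
        XIFreeDeviceInfo(info);
    }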

we'd be sending dependent device touches through the master device, so just
by switching the mode the event delivery would change. this is just
confusing, imo. non-touch events from the same device would still go through
the MD, so unless you also monitor the SD for these events (but ignore them,
because logical button state is largely on the master) you'd lose
serialisation. you couldn't tell whether a button event happened before or
after a touch event, at least not once you run into the granularity of the
timestamps.
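
put differently: to get one ordered stream, a client has to select for
everything on the master. a sketch of that, assuming the touch mask names
from the proposal (begin/update/end selected together):

    #include <X11/Xlib.h>
    #include <X11/extensions/XInput2.h>

    /* Sketch: select button and touch events on all master devices so
     * both arrive in one serialised stream. */
    static void select_on_masters(Display *dpy, Window win)
    {
        unsigned char m[XIMaskLen(XI_LASTEVENT)] = { 0 };
        XIEventMask mask;

        XISetMask(m, XI_ButtonPress);
        XISetMask(m, XI_ButtonRelease);
        XISetMask(m, XI_TouchBegin);
        XISetMask(m, XI_TouchUpdate);
        XISetMask(m, XI_TouchEnd);

        mask.deviceid = XIAllMasterDevices;
        mask.mask_len = sizeof(m);
        mask.mask = m;
        XISelectEvents(dpy, win, &mask, 1);
    }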

the argument you brought up in your last email is that DCEs are hard. That's
not really a technical reason against sending events through the master
though.

> > also, you say that dependent touch devices don't move the cursor by
> > themselves - that is not completely true, is it? it depends on the
> > implementation: a touchpad still moves the cursor even if it supports MT,
> > and it could instruct the server to emulate pointer events for part of
> > the MT events.
> 
> When I referred to dependent touch devices here, I meant the literal
> touch class of the device. You're right that the device may have a
> general valuator axis class for pointing, and that class will provide
> single-pointer emulation.
> 
> My point was that by not sending touch events through the MD we erect a
> clean barrier between pointer emulation and multitouch events.

we have this barrier through the device flags. the DCE tells us which device
is now sending events, and an emulated pointer event has the flag set. i'm
not sure why we need more barriers.
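
for illustration, the check a touch-aware client needs is a single flag
test. again just a sketch; XIPointerEmulated is the name the flag later
shipped under in XI 2.2:

    #include <X11/extensions/XInput2.h>

    /* Sketch: pointer events synthesised from a touch carry the
     * emulation flag, so a touch-aware client can simply drop them
     * and handle the touch events instead. */
    static int is_emulated_pointer_event(XIDeviceEvent *ev)
    {
        return (ev->evtype == XI_ButtonPress ||
                ev->evtype == XI_ButtonRelease ||
                ev->evtype == XI_Motion) &&
               (ev->flags & XIPointerEmulated);
    }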
 
Cheers,
  Peter

