[RFC] xserver: Masked valuators, DIDs, and ABS_MT_SLOT

Peter Hutterer peter.hutterer at who-t.net
Mon Jul 5 17:16:10 PDT 2010


On Fri, Jul 02, 2010 at 03:02:07PM +0200, Henrik Rydberg wrote:
> Hi Chase,
> [...]
> >> The rangeToMask function allocates memory in the inner event loop...
> >>
> >> The whole mapping construction seems a bit backwards. If unused valuators are
> >> never referenced, there is no need to do all those extra copies. As a side effect,
> >>
> >> *EventsM(events, pDev, type, key_code, mask, num_valuators, all_valuators);
> >>
> >> could be implemented like
> >>
> >> *EventsM(events, pDev, type, key_code, mask, num_valuators + first_valuator,
> >> valuators - first_valuator);
> > 
> > My thought was to keep the API simple and in line with previous
> > functions. Thus, the bitmask and the valuators arrays start at the 0th
> > valuator index of the device.
> > 
> > To get around doing any copying when *Events functions are called, we
> > could either duplicate the code so that we don't send *Events calls
> > through *EventsM, or we could change the *EventsM valuators argument
> > meaning: instead of being an array starting at the 0th valuator, it
> > would start at the first valid valuator in the bitmask.
> > 
> > Though not as simple in theory, it's not that complicated, so I'll just
> > change the meaning of the valuators argument and get rid of the copying.
> 
> But it does complicate things a bit, doesn't it? Peter, how much is that
> first_valuator actually used, i.e., different from zero? Perhaps one could
> simply change the api higher up as well, and get rid of the problem altogether.
> Maybe this was implicit in Peter's response?

I think only evdev ever sets first_valuator to non-zero, and that caused a
few bugs when it was introduced :) At least in the event generation stage;
the processing stage needs to cope with it for DeviceValuator events.
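For readers following along, the range convention (first_valuator plus a count) and the proposed bitmask can be related with a small rangeToMask-style helper. This is an illustrative sketch with made-up names and a made-up MAX_VALUATORS, not the actual xserver code; since the mask is only a few bytes, it can live on the caller's stack rather than being allocated in the inner event loop.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative upper bound on valuators per device; not the real value. */
#define MAX_VALUATORS 36
#define MASK_BYTES ((MAX_VALUATORS + 7) / 8)

/* Sketch of a rangeToMask-style helper: mark valuators
 * [first_valuator, first_valuator + num_valuators) as present.
 * The mask is small enough that callers can keep it on the stack. */
static void range_to_mask(int first_valuator, int num_valuators,
                          uint8_t mask[MASK_BYTES])
{
    assert(first_valuator >= 0 &&
           first_valuator + num_valuators <= MAX_VALUATORS);
    memset(mask, 0, MASK_BYTES);
    for (int i = first_valuator; i < first_valuator + num_valuators; i++)
        mask[i / 8] |= (uint8_t)(1 << (i % 8));
}
```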

Cheers,
  Peter

> >> The bit mask is memory efficient, so why not allocate it on the stack? It would
> >> certainly be a lot faster.
> > 
> > Yeah, that makes more sense. I'll update the code.
> 
> Good good. All in all, this is great work, and we are much farther down a useful
> route now than a couple of weeks ago.
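Henrik's rewrite quoted above — implementing the range-based calls in terms of the mask-based *EventsM via pointer arithmetic — can be sketched roughly as follows. The names (sum_events_m, sum_events) and MAX_VALUATORS are hypothetical stand-ins, not the actual xserver functions; a simple sum takes the place of event generation so the indexing is easy to check.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define MAX_VALUATORS 36
#define MASK_BYTES ((MAX_VALUATORS + 7) / 8)

/* Hypothetical mask-based consumer standing in for *EventsM: it reads
 * valuators[i] only when bit i of the mask is set, so entries for
 * unset valuators are never referenced (and need never be copied). */
static int sum_events_m(const uint8_t *mask, int num_valuators,
                        const int *valuators)
{
    int sum = 0;
    for (int i = 0; i < num_valuators; i++)
        if (mask[i / 8] & (1 << (i % 8)))
            sum += valuators[i];
    return sum;
}

/* Hypothetical range-based wrapper, mirroring Henrik's
 *   *EventsM(..., num_valuators + first_valuator,
 *            valuators - first_valuator)
 * rewrite: the mask covers [first_valuator, first_valuator + num), and
 * rebasing the pointer makes index i mean "valuator number i".  In
 * strict C the rebased pointer must still land inside the caller's
 * backing array to avoid undefined behaviour. */
static int sum_events(int first_valuator, int num_valuators,
                      const int *valuators)
{
    uint8_t mask[MASK_BYTES];   /* on the stack, no malloc in the loop */
    memset(mask, 0, sizeof mask);
    for (int i = first_valuator; i < first_valuator + num_valuators; i++)
        mask[i / 8] |= (uint8_t)(1 << (i % 8));
    return sum_events_m(mask, first_valuator + num_valuators,
                        valuators - first_valuator);
}
```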


More information about the xorg-devel mailing list