GSoC Proposal the second

Michal Suchanek hramrach at centrum.cz
Thu Mar 24 06:49:51 PDT 2011


Hello,

On 24 March 2011 13:57,  <janikjaskolski at aol.com> wrote:
>
> Hello everyone,
>
> here is the second, slightly more specific version of my proposal (thanks
> for the feedback, marcoz & cnd).
>
> Based on the increasing everyday use of convertible notebooks, tablet PCs and
> other touchscreen-controlled devices, the need for more comprehensive
> control elements must be addressed.
>
> Even though multitouch interaction and interactive surfaces already provide
> control elements, their usability is not comparable to the variety of mouse /
> keyboard / etc. input possibilities. A user is very limited in the way he or
> she can interact with the machine once the only input device is a
> touch-sensitive surface.
>
> Those limitations may be lifted by incorporating additional sensory
> input and connecting that input to already registered touch-triggered
> events.
> I am currently working on a bachelor thesis which captures microphone feeds
> using a standard C audio library and reduces that input to a quickly
> queryable format, to enable correlation with xf86-input-evdev events. I would
> regard that thesis as a prototype for the possible GSoC project.
>
> For the GSoC project, I could work on writing a driver that emulates the
> entire functionality of a 5-7 button mouse for touchscreens, for starters.
> The triggering actions would be various combinations of tapping, scratching
> and knocking on the screen.
>
> For example:
> Knocking the screen once would translate to a right mouse click.
> Knocking it two or three times could map to the fourth and fifth buttons
> (which would be very useful, e.g., for navigating inside a browser).

For multitouch tablets this is already well covered by multitouch gestures.

For single-touch devices some mechanism could be handy.

Multiple clicks/knocks aren't very useful. The distinction between a
click and a double-click (multiclick) is usually reasonably clear, but it
is easy to confuse different multiclicks (2, 3, more) due to user
error or input-analysis error.

However, if you can detect the user moving their finger towards or
away from the microphone, over a surface already present on the device
or added for the purpose, you could get additional input that could be
used for scrolling or zooming.
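
To make that concrete, here is a rough sketch I threw together (my own
illustration, not existing code): take the stream of estimated
finger-to-microphone distances from the audio analysis and emit one
scroll step whenever the estimate has drifted past a threshold since the
last step. The 5 mm step size and the sample values are made up.

    #include <math.h>
    #include <stdio.h>

    #define STEP_MM 5.0   /* made-up threshold: one scroll step per 5 mm of travel */

    /* Returns +1 (scroll up), -1 (scroll down) or 0 for the latest
     * distance estimate; remembers the distance at the last emitted step. */
    static int scroll_step(double distance_mm)
    {
        static double anchor = NAN;
        int step = 0;

        if (isnan(anchor)) {
            anchor = distance_mm;
        } else if (distance_mm - anchor >= STEP_MM) {
            step = -1;            /* finger moving away from the mic */
            anchor = distance_mm;
        } else if (anchor - distance_mm >= STEP_MM) {
            step = +1;            /* finger moving towards the mic */
            anchor = distance_mm;
        }
        return step;
    }

    int main(void)
    {
        /* fabricated distance estimates, e.g. derived from signal amplitude */
        double samples[] = { 40, 38, 35, 33, 30, 31, 37, 43 };
        unsigned i;

        for (i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            printf("d = %2.0f mm -> step %+d\n",
                   samples[i], scroll_step(samples[i]));
        return 0;
    }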

The problem with adding other clicks is that touchdown is both the only
way to move the pointer and already a click. In the absence of multitouch,
adding a right-click is challenging. Moving the pointer already causes a
left-click. An additional button or other input can technically produce a
right-click, but you would not get any coordinates associated with it,
so it's not very useful.
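
For what it's worth, injecting such a coordinate-less right click is easy
on Linux through /dev/uinput. The sketch below (assuming some knock
detector has already fired; the device name is made up) also shows the
limitation described above: the event carries no position of its own, so
the click lands wherever the pointer already is.

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/uinput.h>

    static void emit(int fd, int type, int code, int value)
    {
        struct input_event ev;

        memset(&ev, 0, sizeof(ev));
        ev.type = type;
        ev.code = code;
        ev.value = value;
        write(fd, &ev, sizeof(ev));
    }

    int main(void)
    {
        struct uinput_user_dev dev;
        int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);

        if (fd < 0)
            return 1;

        /* the virtual device only knows about the right button */
        ioctl(fd, UI_SET_EVBIT, EV_KEY);
        ioctl(fd, UI_SET_KEYBIT, BTN_RIGHT);

        memset(&dev, 0, sizeof(dev));
        strcpy(dev.name, "knock-right-click");   /* made-up device name */
        write(fd, &dev, sizeof(dev));
        ioctl(fd, UI_DEV_CREATE);

        emit(fd, EV_KEY, BTN_RIGHT, 1);          /* press, no coordinates */
        emit(fd, EV_SYN, SYN_REPORT, 0);
        emit(fd, EV_KEY, BTN_RIGHT, 0);          /* release */
        emit(fd, EV_SYN, SYN_REPORT, 0);

        ioctl(fd, UI_DEV_DESTROY);
        close(fd);
        return 0;
    }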

Apple chose to emulate the right click with a long click (button down and
held without moving the pointer) or with a modifier (e.g. to generate a
right click you hold down another button and then left-click); these are
probably the only reasonable options.
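
As a rough illustration of the long-click variant (again my own sketch,
with made-up thresholds): classify the touch on release, so that a
stationary press held past some hold time becomes a right click and
anything shorter stays a left click.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define HOLD_MS   700   /* made-up hold time before a press counts as "long" */
    #define JITTER_PX   8   /* made-up motion slop that still counts as stationary */

    enum click { CLICK_NONE, CLICK_LEFT, CLICK_RIGHT };

    struct touch {
        int  down_x, down_y;   /* position at touch-down  */
        long down_ms;          /* timestamp at touch-down */
    };

    /* Decide on touch-up what the whole touch amounted to. */
    static enum click classify(const struct touch *t,
                               int up_x, int up_y, long up_ms)
    {
        bool moved = abs(up_x - t->down_x) > JITTER_PX ||
                     abs(up_y - t->down_y) > JITTER_PX;

        if (moved)
            return CLICK_NONE;              /* a drag, not a click    */
        if (up_ms - t->down_ms >= HOLD_MS)
            return CLICK_RIGHT;             /* long, stationary press */
        return CLICK_LEFT;                  /* an ordinary tap        */
    }

    int main(void)
    {
        struct touch t = { 100, 200, 0 };

        printf("short tap  -> %d\n", classify(&t, 102, 201, 150)); /* CLICK_LEFT  */
        printf("long press -> %d\n", classify(&t, 101, 199, 900)); /* CLICK_RIGHT */
        printf("drag       -> %d\n", classify(&t, 160, 240, 900)); /* CLICK_NONE  */
        return 0;
    }

A real driver would want to fire the right click from a timer while the
finger is still held down, so the user gets immediate feedback, but the
decision logic stays the same.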

The other option is to redesign the interface so it does not use a
right-click (or a middle click) at all.

This is not so difficult with touch interfaces because using controls
spread across the screen on various toolbars is cheap. With a mouse you
have to move the pointer and precisely align it with the control, which
is tedious when the control is far from the current pointer position;
on a touchscreen that is not an issue.

>
> Furthermore, something interesting could be scratching the screen in a
> discrete area, which would trigger the key events Alt+F4 and thereby kill
> the currently active window.
> Going in that direction, the only limitation is finding enough
> distinguishable combinations of touches and sounds to emulate useful
> control key events.

I don't think scratching the screen is distinguishable from just
moving the pointer.

Under less-than-ideal conditions, using the stylus produces various odd
sounds without any intent, and background noise would likely interfere
with distinguishing the less prominent sound variations.

>
> These are some examples of what is possible through audio analysis and spike
> detection.
> My intention would be that the driver is built in such a way that it does
> not matter which sensory input the results of the input analysis come from,
> whether that is audio, video or any other device. I would supply an API that
> makes future extensions in these directions as easy as possible.
>
>
> The GSoC project in short:
> - evdev extension to expand touchscreen control
>     - covering the full functionality of a 5-7 button mouse

This should already work for multitouch, at least on Wacom and
Synaptics devices.

Maybe the generic evdev driver is less capable. I know of no device with
enough capabilities to implement this that could be driven by evdev.
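
One quick way to check what a given device exposes through the kernel
evdev interface is to query its absolute-axis bits. The throwaway probe
below (not an existing tool; the device path you pass, e.g.
/dev/input/event5, is just an example) reports whether the multitouch
axes are advertised at all.

    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/input.h>

    /* Test one bit in the bitmask returned by EVIOCGBIT. */
    static int has_bit(const unsigned long *bits, int bit)
    {
        return (bits[bit / (8 * sizeof(long))] >>
                (bit % (8 * sizeof(long)))) & 1;
    }

    int main(int argc, char **argv)
    {
        unsigned long absbits[ABS_MAX / (8 * sizeof(long)) + 1];
        int fd;

        if (argc < 2)
            return 1;
        fd = open(argv[1], O_RDONLY);
        if (fd < 0)
            return 1;

        memset(absbits, 0, sizeof(absbits));
        ioctl(fd, EVIOCGBIT(EV_ABS, sizeof(absbits)), absbits);

        printf("ABS_MT_POSITION_X: %s\n",
               has_bit(absbits, ABS_MT_POSITION_X) ? "yes" : "no");
        printf("ABS_MT_SLOT:       %s\n",
               has_bit(absbits, ABS_MT_SLOT) ? "yes" : "no");

        close(fd);
        return 0;
    }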

Thanks

Michal

PS: Your email is written in HTML with broken formatting.
Please fix your email client.

