RandR 1.4 restart

Pauli Nieminen <suokkos at gmail.com>
Wed Mar 9 01:15:36 PST 2011


On Tue, Mar 8, 2011 at 4:13 PM, Soeren Sandmann <sandmann at cs.au.dk> wrote:
> James Jones <jajones at nvidia.com> writes:
>
>> Interesting.  I'm not really sold on the switch to InputOnly windows though.
>> It sounds very elegant in theory, but I don't like the idea of apps needing to
>> use a separate codepath depending on whether a composite manager is running,
>> and I'm not convinced it doesn't have other side effects.  I'll definitely
>> have to give it some more thought though.
>
> It will require changes to the applications, yes. For GTK+ I think the
> main complication is the fact applications can create subwindows, and
> then ask for an XID for those subwindows. These applications will expect
> that XID to be an InputOutput window, which would be impossible if the
> toplevel was an InputOnly.
>
> This issue is not insurmountable though, since getting the XID for a
> GTK+ window is a rarely-used feature and increasingly considered
> somewhat deprecated.
>
>
> Soren

I think you are trying to create a way too complex solution to a
simple problem. The problem was already solved in the 80s with double
buffering and buffer swaps. GL handles it by defining that, as far as
the application can tell, the buffer swap happens immediately, even
though it might take many milliseconds to complete. Pipelining all
operations this way makes it a lot easier to keep everything
flicker-free and tear-free, as long as the driver is written
correctly.
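
From the application side that model is simple. A minimal sketch,
assuming a current double-buffered GLX context is already set up:

#include <GL/gl.h>
#include <GL/glx.h>

static void render_frame(Display *dpy, GLXDrawable win)
{
    /* All drawing goes to the back buffer, so the screen can never
       show a half-drawn frame. */
    glClear(GL_COLOR_BUFFER_BIT);
    /* ... draw the scene ... */

    /* From the application's point of view the swap has happened
       when this returns; the driver is free to pipeline the real
       flip to a later vblank. */
    glXSwapBuffers(dpy, win);
}

Nothing here depends on when the flip actually completes, and that is
exactly what makes the pipelining possible.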

All modifications to windows, modes, etc. should then be defined to
take effect on the next buffer swap. To make things easier, some
operations might in some cases be defined to cause an implicit swap.
For example, a good definition for
XCompositeUnredirectWindow(window client_window, pixmap root/scanout,
int mode) would be: "causes <pixmap> to be swapped to <window>'s
front buffer, and any subsequent buffer swap on <window> will swap
<window>'s back buffer with <pixmap>".
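
As a sketch in C, to make the semantics easier to see. The pixmap and
mode parameters below are part of this proposal only; the
XCompositeUnredirectWindow() that exists today takes just
(Display *, Window, int update):

#include <X11/Xlib.h>

/* Proposed signature, not the current Xcomposite API. */
void XCompositeUnredirectWindow(Display *dpy, Window client_window,
                                Pixmap scanout, int mode);

static void hand_over_scanout(Display *dpy, Window client_window,
                              Pixmap scanout, int mode)
{
    /* Per the definition above: <scanout> becomes <client_window>'s
       front buffer right away, and every later buffer swap on
       <client_window> exchanges its back buffer with <scanout>, so
       the client flips straight to the screen with no copy by the
       compositing manager. */
    XCompositeUnredirectWindow(dpy, client_window, scanout, mode);
}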

To support legacy applications, they would have to render to a back
buffer or "fake front". The buffer swap would then be an
implementation detail that can happen based on a timer, the damaged
area, or whatever heuristic makes the legacy application look
reasonably good.
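
One possible heuristic, as a sketch; the LegacyWindow structure and
the swap_fake_front_to_front() helper are invented here for
illustration, not existing code:

#include <X11/Xlib.h>

typedef struct {
    Pixmap        fake_front;      /* where the legacy client draws */
    int           damage_pending;  /* set when the window is damaged */
    unsigned long last_damage_ms;
    unsigned long last_swap_ms;
} LegacyWindow;

/* Hypothetical helper: promote the fake front to the real front. */
void swap_fake_front_to_front(LegacyWindow *w);

#define QUIET_MS 16    /* damage has settled for about one frame */
#define MAX_MS   100   /* latency bound so slow drawers still update */

static void maybe_implicit_swap(LegacyWindow *w, unsigned long now_ms)
{
    /* Swap once the damaged area has gone quiet, or when the timer
       guarantees a minimum update rate; either way the legacy client
       never draws to the visible buffer directly. */
    if (w->damage_pending &&
        (now_ms - w->last_damage_ms >= QUIET_MS ||
         now_ms - w->last_swap_ms >= MAX_MS))
    {
        swap_fake_front_to_front(w);
        w->damage_pending = 0;
        w->last_swap_ms = now_ms;
    }
}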

Pauli

