[Xorg] Damage/Composite + direct rendering clients

Andy Ritger aritger at nvidia.com
Mon May 17 12:54:53 PDT 2004



On Mon, 17 May 2004, Keith Packard wrote:

> 
> Around 11 o'clock on May 17, Andy Ritger wrote:
> 
> > How should a direct rendering client interact with Damage/Composite?
> > There seem to be two pieces to this: damage notification, and
> > synchronization.
> 
> Thanks for getting this topic started.
> 
> > When a direct rendering client damages the X screen, it needs to
> > communicate that information to the X server so that the X server can
> > notify Damage clients of the damage.
> 
> We can easily send damage data over the wire if you like; that would 
> require the active participation of the direct-rendering client.
> 
> You can do that today easily enough -- use XClearArea after setting the 
> window background to None (and perhaps back again when you're done).  Oh 
> wait, that doesn't actually work right now -- I've got a kludge which 
> ignores background None painting to windows.  I need to fix that anyway to 
> handle mapping of background None windows cleanly.
> 
> Alternatively, we can use the existing DamageDamageRegion function which 
> is already exposed within the server to mark regions from the direct 
> rendering clients as they modify the window pixmap.

OK, I honestly haven't looked at the implementation in X.org yet.
DamageDamageRegion sounds like exactly what we would need.
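
For anyone wanting to experiment today, a minimal client-side sketch
of the XClearArea trick Keith describes (once the background None
kludge is fixed); the display, window, and rectangle variables are
illustrative:

    /* Set the background to None so the clear paints nothing but
     * still reports the cleared rectangle as damage. */
    XSetWindowBackgroundPixmap(dpy, win, None);

    /* ... direct rendering into the window happens here ... */

    /* Report the damaged rectangle; False = no Expose events. */
    XClearArea(dpy, win, x, y, width, height, False);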

> >    1) client kicks off rendering, notifies X server of damage,
> >       X server sends Damage event to composite manager, composite
> >       manager sends compositing request back to server, server
> >       performs composite.  There needs to be some synchronization to
> >       guarantee that the composite is not performed until the client
> >       rendering is completed by the hardware.
> 
> Given that most applications double buffer their output, this seems like a
> pretty well constrained problem.  The only request which can affect the
> front buffer is a buffer swap, and that modifies the entire window 
> contents.

Right: swaps and front-buffered flushes are the only GLX operations
that should trigger a damage event.  Even for front-buffered flushes
I would be inclined to just say that the flush damages the whole
drawable, rather than try to compute a smaller bounding region.
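
On the server side, a driver could mark the whole drawable in one
shot; a rough sketch, assuming the 2004-era miext/damage interface
(the exact region macros and names may differ):

    /* Mark the entire drawable damaged after a swap or flush. */
    BoxRec box;
    RegionRec region;

    box.x1 = pDraw->x;
    box.y1 = pDraw->y;
    box.x2 = pDraw->x + pDraw->width;
    box.y2 = pDraw->y + pDraw->height;
    REGION_INIT(pDraw->pScreen, &region, &box, 1);
    DamageDamageRegion(pDraw, &region);
    REGION_UNINIT(pDraw->pScreen, &region);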


> So, the X server must be able to find out when the buffer swap 
> actually occurs, and either be signalled or block until that point.

The tricky part here is that the damage event shouldn't be sent to
Damage clients until the hardware has actually completed the rendering
that caused the damage, but that is the vendor's problem... I'm just
trying to make sure everything that is needed is in place so that
vendors can solve it.

One solution would be for the direct rendering client to send private
protocol to the X server as soon as the rendering is sent to the hw.
The X server then sends a damage event to the Damage clients.
The composite manager then starts performing a composite.  Ideally,
you would defer waiting for the hw to complete the direct rendering
operation until the composite manager wants to perform the composite.
BeginComposite/EndComposite bracketing would facilitate that (it
would be BeginComposite's job to make sure the hw had completed).
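
To make the proposal concrete, hypothetical client-side usage; neither
request exists in the Composite extension today, and the names and
semantics are only what I'm proposing:

    /* Hypothetical: the server blocks here until all pending
     * hardware rendering into the redirected pixmaps has landed. */
    XCompositeBeginComposite(dpy);

    XRenderComposite(dpy, PictOpOver, winPict, None, rootPict,
                     0, 0, 0, 0, x, y, width, height);

    /* Hypothetical: marks the end of this compositing "frame". */
    XCompositeEndComposite(dpy);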

> >   2) some damage occurs, composite manager sends composite request,
> >      additional rendering is performed, part of which the composite
> >      operation picks up, but the rest of the rendering is not
> >      composited until the next "frame" of the composite manager,
> >      and we see visible tearing.
> 
> Applications which wish to avoid tearing must double buffer their output, 
> just as they do today.  Once that is true, then there is no 'partial' 
> rendering, the buffer swap damages the entire window and replaces all of 
> the contents.

Sorry, I wasn't clear here.  Allow me to clarify with an example:

    glxgears is partially overlapped by a translucent xterm:

                    _____________
                    |           |
        ____________|....       |
        |           |   .       |
        |  glxgears |   xterm   |
        |           |   .       |
        |___________|....       |
                    |___________|
                    
    The xterm updates (say, it scrolls) and a damage event is sent
    to the composite manager.  The composite manager drains the
    event queue and builds the list of damaged regions.  As far as
    the composite manager knows, glxgears has not been damaged.

    The composite manager then sends protocol to recomposite the
    xterm; presumably this operation would also use as a source
    the portion of the glxgears window beneath the xterm.

    glxgears is then redrawn (and swapped) before the compositing
    is performed.  When the compositing is performed, the xterm
    and the portion of the glxgears window beneath the xterm are
    recomposited into the backing pixmap, which is then blitted to
    the visible screen.  At this point, we have a tear between the
    portion of the glxgears window that is not beneath the xterm
    and the part that is (the part that is beneath the xterm is
    from glxgears' new frame, while the part not beneath the xterm
    is from the old frame).

    The composite manager then returns to its event loop, receives
    notification that glxgears was damaged, and eventually updates
    the screen with the change.

    In the period between these two composite "frames", glxgears
    is torn vertically along the xterm boundary.

    Again, this should not be specific to direct rendering: it could
    just as easily happen with an animating 2D app.  The race is
    that rendering (either direct or indirect) can occur between
    when the composite manager builds its damage list, and when
    its compositing requests are processed by the server.

The only surefire solution that I can think of is for the composite
manager to grab the server, drain its event queue, perform its
compositing, and then ungrab the server.  That seems very heavyweight,
though, so I'm curious what other solutions people might have.
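
In Xlib terms, the heavyweight version is just this (a sketch; the
event-accumulation step is elided):

    XGrabServer(dpy);

    /* Drain the event queue and rebuild the damage region list
     * while no other client can render. */
    while (XPending(dpy))
        XNextEvent(dpy, &ev);   /* accumulate DamageNotify regions */

    /* ... issue the compositing requests ... */

    XUngrabServer(dpy);
    XFlush(dpy);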

One compromise would be to introduce new BeginComposite/EndComposite
commands, and get composite managers into the habit of using
them now.  This would give vendors the flexibility to synchronize
this in whatever way makes most sense for their architecture.


> A more efficient implementation could actually perform this buffer swap 
> without copying any data around -- just flip the off-screen storage for 
> front/back buffers.  That's probably easier with GL than 2D apps which 
> tend to create window-sized pixmaps for 'back buffers', leaving the 
> semantic mismatch between copy and swap.

Yes, that is a good idea, though it would mean a round trip (the
client would need to wait for the server to update its state of
front/back before the client could start rendering to the new back).


> > Perhaps the best solution is to introduce two new requests to the
> > Composite extension: a "BeginComposite" and an "EndComposite" that
> > composite managers would call, bracketing their compositing requests.
> 
> I don't think this is necessary -- the X server receives the damage 
> information related to a specific drawable.  Any future requests for 
> contents from that drawable must delay until that damage has actually 
> occurred.

Right, but how is that enforced?  Who delays until the damage has
actually occurred?

> >   1) Truly double buffer the compositing system.  Keith's sample
> >      xcompmgr double buffers the compositing by creating a pixmap the
> >      size of the root window, compositing into that, and then after
> >      each frame of compositing is complete, copying from the pixmap
> >      to the visible X screen (is that accurate, Keith?)
> 
> I don't think we can avoid doing this; one of the primary goals of the 
> system is to provide a clean tear-free user experience, so all screen 
> updates must be performed under double buffering.

Agreed.
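
For reference, the xcompmgr-style pattern described above, in
simplified and illustrative code:

    /* Composite one frame into a root-sized back pixmap, then push
     * the finished frame to the screen in a single copy. */
    Pixmap back = XCreatePixmap(dpy, root, rootWidth, rootHeight, depth);
    Picture backPict = XRenderCreatePicture(dpy, back, format, 0, NULL);

    /* ... XRenderComposite each redirected window into backPict ... */

    XCopyArea(dpy, back, root, gc, 0, 0, rootWidth, rootHeight, 0, 0);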

> >      I can't make a strong argument for it, but if instead a back
> >      buffer for the root window were automatically allocated when a
> >      composite manager started redirecting windows, and compositing
> >      was done into that buffer, then this might allow for various
> >      minor optimizations:
> 
> A GL-based compositing manager could easily do this.  And, an X-based 
> compositing manager could use the double buffering extension if it wanted 
> to.  My tiny kdrive-based server doesn't happen to include that extension.

OK, I'll need to learn more about DBE before I can comment on that.
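
From a quick look at the DBE API, the equivalent would presumably be
something like this (an untested sketch):

    #include <X11/extensions/Xdbe.h>

    /* The back buffer is an ordinary drawable: composite into it,
     * then swap instead of copying. */
    XdbeBackBuffer back = XdbeAllocateBackBufferName(dpy, root,
                                                     XdbeBackground);

    /* ... composite each redirected window into 'back' ... */

    XdbeSwapInfo info = { root, XdbeBackground };
    XdbeSwapBuffers(dpy, &info, 1);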


> >    2) An actual fullscreen mode.  This is admittedly orthogonal
> >       to compositing, but the overhead of compositing suggests that
> >       we should have a mode of operation that clients can request
> >       where they are given exclusive access to the hardware,
> >       bypassing the compositing system.
> 
> The compositing manager could recognise this case automatically if it were 
> coupled with the window manager a bit more.

True, but window managers can't cause video memory to be freed,
which would be really nice to do when you are transitioning into a
fullscreen application.  Even the RandR implementation naively leaves
the video memory allocated for the largest possible root window size.


> > - It is important that X.org maintain a binary compatible driver
> >   interface, so that vendors are not required to provide multiple
> >   driver binaries (how to determine which binary to install? etc...)
> 
> Absolutely.  The Composite extension is being integrated in a completely 
> binary compatible fashion.  If any changes are required in the future, 
> we'll have long lead times and cross-version compatibility to deal with at 
> that point.

Excellent; I just wanted to reinforce the importance of this from
an IHV point of view.


> > - An X driver should be able to wrap the redirection of windows to
> >   offscreen storage:
> 
> It already can -- per-window pixmaps are created and the driver notified 
> before any rendering occurs; a clever driver could migrate those pixmaps 
> to special offscreen storage if it wanted to.

OK; how does a driver differentiate the per-window pixmaps from
regular pixmaps?
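
For context, here is the usual screen-function wrapping pattern a
driver uses to see pixmap creation (a sketch using the 2004-era
CreatePixmap signature); note that nothing in it distinguishes a
window-backing pixmap, which is exactly the question:

    static CreatePixmapProcPtr savedCreatePixmap;

    static PixmapPtr
    myCreatePixmap(ScreenPtr pScreen, int width, int height, int depth)
    {
        PixmapPtr pPixmap;

        /* Unwrap, call down, rewrap -- the standard wrapping dance. */
        pScreen->CreatePixmap = savedCreatePixmap;
        pPixmap = (*pScreen->CreatePixmap)(pScreen, width, height, depth);
        pScreen->CreatePixmap = myCreatePixmap;

        /* A clever driver could migrate pPixmap to special offscreen
         * storage here, but nothing identifies it as a window pixmap. */
        return pPixmap;
    }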


> > - An X driver should be able to call into the core X server to
> >   notify X of damage done by direct rendering clients.
> 
> See DamageDamageRegion

Very good.

 
> > - A Video Overlay Xv Adaptor is obviously fundamentally incompatible
> >   with Damage/Composite.  Should X drivers no longer advertise
> >   Video Overlay Xv adaptors if they are running in an X server that
> >   includes Composite support?
> 
> Actually, as long as the windows are aligned on the screen with their 
> nominal position and are opaque, this works just fine.
> 
> However, when the windows are not so carefully positioned, the system will 
> need to use a YUV texture to paint the video contents into the window 
> pixmap and damage the region so the compositing manager can update the 
> screen as appropriate.

The problem is that Xv works in terms of "ports" -- a driver
advertises an overlay port, a blitter port, a texture port, etc.
The ports are advertised for the life of the X server (like visuals);
my understanding is you can't dynamically add/remove Xv ports or
migrate one into another while in use.  So if the X server might
start compositing, then the driver can't advertise the overlay port;
is that correct?


> > - As window managers and desktop environments start folding composite
> >   manager functionality into their projects, it would be nice
> >   for them to provide a way to dynamically disable/enable
> >   compositing.
> 
> Yeah, I often turn off the compositing manager when doing 'odd' things.  

Sure.


Thanks,
- Andy

> -keith