Xgl and direct rendering

Andy Ritger aritger at nvidia.com
Wed Mar 1 09:22:14 PST 2006



On Wed, 1 Mar 2006, Matthias Hopf wrote:

> On Feb 28, 06 19:57:29 +0100, Anders Storsveen wrote:
> > what exactly is an overlay? I've heard about Hardware Overlay, Video
> > Overlay, OpenGL Overlay, and more. What is it?
> 
> Hardware overlay is typically just an informal term for the video
> overlay. It means there is a second scanout engine that works
> independently on a second framebuffer with a different resolution and
> color space (typically YUV for XVideo), and a color key in the primary
> framebuffer decides whether the primary or this secondary framebuffer
> is visible at any given pixel.
> 
> The secondary scanout engine has to perform color conversion and
> scaling on-the-fly, and typically has well-designed 7-tap filters for
> scaling, which are better than the bilinear filters that are the
> default for textures.
> 
> OpenGL overlay is something similar for OpenGL, and it hurts my brain
> when I think of it. It is something used for user interfaces in ancient
> OpenGL applications, and should never ever be used in modern
> applications. Use impostors for your scene graph here, if rendering the
> main model view is too slow for immediate update.

Matthias, I have to disagree with you on this point (though I agree
with everything else discussed in this thread).

Use of an overlay for OpenGL is not limited to ancient applications.
There are modern workstation OpenGL applications that utilize an
overlay, and it makes a lot of sense for them to do so.
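
For reference, here is a minimal sketch of how a GLX application asks
for an overlay visual (illustrative only; GLX_LEVEL > 0 requests an
overlay plane rather than the main plane, and on much hardware overlay
visuals are color-index, so the GLX_RGBA attribute may need to be
dropped):

    /* Sketch: requesting an overlay visual with GLX.  GLX_LEVEL 1
     * asks for a visual in the first overlay plane; level 0 is the
     * main plane.  Whether such a visual exists depends on the
     * hardware and driver. */
    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>
    #include <GL/glx.h>

    int main(void)
    {
        Display *dpy = XOpenDisplay(NULL);
        if (!dpy)
            return 1;

        int attribs[] = {
            GLX_LEVEL, 1,     /* overlay plane, not the main plane  */
            GLX_RGBA,         /* omit on index-only overlay hardware */
            GLX_DOUBLEBUFFER,
            None
        };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy),
                                          attribs);
        if (vi)
            printf("overlay visual 0x%lx available\n", vi->visualid);
        else
            printf("no overlay visual on this X server\n");
        return 0;
    }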

If the content in the scene and the content in your user interface
change at different rates (typically the ui elements change more
quickly during user interaction), then an overlay is a very efficient
means of compositing the two without re-rendering both: the
compositing of the main plane and the overlay plane is done "for free"
by scanout.
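
To make the "for free" part concrete, here is a purely illustrative
software model of the per-pixel selection the scanout engine performs
in hardware (the transparent value is hypothetical; real hardware does
this as it reads pixels from vidmem, with no extra rendering pass and
no extra buffer):

    #include <stdint.h>
    #include <stddef.h>

    #define OVERLAY_TRANSPARENT 0x00000000u /* hypothetical value */

    /* Wherever the overlay plane holds its transparent value, the
     * main plane shows through; elsewhere the overlay is shown. */
    static void scanout_merge(const uint32_t *main_plane,
                              const uint32_t *overlay_plane,
                              uint32_t *dac_out, size_t npixels)
    {
        for (size_t i = 0; i < npixels; i++)
            dac_out[i] = (overlay_plane[i] == OVERLAY_TRANSPARENT)
                       ? main_plane[i]
                       : overlay_plane[i];
    }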

Using GPU programmability to perform the scene+ui compositing
can be done, but is not necessarily as effective: you could
render your scene to one buffer, your ui to another buffer, and
composite the two together into a third buffer (to be scanned out)
whenever either changes.  The downside is that you may end up doing
that compositing more often than necessary.  Allowing scanout to
"naturally" composite the overlay with the main plane means that
compositing happens exactly the right number of times: as often
as the pixels are read from vidmem by the scanout engine.  Plus,
scanout compositing does not require a third buffer to store the
composited result (vidmem might be cheap, but if you're driving a
pair of Dell 30" panels, that third buffer will cost you).
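
For comparison, a rough software model of that three-buffer
alternative (again illustrative; a real implementation would be a
fullscreen blended pass on the GPU). Note that it must run over the
whole third buffer every time either input changes:

    #include <stdint.h>
    #include <stddef.h>

    /* Composite ui over scene into a third buffer using a standard
     * "over" blend, driven by the ui's alpha channel (ARGB pixels). */
    static void composite_pass(const uint32_t *scene,
                               const uint32_t *ui,
                               uint32_t *out, size_t npixels)
    {
        for (size_t i = 0; i < npixels; i++) {
            uint32_t a = ui[i] >> 24;   /* ui alpha, 0..255 */
            uint32_t s = scene[i], u = ui[i];
            uint32_t r = (((u >> 16) & 0xFF) * a +
                          ((s >> 16) & 0xFF) * (255 - a)) / 255;
            uint32_t g = (((u >>  8) & 0xFF) * a +
                          ((s >>  8) & 0xFF) * (255 - a)) / 255;
            uint32_t b = ((u & 0xFF) * a +
                          (s & 0xFF) * (255 - a)) / 255;
            out[i] = (r << 16) | (g << 8) | b;
        }
    }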

Thanks,
- Andy


> Matthias
> 
> -- 
> Matthias Hopf <mhopf at suse.de>       __        __   __
> Maxfeldstr. 5 / 90409 Nuernberg    (_   | |  (_   |__         mat at mshopf.de
> Phone +49-911-74053-715            __)  |_|  __)  |__  labs   www.mshopf.de
> _______________________________________________
> xorg mailing list
> xorg at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/xorg
> 


