GLX and Xgl

Adam Jackson ajax at
Mon Apr 11 10:18:07 PDT 2005

On Monday 11 April 2005 12:33, David Reveman wrote:
> I've got GLX and indirect rendering working with Xgl. It's accelerated
> and works fine with Composite. There's of course a lot more work to be
> done but I don't plan on going much further until we're using
> framebuffer objects in Xgl as it would mean adding code that will be
> thrown away later.
> The glitz and Xgl code needed to get this working is in pretty good
> shape and it should land in CVS in a few days.

Way cool.

> But I had to do some pretty drastic changes to server side GLX code and
> I'm not sure that my current solutions are the best way to go. Here's
> what I've done:
> 1. Made glcore use MGL namespace. This allows me to always have software
> mesa available and this is currently necessary as there might not be
> enough resources to use the *real* GL stack with Composite. It might not
> be necessary when we're using framebuffer objects but I still think it's
> a good idea. This works fine when running Xgl on top of nvidia's GL
> stack or software mesa, but I haven't been able to get it running on top
> of mesa/DRI yet.
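The MGL renaming described above amounts to prefixing every entry point of the software stack so it can coexist with the vendor libGL in one address space. A minimal sketch of the effect, assuming two illustrative defines (in Mesa the renaming happens at build time across the whole API, and MGLClear/MGLViewport here are hypothetical):

```c
#include <assert.h>

/* Illustrative sketch of namespace mangling: the software stack's
 * entry points get an MGL prefix so they don't collide with the
 * vendor libGL's symbols.  These defines stand in for the build-time
 * renaming of the whole API. */
#define glClear    MGLClear
#define glViewport MGLViewport

static int mgl_clear_calls;
static int mgl_viewport_calls;

/* "Software Mesa" implementations, living under the mangled names. */
void MGLClear(unsigned int mask)
{
    (void)mask;
    mgl_clear_calls++;
}

void MGLViewport(int x, int y, int w, int h)
{
    (void)x; (void)y; (void)w; (void)h;
    mgl_viewport_calls++;
}
```

Code compiled against the mangled headers still calls glClear by name but resolves to MGLClear, leaving the real glClear symbol free for the hardware stack.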

This is reasonable given that it's GLcore.  DRI drivers are better for this; 
they have their own dispatch table built in, so you don't have to worry about 
namespace mangling.  I think all you'd have to do to make DRI drivers work is 
fill in glRenderTable{,EXT} from the driver's dispatch table.
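Filling the server's table from the driver's dispatch table could look roughly like the following sketch; both structs and all names are hypothetical stand-ins, not the real glRenderTable layout:

```c
#include <assert.h>

/* Hypothetical sketch: the server dispatches GL through one table of
 * function pointers, so making a DRI driver work is just copying the
 * driver's dispatch slots into the server's table.  Neither struct
 * matches the real glRenderTable layout. */

typedef void (*clear_fn)(unsigned int mask);

struct dispatch_table {
    clear_fn Clear;
    /* ... one slot per GL entry point ... */
};

static int hw_clear_calls;
static void hw_clear(unsigned int mask) { (void)mask; hw_clear_calls++; }

/* The driver exposes its own dispatch table... */
static const struct dispatch_table driver_table = { hw_clear };

/* ...and the server's render table is filled slot-for-slot from it. */
static struct dispatch_table render_table;

static void fill_render_table(struct dispatch_table *dst,
                              const struct dispatch_table *src)
{
    *dst = *src;
}
```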

> 2. Made all GL calls in server side GLX go through another dispatch
> table. Allows me to switch between software mesa and *real* GL stack as
> I like. This is also necessary as extension function pointers might be
> different between contexts and we need to wrap some GL calls. e.g.
> glViewport needs an offset.

Any function pointer you can query from glXGetProcAddress is explicitly 
context-independent.  From the spec:

#    * Are function pointers context-independent?
#        Yes. The pointer to an extension function can be used with any
#        context which supports the extension.

I'm not quite clear yet on how you decide whether to use the software or 
hardware paths.  Is it per context?  Per client?  Per drawable?

I think you'll have major issues trying to use two rendering engines at once.
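Whatever the switching granularity turns out to be, the wrapping itself (e.g. a glViewport that applies a drawable offset, as mentioned above) is easy to picture. A sketch, with real_viewport standing in for the underlying GL entry point and xgl_drawable as an illustrative type, not an actual Xgl one:

```c
#include <assert.h>

/* Sketch of an offset-applying glViewport wrapper: the server routes
 * the call through its dispatch table, adding the drawable's position
 * before forwarding to the real GL stack. */

static int last[4];   /* records what the "real" GL last received */

static void real_viewport(int x, int y, int w, int h)
{
    last[0] = x; last[1] = y; last[2] = w; last[3] = h;
}

struct xgl_drawable {
    int x_off, y_off;   /* drawable position in the backing surface */
};

static void wrapped_viewport(const struct xgl_drawable *d,
                             int x, int y, int w, int h)
{
    /* Translate into the drawable's coordinate space, then forward. */
    real_viewport(x + d->x_off, y + d->y_off, w, h);
}
```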

> Both these changes are available as patches from here:
> xserver-mesa.diff also includes some changes required to get xserver
> compiling with mesa CVS and a few lines to support ARGB visuals.
> xserver-glx.diff modifies files that seem to be auto generated but I
> didn't find the source to that so I just made the changes directly.

Most of the server-side GLX code was (at one point) autogenerated from some 
scripts at SGI.  We don't have those scripts though.

> I had to add an 8A8R8G8B pixel format to XMesa for ARGB visuals to work
> properly. This patch should do that:

This would actually be really cool to land on its own.

> The following is not working:
> - Context Sharing (all contexts are currently shared)
> - Drawing to front buffer
> - CopyPixels
> All contexts need to be shared inside Xgl so we're going to have to keep
> hash tables in Xgl to deal with GLX contexts.

Is this an artifact of using glitz, or is this something we'd see with other 
backends too?
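Either way, the bookkeeping described above (mapping each GLX context XID to its own Xgl-side shadow state, since everything shares one real context) could be as simple as a chained hash table. A sketch; none of these names come from the actual patches:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical sketch of the per-XID bookkeeping: with all Xgl
 * contexts sharing one real GL context, the server must track each
 * GLX context's own state separately, keyed by XID. */

typedef unsigned long XID;

struct glx_ctx {
    XID xid;
    int viewport[4];      /* shadowed per-context state */
    struct glx_ctx *next; /* chain for bucket collisions */
};

#define NBUCKETS 64

static struct glx_ctx *buckets[NBUCKETS];

static struct glx_ctx *ctx_lookup(XID xid)
{
    struct glx_ctx *c = buckets[xid % NBUCKETS];
    while (c && c->xid != xid)
        c = c->next;
    return c;
}

static struct glx_ctx *ctx_insert(XID xid)
{
    struct glx_ctx *c = calloc(1, sizeof(*c));
    c->xid = xid;
    c->next = buckets[xid % NBUCKETS];
    buckets[xid % NBUCKETS] = c;
    return c;
}
```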

> This is just what I believe is the best way to go, it's not in any way
> set in stone, it's all open for discussion. Comments and suggestions are
> of course much appreciated.

Sounds pretty sane, I'll think on it a bit.  Very nice work!

- ajax

More information about the xorg mailing list