GLX and Xgl

Owen Taylor otaylor at redhat.com
Tue Apr 12 11:59:07 PDT 2005


On Tue, 2005-04-12 at 17:49 +0200, Matthias Hopf wrote:

> So how can we - in the long term - make direct rendering with Xgl
> possible? So far I think we basically need
> 
> - EXT_framebuffer_object for rendering X requests into a texture in
>   the server
> - some extension yet to be specified, which allows sharing of textures
>   between processes (Xgl and application)

I think it is important to note that this isn't exactly arbitrary
peer-to-peer sharing of textures; the setup of the shared "textures"
(really render targets) is always done by the server, and the server
is special in the GLX protocol. In the simplest model, the differences
from how the DRI currently works are:

 - There is a separate front-buffer and back-buffer per window
   that need to be communicated
 - Instead of clip list changes, we get changes to the address
   (and size) of the front and back buffers (sketched below).
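
To make that concrete: the per-window state the server has to push to
a direct-rendering client is fairly small. Something like the
following, where every name is invented purely for illustration
(nothing like this struct exists in the DRI today):

    /* Hypothetical description of a redirected window's buffers, as
     * the server would hand it to a direct-rendering client.  This
     * only sketches the information that must be communicated. */
    typedef struct {
        unsigned int  fbconfig_id;   /* pixel format of the buffers  */
        unsigned long front_offset;  /* video memory offset, front   */
        unsigned long back_offset;   /* video memory offset, back    */
        unsigned int  pitch;         /* bytes per scanline           */
        unsigned int  width, height; /* current size of the window   */
        unsigned int  stamp;         /* bumped by the server whenever
                                        any of the above changes     */
    } XglWindowBuffers;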

Really, in the DRI/GLX world, this doesn't seem very intimidating to
me (except for the details, and fixing all the drivers).

In the DRI/EGL/Xgl world, it clearly is a fairly different problem,
but still doesn't seem essentially different from the problem of
non-redirected direct rendering. The server has to tell the clients
where to render in memory, and there must be locking so that the
client doesn't render to memory that is being used for something
else.
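
The locking can follow the pattern the DRI already uses for clip
lists: take the lock, compare a stamp, refetch state if it changed.
A rough sketch of the client-side path, reusing the hypothetical
XglWindowBuffers above, with equally made-up entry points:

    /* Sketch only: xgl_lock/xgl_unlock/xgl_fetch_buffers are invented
     * stand-ins for whatever interface ends up existing. */
    extern void xgl_lock(void);
    extern void xgl_unlock(void);
    extern void xgl_fetch_buffers(XglWindowBuffers *buf);
    extern volatile unsigned int *shared_stamp;  /* SAREA-style counter,
                                                    bumped by the server */

    void render_frame(XglWindowBuffers *buf)
    {
        xgl_lock();                        /* buffers can't move while held */
        if (buf->stamp != *shared_stamp) { /* server moved/resized them?    */
            xgl_fetch_buffers(buf);        /* re-read offsets, size, pitch  */
            buf->stamp = *shared_stamp;
        }
        /* ... emit hardware rendering into buf->back_offset ... */
        xgl_unlock();
    }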

One obvious hard problem is framebuffer memory exhaustion ... nothing
prevents an application from just creating more and more GL windows,
and that would require more and more video memory given independent
back buffers. You might need a framebuffer ejection mechanism much like
the current texture ejection mechanism, except that it's more
complex ... restoring the framebuffer requires cooperation between the
ejector and the ejectee.
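
The reason it's more complex: an ejected texture can simply be
re-uploaded from the client's copy, but a back buffer has no backing
copy anywhere, so ejection becomes a two-party protocol. Roughly,
with invented names again:

    /* Sketch of a two-sided ejection interface; nothing like this
     * exists today.  With textures the evict side is trivial because
     * the data can be re-uploaded from the client's copy. */
    typedef struct {
        /* The memory manager ("ejector") wants the video memory back;
         * the owner ("ejectee") must copy the contents to system
         * memory, or mark them lost. */
        void (*evict)(void *owner);
        /* The buffer got video memory again at new_offset; the owner
         * copies the saved contents back in, or repaints. */
        void (*restore)(void *owner, unsigned long new_offset);
    } XglBackbufferOps;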

> - ARB_render_texture to create a context on the application side that
>   renders into a texture

To the client it must look precisely as if it were rendering to a
window. No client-exposed extension can be involved.
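
That is, an ordinary client keeps doing exactly this (plain existing
GLX, nothing new; draw_scene() stands in for the application's
rendering):

    #include <GL/glx.h>

    extern void draw_scene(void);   /* the application's rendering */

    /* Under redirected direct rendering this exact code must keep
     * working unchanged; libGL quietly points the "window" at
     * whatever offscreen memory the server handed out. */
    void present(Display *dpy, XVisualInfo *visinfo, Window win)
    {
        GLXContext ctx = glXCreateContext(dpy, visinfo, NULL, True);
        glXMakeCurrent(dpy, win, ctx);  /* client thinks: a window */
        draw_scene();                   /* really hits the redirected
                                           back buffer */
        glXSwapBuffers(dpy, win);       /* server takes it from there */
    }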

> Then there's still the issue that the libGL on the application side
> would have to create a context that renders into a texture, without the
> application actually noticing it. This is currently what scares me most.
> 
> One alternative would be another extension that would allow the
> transport of one context to another process, so the context for
> rendering into a texture could be created on the Xgl side, and the
> context could then be transferred to the application side. This sounds
> scary as well. I doubt that an extension for shared contexts would work
> without patching the application-side libGL, either.

Hmm, sounds like the hard way to do things. I'd think a GL context is
a much more complex object than "there is a framebuffer at this
address in video memory with this fbconfig".

> Any other ideas? What else did I forget?
> 
> For the non-redirected case, an application could render into its own
> OpenGL context, but this would make occlusion, window stacking, etc.
> a nightmare.

The server actually already has infrastructure to redirect any
arbitrary window and composite it automatically without an explicit
compositing manager... this is used for handling windows of
mismatched depth. Efficiency probably prohibits doing this for GL
windows, however.

Regards,
					Owen
