GLX and Xgl

Matthias Hopf mhopf at suse.de
Wed Apr 13 07:38:55 PDT 2005


On Apr 12, 05 14:59:07 -0400, Owen Taylor wrote:
> On Tue, 2005-04-12 at 17:49 +0200, Matthias Hopf wrote:
> > So how can we - in the long term - make direct rendering with Xgl
> > possible? So far I think we basically need
> > 
> > - EXT_framebuffer_object for rendering X requests into a texture in
> >   the server
> > - some extension yet to be specified, which allows sharing of textures
> >   between processes (Xgl and application)
> 
> I think it is important to note that this isn't exactly arbitrary
> peer-to-peer sharing of textures; the setup of the shared "textures"
> (really render targets) is always part of the server, and the server 

Yes, I intentionally didn't specify the concrete details; a rough sketch
of the EXT_framebuffer_object part follows below.

> is special in the  GLX protocol. In the simplest model, the differences
> from how the DRI works is:

I've never done much work with DRI, so I'm not influenced that much by
that direction (I used to work a lot with SGIs).
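
To make the EXT_framebuffer_object item quoted above a bit more
concrete, this is roughly what I have in mind for the server side.
It is only a sketch, not Xgl code - the function name and the way the
window size is passed in are made up:

#define GL_GLEXT_PROTOTYPES 1
#include <GL/gl.h>
#include <GL/glext.h>

/* Create a texture of the given size and attach it to an FBO, so that
 * subsequent rendering lands in the texture instead of the real
 * framebuffer.  Returns the texture id, or 0 if the FBO is incomplete. */
static GLuint bind_window_texture(GLsizei width, GLsizei height)
{
    GLuint tex, fbo;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;                 /* fall back to pbuffers or software */

    /* X rendering for the redirected window goes here; the compositing
     * pass later maps `tex' onto the screen. */
    return tex;
}

The hard part is of course not this, but getting the application's
direct rendering context to write into exactly that texture - which is
where the sharing extension comes in.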

> In the DRI/Egl/Xgl world, it clearly is a fairly different problem,
> but still doesn't seem essentially different from the problem of
> non-redirected direct rendering. The server has to tell the clients
> where to render in memory, and there must be locking so that the
> client doesn't render to memory that is being used for something
> else.

I guess I have to dig a bit into the GLX code and read the specs more
thoroughly. Right now there is no notion in OpenGL of a memory pointer to
render to. So we might need an extension to obtain these low-level
rendering parameters from the OpenGL layer in order to implement the GLX
rendering context negotiation / redirection completely in user space
(which we have to, because we no longer have access to low-level routines
the way regular X servers do).
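
For reference, this is everything the application side sees today with
plain GLX 1.2 - standard calls only, nothing Xgl specific.  The whole
negotiation happens behind these calls, and no memory pointer is ever
exposed to the client:

#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vis;
    GLXContext ctx;

    if (!dpy)
        return 1;
    vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vis)
        return 1;

    /* Ask for a direct context; libGL silently falls back to indirect
     * (protocol) rendering if direct access is not available. */
    ctx = glXCreateContext(dpy, vis, NULL, True);
    if (!ctx)
        return 1;

    printf("direct rendering: %s\n",
           glXIsDirect(dpy, ctx) ? "yes" : "no");

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}

So whatever redirects the window contents has to happen behind
glXCreateContext / glXMakeCurrent, without the client noticing.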

> One obvious hard problem is framebuffer memory exhaustion ... nothing
> prevents an application from just creating more and more GL windows,
> and that would require more and more video memory given independent
> backbuffers. You might need a framebuffer ejection mechanism much like
> the current texture ejection mechanism, except that it's more
> complex ... restoring the framebuffer requires cooperation between the
> ejector and the ejectee.

Agreed.
AFAIR 3Dlabs had a video memory MMU on their chips which could deal with
this problem easily, but AFAIK neither NVidia nor ATI have anything like
this or even plan to implement it.
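
Just to write down how I picture such an ejection mechanism - purely
illustrative, all names (vidmem_alloc, request_repaint, ...) are made
up.  The allocation side is easy; the restore side is where the
ejector / ejectee cooperation comes in:

#include <stddef.h>

struct backbuffer {
    struct backbuffer *lru_next;   /* list ordered most -> least recent */
    void              *vidmem;     /* NULL once ejected */
    size_t             size;
    void             (*request_repaint)(struct backbuffer *); /* ejectee */
};

/* Provided by a (hypothetical) video memory manager */
extern void *vidmem_alloc(size_t size);      /* NULL when memory is full */
extern void  vidmem_free(void *block);
extern struct backbuffer *lru_list;

/* Find the least recently used backbuffer that still owns video memory */
static struct backbuffer *pick_victim(void)
{
    struct backbuffer *b, *victim = NULL;
    for (b = lru_list; b != NULL; b = b->lru_next)
        if (b->vidmem)
            victim = b;            /* last hit == least recently used */
    return victim;
}

/* Allocate backbuffer storage, ejecting older backbuffers when needed */
static void *alloc_backbuffer(size_t size)
{
    void *mem;
    while ((mem = vidmem_alloc(size)) == NULL) {
        struct backbuffer *victim = pick_victim();
        if (!victim)
            return NULL;           /* really out of video memory */
        vidmem_free(victim->vidmem);
        victim->vidmem = NULL;
        /* Restoring is the hard part: only the owner can repaint the
         * contents, the server cannot reconstruct them on its own. */
        victim->request_repaint(victim);
    }
    return mem;
}

I guess the repaint request would have to end up looking like an
Expose event to a GL client that knows nothing about ejection, which
is exactly the cooperation problem you describe.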

> > - ARB_render_texture to create a context on the application side that
> >   renders into a texture
> 
> To the client it must look precisely as if they are rendering to a
> window. No client-exposed extension can be involved.

That should be the plan.
I wanted to read the GLX specs more thoroughly for the bytestream protocol
that initiates direct rendering; however, I couldn't find anything related
to that. Do you know whether this part is vendor specific?
I guess I have to read the Mesa sources.

> > One alternative would be another extension that would allow the
> > transport of one context to another process, so the context for
> > rendering into a texture could be created on the Xgl side, and the
> > context could then be transferred to the application side. This sounds
> > scary as well. I doubt that an extension for shared contexts would work
> > without patching the application side libGL, either.
> 
> Hmm, sounds like the hard way to do things. I'd think a GLcontext is a
> much more complex object than "there is a framebuffer at this
> address in video memory with this fbconfig"

Yes, it is. That's what makes me quite uncomfortable.

CU

Matthias

-- 
Matthias Hopf <mhopf at suse.de>       __        __   __
Maxfeldstr. 5 / 90409 Nuernberg    (_   | |  (_   |__         mat at mshopf.de
Phone +49-911-74053-715            __)  |_|  __)  |__  labs   www.mshopf.de


