[Mesa3d-dev] Re: GLX and Xgl

David Reveman davidr at novell.com
Tue Apr 12 01:03:39 PDT 2005


On Mon, 2005-04-11 at 22:06 -0400, Adam Jackson wrote: 
> On Monday 11 April 2005 20:56, David Reveman wrote:
> > On Mon, 2005-04-11 at 13:18 -0400, Adam Jackson wrote:
> > > I'm not quite clear yet on how you decide whether to use the software or
> > > hardware paths.  Is it per context?  Per client?  Per drawable?
> >
> > I think it would have to be per context.
> >
> > The temporary solution I'm using right now is to make this decision when
> > the client creates its first buffer: if we can accelerate drawing
> > to the buffer a native context is used, otherwise a software context is
> > used. This is not a solid solution and it can't be used in the future.
> >
> > The problem is that with the Composite extension present a drawable can at
> > any time be redirected to a pixmap. So what do we do if the native GL
> > stack can't handle this? With framebuffer objects available we can
> > probably always allocate another texture and redirect drawing to it, the
> > native GL stack will handle software fall-back if necessary. What do we
> > do when framebuffer objects are not available?
> >
> > 1. Don't support GLX at all? I think this would be a major drawback.
> >
> > 2. Use software GL. And possibly use native GL for the root window, as it
> > can't be redirected and it would make a compositing manager run
> > accelerated. This is what I hoped we could get working.
> >
> > 3. Move the native context to software when a window is redirected. Seems
> > like a really bad idea to me. I don't think we could ever get this working
> > properly.
> 
> I don't think we should be worrying too much about the case of not having 
> fbo's available.  I'd rather just have those drivers not be strictly 
> conformant until they get fbo support added.  Yes, this means DRI needs a lot 
> of work.  I'm willing to accept #2 for this case; #3 would be almost as hard 
> to get right as just adding fbo's.
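
(Just to make the fbo path concrete, this is roughly what I had in mind
when a window gets redirected: back the drawable with a texture and point
rendering at it. A rough sketch using EXT_framebuffer_object; the function
name is made up and error handling is left out.)

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: when a window is redirected, allocate a texture of the window's
 * size, attach it to an fbo and bind the fbo so that subsequent drawing
 * for this drawable lands in the texture. */
static GLuint
redirect_drawable_to_texture (int width, int height)
{
    GLuint texture, fbo;

    glGenTextures (1, &texture);
    glBindTexture (GL_TEXTURE_2D, texture);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                  GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenFramebuffersEXT (1, &fbo);
    glBindFramebufferEXT (GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT (GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                               GL_TEXTURE_2D, texture, 0);

    if (glCheckFramebufferStatusEXT (GL_FRAMEBUFFER_EXT) !=
        GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0; /* the driver can't do it, fall back */

    /* all drawing now goes to the texture until the fbo is unbound */
    return texture;
}
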
> 
> > > I think you'll have major issues trying to use two rendering engines at
> > > once.
> >
> > That's bad, as I think that not getting this working will mean that we
> > have to go with option 1 from above.
> >
> > I've had no trouble with using both GLcore and nvidia's GL stack in Xgl
> > so far... I think it could be worth investigating the possibilities for
> > getting this working with all GL stacks. Isn't there anyone with some
> > experience in this? It seems like something someone would have tried
> > before...
> 
> It'll work fine as long as they never need to share state.  But if, as you said, 
> all contexts are shared, and you try to use a read buffer from one engine and 
> a draw buffer from another, you lose.  Your only option there is to push the 
> state to both engines, all the time.

Oh, a software context and a native context never need to be shared.
GLcore works just fine separately. All regular X drawing requests are
done using the native stack; if software fall-back is required, that's
always done by fb. Only the native contexts need to be shared. The
exception is when a software context wants to bind a drawable to a
texture ID, but that would be very rare and we can probably work around
it with some special hook in GLcore if we even want to support it.
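
(To make that concrete: whenever Xgl creates a native context for a GLX
client it can simply pass the core context as the share list, so the
textures backing drawables are visible from both contexts. A rough sketch;
the function name is made up.)

#include <GL/glx.h>

static GLXContext
xgl_create_native_context (Display     *display,
                           XVisualInfo *visual,
                           GLXContext   core_context)
{
    /* passing the core context as share_list makes texture objects and
       display lists common to both contexts */
    return glXCreateContext (display, visual, core_context, True);
}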

So I should be able to get things working the way I planned then, right?
Sorry for the confusion. As soon as I get my code into CVS I think it'll
be much easier to see how this is supposed to work.

> 
> Whereas if you only have one GL engine that's capable enough for what you want 
> to do, then all you're doing is fixing the libglx interface.
> 
> > If we can get this working, GLX visuals that always use software could
> > also be available. I think that can be useful as well.
> 
> That's a neat idea.  Do we need to add some fields to the fbconfigs to expose 
> this, maybe a new token for GLX_CAVEAT?

Yes, something like that would be useful. We don't want clients to do:
glXCreateContext, XCreateWindow, glXMakeCurrent, glGetString just to
check if they're using a software or native renderer.
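
fbconfigs already have GLX_CONFIG_CAVEAT, so a new caveat value would let a
client find this out with a single attribute query up front. Something like
the sketch below; I'm testing against GLX_SLOW_CONFIG here, a new "software"
token would be checked the same way.

#include <GL/glx.h>

static Bool
fbconfig_is_software (Display *display, GLXFBConfig config)
{
    int caveat = GLX_NONE;

    glXGetFBConfigAttrib (display, config, GLX_CONFIG_CAVEAT, &caveat);

    /* GLX_SLOW_CONFIG (or a new token) would mark the software visuals */
    return caveat == GLX_SLOW_CONFIG;
}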

> 
> > > > All contexts need to be shared inside Xgl so we're going to have to
> > > > keep hash tables in Xgl to deal with GLX contexts.
> > >
> > > Is this an artifact of using glitz, or is this something we'd see with
> > > other backends too?
> >
> > As long as we're using textures for drawables all contexts will have to
> > be shared. We need to be able to render to a drawable using both the
> > client-specific context and Xgl's core drawing context used for
> > regular X11 drawing requests.
> 
> This seems to be ignoring the idea that there might be direct-rendering 
> clients involved.  If every GL application talking to this X server is 
> indirect, fine; that even makes sense when Xgl is nested like it is now.
> 
> There's an alternative approach here.  What you said is true: given GL stacks 
> as they exist now, contexts have to be shared to share drawables.  It may be 
> possible to write an extension such that GL doesn't have this limitation.  If 
> we're planning to support direct-rendering clients I think we'll have to.

I'm aware of this. If such an extension allows us to not use shared
contexts that's just fine; we can easily switch to not using shared
contexts when this is possible, as sharing is only used for texture
access right now. Back to the original question: do we need to keep hash
tables in Xgl for textures, display lists...? We probably need this for
textures anyway to deal with GLX_ARB_render_texture-like functionality.
To be able to use scissor boxes for the pixel ownership test, we might
also need some additional info from display lists, so we'll have to wrap
those... so I think we need hash tables in Xgl anyway. But I'll keep in
mind that contexts will not necessarily be shared in the future.
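
(For the scissor box part, the idea is just to clip the client's drawing to
the visible rectangles of its window and re-issue the drawing once per
rectangle. Roughly:)

#include <GL/gl.h>

/* Sketch: emulate the pixel ownership test for one visible rectangle of
 * the window. Xgl would loop over the window's clip list and replay the
 * client's drawing for each rectangle. */
static void
draw_clipped_to_rect (int x, int y, int width, int height)
{
    glEnable (GL_SCISSOR_TEST);
    glScissor (x, y, width, height); /* GL origin is the lower left */

    /* ... replay the client's drawing here ... */

    glDisable (GL_SCISSOR_TEST);
}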

-David



