Xgl page - http://www.freedesktop.org/Software/Xgl

Michel Dänzer michel at daenzer.net
Tue Mar 8 13:07:35 PST 2005


On Tue, 2005-03-01 at 00:39 +0100, David Reveman wrote:
> On Sun, 2005-02-27 at 22:45 -0500, Michel Dänzer wrote: 
> > On Mon, 2005-02-28 at 03:52 +0100, David Reveman wrote:
> > > On Mon, 2005-02-28 at 03:21 +0100, Christian Parpart wrote:
> > > > How's the state for nvidia users? Can they use Xgl somehow?
> > > 
> > > The only thing that exists right now is Xglx, which is a server running
on top of an already existing X server with GLX support. x86 nvidia users
> > > seem to be the ones that can run this best at the moment.
> > 
> > FWIW, it works pretty well for me with fglrx. You have me curious, is
> > there anything that works with nvidia but not fglrx?
> 
> Pbuffers, so no accelerated offscreen rendering with fglrx. Xglx runs
> pretty well anyway as the back buffer is used for offscreen drawing and
> that's enough for the compositing manager to run nicely, but normal
> windows and pixmaps are all rendered in software.
> 
> The fglrx driver does have some pbuffer support in it, but it contains
> memory leaks and doesn't seem to be very stable. I guess that's why the
> driver doesn't actually report pbuffer support. The memory leaks aren't
> that big of a problem for Xglx as all used pbuffer memory is allocated
> when starting the server and Xgl is then doing the memory management on
> its own. If you want to try the pbuffer support provided by the fglrx
> driver, you should be able to get glitz to detect pbuffer support by
> forcing glx_version to >=1.3 and GLX extensions 'fbconfig' and 'pbuffer'
> as detected in glx/glitz_glx_info.c. 

Ah, I was wondering why it wasn't using pbuffers... I'll look into why
it doesn't advertise these extensions (maybe glitz could still try to
use an extension if GetProcAddress succeeds for all needed functions?),
but I think there might be another problem in glitz as well: For direct
rendered contexts, shouldn't it use glXGetClientString(...,
GLX_EXTENSIONS) instead of glXQueryExtensionsString() (and analogously
for the GLX version)?


> Things like the number of texture indirections supported by fragment
> programs seem to be higher with nvidia drivers, but that's probably more
> of a hardware issue than a driver thing. However, the number of texture
> indirections is important for convolution-filter-like fragment programs.
> With nvidia hardware I've been able to run 11x11 convolution filters but
> 3x3 is the best I've been able to do with ATI hardware. But you should
> know that I haven't done any tests on geforce 6x00 or ATI Xx00 cards.

Interesting, thanks.


> With some versions of the ATI drivers, I found it would segfault when
> compiling my fragment programs. 

Haven't seen that, but I've only been using fglrx since version 3.14.


-- 
Earthling Michel Dänzer      |     Debian (powerpc), X and DRI developer
Libre software enthusiast    |   http://svcs.affero.net/rm.php?r=daenzer

More information about the xorg mailing list