Xgl server

David Reveman c99drn at cs.umu.se
Sat Dec 4 05:28:05 PST 2004


On Fri, 2004-12-03 at 19:37 +0100, Matthias Hopf wrote: 
> Hi,
> 
> haven't had enough time to look into this until right now...
> 
> On Nov 05, 04 21:54:55 +0100, David Reveman wrote:
> > On Fri, 2004-11-05 at 11:11 -0700, Brian Paul wrote:
> > > David Reveman wrote:
> > > > I've been doing some work to get an X server running on top of
> > > > OpenGL/glitz and I've got something that works pretty well now. I stuck
> > > > the code into the "xserver" tree and './configure --enable-xglserver'
> > > > should compile the common xgl code along with an Xglx server that can
> > > > run on top of an existing X server with GLX.
> 
> Thanks a lot!
> I guess it was easier to integrate this into KAA compared to XAA, and
> thus you chose kdrive? Or are there any other reasons?

Xgl is a new acceleration architecture, based on neither kdrive/KAA nor
XAA. However, the code currently lives in the same tree as the
kdrive-based X servers.

> 
> > > > With a good OpenGL driver and some luck, you should be able to run most
> > > > applications on it. However, you should know that the Xglx server is
> > > > really simple, there's no real cursor and compared to the Xnest server,
> > > > Xglx must always run on top of all other windows as the back buffer is
> > > > used for pixmap memory.
> 
> What about using PBuffers or the (yet-to-be-specified) RenderToTexture?
> Any principle problems, or just the usual lack of time? You mentioned it
> below...

Pbuffers are used when available. Current render-to-texture frameworks
are not working very well, but it seems like super buffers will fit our
needs perfectly.

> 
> > > > A lot of operations are accelerated, some operations can be accelerated
> > > > better, some operations are not accelerated but can be accelerated and
> > > > some operations can never be accelerated.
> 
> I'm always thinking of wide lines, which are exactly specified in X, but
> not in OpenGL. Until we have a change in semantics (that is, we make
> X12) this type of primitive will always have to be rendered in software
> :-(

Do we need a wide line primitive? Aren't the RENDER extension's
trapezoids enough?

> 
> > > > * Get the fb layer to operate on pixel data with scan-line order
> > > > bottom-to-top. This will make software fall-backs A LOT faster. As
> > > > keithp told me, all we really need to do is to use negative strides and
> > > > I'll give that a try within a few days, hopefully that will work just
> > > > fine.
> > > 
> > > Why do you need bottom-to-top raster order?  I assume it has something 
> > > to do with OpenGL's conventions.
> > 
> > I'm glad you asked. Normally when using OpenGL you would just load all
> > texture data as it's stored by application and if this happens to be in
> > top-to-bottom raster order you would just take care of that when
> > assigning texture coordinates or when setting the texture matrix.
> 
> And how about setting up a frustum down the positive z axis, or with a
> left handed coordinate system? Shouldn't be too problematic, done this
> several times for rendering 2D data in OpenGL.
> 
> Ok, thinking about it, as you need to do
> ReadPixels/DrawPixels/CopyTexImage2D a lot, this might be problematic.
> One can set up PixelStore parameters so that effectively these calls
> just do a plain copy even with top-to-bottom raster order, but I have to
> admit I haven't tried negative GL_PACK_ROW_LENGTH yet. I don't know
> whether these cases are optimized in many OpenGL implementations at
> all...

We could use PixelZoom, but it's too slow. Negative GL_PACK_ROW_LENGTH
doesn't work: according to the OpenGL spec, GL_PACK_ROW_LENGTH must be
0 or greater.
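The negative-stride trick keithp mentioned can be sketched like this
(illustration only, not the actual fb-layer code, which works on FbBits
pointers):

```c
#include <assert.h>

/* Code that walks an image with "row += stride" can be made to see the
 * rows in the opposite scan-line order by starting at the last row and
 * negating the stride. No pixels are copied or moved. */
static unsigned char *flip_scanline_order(unsigned char *pixels,
                                          int height, int stride,
                                          int *stride_out)
{
    *stride_out = -stride;
    return pixels + (long)(height - 1) * stride;
}

/* Returns 1 when a 3-row image is visited bottom-to-top. */
static int flip_selftest(void)
{
    unsigned char img[3] = { 10, 20, 30 };  /* 3 rows, stride 1 byte */
    int stride;
    unsigned char *row = flip_scanline_order(img, 3, 1, &stride);

    return row[0] == 30 && row[stride] == 20 && row[2 * stride] == 10;
}
```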

> 
> > With glitz, in general, and especially with Xgl, a lot of copying from
> > framebuffer/pbuffers to textures is done. If textures contain pixel data
> > stored in top-to-bottom order this cannot (as far as I know) be done
> > efficiently and it's a lot more important that this framebuffer/pbuffer
> > to texture copy is efficient than the user memory to texture copy.
> 
> Are textures stored inside the gfx card typically top-to-bottom or
> bottom-to-top? That is, is (0,0) in texture coordinates corresponding to
> the first line in memory or to the last line?
> 
> Actually, we should align screen layout to texture layout, as we cannot
> change the latter (at least not on the OpenGL level).
> 
> > This is why glitz stores texture data in bottom-to-top raster order. If
> > someone knows a better way around this problem please let me know.
> 
> :] Don't know whether these thoughts help, but I'll try...
> 
> > > > * Replace or improve the current xgl offscreen memory manager, it wastes
> > > > huge amounts of memory right now.
> > > > * Improve xrender text performance.
> 
> I always wondered why the render extension does effectively a lot of
> things in software (composing several letters together) and sending
> these down to the graphics hardware each and every time instead of
> uploading fonts (or better: part of fonts) into a texture and use the
> graphics hardware for accessing these data?
> 
> With vertex/pixel shader and point sprites we could come down to as much
> as 6 bytes per character to be sent down to the gfx card (2 bytes for x,
> y and character code) in the average case and 10-12 bytes for the
> general case (4 bytes for x, y, 2 bytes character tex coords x, y,
> 2 or 4 bytes character size). Plus some code for texture switching in
> the case of really nasty fonts with thousands of characters that cannot
> be fit into a single texture.
> 
> If there are no pixel shader + point sprites, the number of vertices
> increases (to render quads), but this should still be faster.
> 

Yes, I'd like to stick all glyphs of an anti-aliased font into a single
texture for maximum speed. For bitmap fonts, I'd like to try converting
each glyph into a region and sticking that geometry into a static VBO,
one VBO per font.
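For the single-texture case, the coordinate math is simple. A minimal
sketch (hypothetical helper, not the Xgl/glitz API) mapping the i-th
glyph of a fixed-grid atlas texture to [0,1] texture coordinates:

```c
#include <assert.h>

typedef struct { float u0, v0, u1, v1; } tex_rect_t;

/* Map glyph number 'index' in a cols x rows grid atlas to texture
 * coordinates. A real font would need per-glyph metrics and a packing
 * table rather than a uniform grid. */
static tex_rect_t glyph_tex_rect(int index, int cols, int rows)
{
    tex_rect_t r;
    int cx = index % cols;
    int cy = index / cols;

    r.u0 = (float)cx / cols;
    r.v0 = (float)cy / rows;
    r.u1 = (float)(cx + 1) / cols;
    r.v1 = (float)(cy + 1) / rows;
    return r;
}

/* Returns 1 when glyph 5 of a 4x4 atlas lands in the second cell of
 * the second row. */
static int glyph_tex_rect_selftest(void)
{
    tex_rect_t r = glyph_tex_rect(5, 4, 4);
    return r.u0 == 0.25f && r.v0 == 0.25f &&
           r.u1 == 0.5f  && r.v1 == 0.5f;
}
```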

> 
> As all of this would be heavily OpenGL based, I would only start to try
> something like this if we agree that this could indeed boost performance
> even more. It certainly would require changes to the acceleration
> architecture.
> 
> > > > * Accelerate bitmap text. Performance of applications that use bitmap
> > > > text is terrible right now. :(
> 
> Right, right. This could benefit from pixel shader even more.
> 
> > > > * Hook up glitz's convolution filters. This should be really easy, all
> > > > we need is a software fall-back.
> 
> Hm. No use to do anything with the Imaging Pipeline, this has to be done
> with pixel shader. Been there, done that, only accelerated on sgi, and
> my personal guess is the Imaging Pipeline will never be accelerated on
> PC style hardware.

glitz is already able to do convolution filters very efficiently using
pixel shaders. It shouldn't take more than a few lines of code added to
Xgl to enable this. But as not all hardware supports pixel shaders, we
need a software fall-back in the fb layer before we can enable it.
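The software fall-back would amount to something like this sketch
(single-channel floats for clarity; real fb code would handle RENDER's
pixel formats and edge modes):

```c
#include <assert.h>

/* Apply a k x k convolution kernel to a w x h single-channel image,
 * leaving a k/2 border untouched. */
static void convolve(const float *src, float *dst, int w, int h,
                     const float *kernel, int k)
{
    int half = k / 2;
    int x, y, i, j;

    for (y = half; y < h - half; y++)
        for (x = half; x < w - half; x++) {
            float sum = 0.0f;

            for (j = 0; j < k; j++)
                for (i = 0; i < k; i++)
                    sum += kernel[j * k + i] *
                           src[(y + j - half) * w + (x + i - half)];
            dst[y * w + x] = sum;
        }
}

/* Returns 1 when a 3x3 all-ones image convolved with an all-ones 3x3
 * kernel yields 9 at the center pixel. */
static int convolve_selftest(void)
{
    float src[9], dst[9] = { 0 }, kernel[9];
    int i;

    for (i = 0; i < 9; i++) {
        src[i] = 1.0f;
        kernel[i] = 1.0f;
    }
    convolve(src, dst, 3, 3, kernel, 3);
    return dst[4] == 9.0f;
}
```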

glitz will never handle software fall-backs itself; it only reports
whether an operation succeeded, and it's then up to the layer above
glitz to handle fall-backs.
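That contract looks roughly like this (names are hypothetical, not the
real glitz API):

```c
#include <assert.h>

typedef enum { STATUS_SUCCESS, STATUS_NOT_SUPPORTED } status_t;

/* Stand-in for glitz: succeed only when the hardware has fragment
 * programs available. */
static status_t glitz_try_composite(int hw_has_fragment_programs)
{
    return hw_has_fragment_programs ? STATUS_SUCCESS
                                    : STATUS_NOT_SUPPORTED;
}

/* The layer above glitz: returns 1 for the accelerated path, 0 when
 * it has to take the fb software path instead. */
static int xgl_composite(int hw_has_fragment_programs)
{
    if (glitz_try_composite(hw_has_fragment_programs) == STATUS_SUCCESS)
        return 1;
    /* software fall-back in the fb layer would run here */
    return 0;
}
```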

> 
> > > > * Add Xagl and Xwgl servers.
> 
> :confused: Even google couldn't answer me...
> You mean Xgl running the WGL extension? Then what's AGL?
> 
> > > > Have fun playing with it. If you find any bugs (and you will), please
> > > > let me know.
> 
> I'll do ;)
> 
> > > Nice job!
> 
> Agreed.
> 

Thanks!

-David



