Proposed break in libGL / DRI driver ABI

Brian Paul brian.paul at tungstengraphics.com
Tue Apr 5 18:13:33 PDT 2005


Adam Jackson wrote:
> On Tuesday 05 April 2005 16:11, Brian Paul wrote:
> 
>>Roland Mainz wrote:
>>
>>>Another item would be to look into what's required to support visuals
>>>beyond 24bit RGB (like 30bit TrueColor visuals) ... someone on IRC
>>>(AFAIK ajax (if I don't mix up the nicks again :)) said that this may
>>>require an ABI change, too...
>>
>>I doubt an ABI change would be needed for that.
> 
> 
> Are you sure about this?

Yup, pretty sure.  An ABI change at the libGL / driver interface isn't 
needed.  I don't know of any place in that interface where 8-bit color 
is an issue.  Please let me know if I'm wrong.


> I thought we treated channels as bytes everywhere, unless GLchan was defined 
> to something bigger, and even then only for OSMesa.  Even if it's not an ABI 
> change, I suspect that growing GLchan beyond 8 bits while still preserving 
> performance is non-trivial.

This is separate from Ian's ABI discussion.  It's true that core Mesa 
has to be recompiled to switch among 8-, 16- or 32-bit color channels. 
That's something I'd like to change in the future.  It will be a lot 
of work, but it can be done.
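
To make the recompile point concrete, the channel type is selected at
build time along these lines (a paraphrase of the CHAN_BITS/GLchan
setup, not the literal source):

    /* Paraphrased sketch of the compile-time channel selection:
     * GLchan and its max value are fixed when core Mesa is built,
     * which is why changing channel size means recompiling. */
    #if CHAN_BITS == 8
       typedef GLubyte GLchan;
    #  define CHAN_MAX 255
    #elif CHAN_BITS == 16
       typedef GLushort GLchan;
    #  define CHAN_MAX 65535
    #elif CHAN_BITS == 32
       typedef GLfloat GLchan;      /* 32-bit channels are floats */
    #  define CHAN_MAX 1.0F
    #else
    #  error "illegal number of color channel bits"
    #endif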

Currently, there aren't any hardware drivers that support > 8-bit 
color channels.  If we did want to support deeper channels in a 
hardware driver we'd have a lot of work to do in any case.  One 
approach would be to compile core Mesa for 16-bit channels, then 
shift/drop bits in the driver whenever we write to a color buffer.  Of 
course, there's more to it than that, but it would be feasible.
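
As a sketch of what I mean, here's a hypothetical write-back path
(the function name and signature are made up for illustration, not
an actual driver hook): core Mesa hands the driver 16-bit channels
and the driver keeps only the high byte of each component.

    #include <GL/gl.h>

    /* Hypothetical span write-back: core Mesa computed 16-bit
     * channels, the hardware stores 8 bits per channel, so drop
     * the low byte of each component on the way out. */
    static void
    put_rgba_span8(GLubyte *dst, const GLushort src[][4], GLuint n)
    {
       GLuint i;
       for (i = 0; i < n; i++) {
          dst[i * 4 + 0] = src[i][0] >> 8;  /* R: 16 -> 8 bits */
          dst[i * 4 + 1] = src[i][1] >> 8;  /* G */
          dst[i * 4 + 2] = src[i][2] >> 8;  /* B */
          dst[i * 4 + 3] = src[i][3] >> 8;  /* A */
       }
    }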

As part of the GL_ARB_framebuffer_object work I'm doing, simultaneous 
support for various channel sizes will become more doable.


>>>When I look at xc/extras/Mesa/src/mesa/main/config.h I see more items on
>>>my wishlist: Would it be possible to increase |MAX_WIDTH| and
>>>|MAX_HEIGHT| (and the matching texture limits of the software
>>>rasterizer) to 8192 to support larger displays (DMX, Xinerama and Xprint
>>>come to mind)?
>>
>>If you increase MAX_WIDTH/HEIGHT too far, you'll start to see
>>interpolation errors in triangle rasterization (the software
>>routines).  The full explanation is long, but basically there needs to
>>be enough fractional bits in the GLfixed datatype to accommodate
>>interpolation across the full viewport width/height.
>>
>>In fact, I'm not sure that we haven't already gone too far by setting
>>MAX_WIDTH/HEIGHT to 4096 while the GLfixed type only has 11 fractional
>>bits.  I haven't heard any reports of bad triangles so far though.
>>But there probably aren't too many people generating 4Kx4K images.
> 
> 
> Yet.  Big images are becoming a reality.  DMX+glxproxy brings this real close 
> to home.

I fully agree that there's a need to render larger images.
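
To put rough numbers on the fractional-bits point above: each
interpolation step can carry about half an LSB of rounding error
(2^-12 with 11 fractional bits), so across a 4096-pixel span the
accumulated drift can approach a full unit, and 8192 would double
it.  A quick back-of-the-envelope program (assuming the 11-bit
figure above):

    #include <stdio.h>

    #define FIXED_SHIFT 11  /* fractional bits in GLfixed, per above */

    int main(void)
    {
       int width;
       for (width = 2048; width <= 8192; width *= 2) {
          /* ~0.5 LSB rounding error per pixel step, accumulated */
          double drift = width * 0.5 / (1 << FIXED_SHIFT);
          printf("width %4d: up to %.2f units of drift\n", width, drift);
       }
       return 0;
    }

    /* prints:
     * width 2048: up to 0.50 units of drift
     * width 4096: up to 1.00 units of drift
     * width 8192: up to 2.00 units of drift */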


>>Before increasing MAX_WIDTH/HEIGHT, someone should do an analysis of
>>the interpolation issues to see what side-effects might pop up.
> 
> 
> Definitely.
> 
> 
>>Finally, Mesa has a number of scratch arrays that get dimensioned to
>>[MAX_WIDTH].  Some of those arrays/structs are rather large already.
> 
> 
> I looked into allocating these dynamically, but there were one or two sticky 
> points (mostly related to making scope act the same) so I dropped it.  It 
> could be done though.

A lot of these allocations are on the stack.  Changing them to heap 
allocations might cause some loss of performance too.
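
The pattern at issue looks roughly like this (the names are
illustrative, not actual Mesa identifiers); a 16KB stack array is
essentially free, while the heap variant pays malloc/free on every
span unless the buffer is hoisted into the context:

    #include <GL/gl.h>

    #define MAX_WIDTH 4096   /* per config.h, as discussed above */

    /* Illustrative only: a MAX_WIDTH-sized local costs one stack
     * adjustment and vanishes on return. */
    static void
    draw_span(GLuint n)
    {
       GLfloat coverage[MAX_WIDTH];  /* stack: effectively free */

       /* The heap version adds allocator traffic per call in a
        * hot inner loop, plus lifetime bookkeeping:
        *    GLfloat *coverage = malloc(MAX_WIDTH * sizeof *coverage);
        *    ...
        *    free(coverage);
        */
       (void) n;
       (void) coverage;
    }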

-Brian

