CTM: was Re: [radeonhd] Necessary for 3D

Jerome Glisse glisse at freedesktop.org
Mon Oct 8 09:23:32 PDT 2007


Syren Baran wrote:
> On Monday, 2007-10-08 at 11:17 +0200, Nicolas Trangez wrote:
>> On Mon, 2007-10-08 at 11:02 +0200, Syren Baran wrote:
>>> Waiting for 3D docs will result in an infinite loop (at least if you
>>> are looking for registers to write something like "turn this object
>>> by x degrees").
>>> The relevant document is
>>> http://ati.amd.com/companyinfo/researcher/documents/ATI_CTM_Guide.pdf
>>> As far as I can tell by now, it contains all the information
>>> necessary to write an assembler.
>> Doesn't this fit fairly nicely into the LLVM-based Gallium effort?
> 
> Hmm, somehow I doubt it.
> Compared to this beast, the Sparc and x86 architectures actually
> appear very similar to each other.
> The R580 has (depending on the model) 48 processors, each executing
> the same command on different memory locations (though some may be
> sleeping, depending on flow control).
> The instruction set is very different from any architecture I know.
> A Sparc's RISC set is more or less a subset of the x86 CISC set, but
> this ... hmm, I still consider it weird, but maybe it just takes time
> getting used to.
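
To make that concrete: here is a minimal sketch in C of that execution
model. Nothing below is R580-specific and all names are made up; it
only illustrates many lanes running one instruction stream, with a
mask standing in for real branches.

/* Every lane executes the same instruction stream; lanes that fail a
 * branch condition are masked off ("sleeping") instead of taking a
 * separate code path. Illustrative only, not actual hardware. */
#include <stdio.h>

#define NUM_LANES 48

int main(void)
{
    float data[NUM_LANES];
    int active[NUM_LANES];  /* execution mask, one flag per lane */

    for (int i = 0; i < NUM_LANES; i++) {
        data[i] = (float)i;
        active[i] = 1;
    }

    /* "if (data < 24.0)" on such hardware: the branch becomes a mask
     * update; both sides of the branch are issued to every lane, and
     * inactive lanes simply discard the result. */
    for (int i = 0; i < NUM_LANES; i++)
        active[i] = (data[i] < 24.0f);

    for (int i = 0; i < NUM_LANES; i++)   /* then-side, masked */
        if (active[i])
            data[i] *= 2.0f;

    for (int i = 0; i < NUM_LANES; i++)   /* else-side, inverted mask */
        if (!active[i])
            data[i] += 1.0f;

    for (int i = 0; i < NUM_LANES; i++)
        printf("lane %2d: %5.1f\n", i, data[i]);
    return 0;
}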
> 
> The LLVM docs state that it can produce code for Sparc and x86 (and
> intermediate byte code) and mention all kinds of optimisation
> strategies.
> I doubt these strategies were developed with such a processor in mind.
> The VM approach doesn't really make any sense, unless we are planning
> to write a VM for both ATI and Nvidia GPUs. And even then, interpreting
> an intermediate code isn't what a driver would want to do, due to the
> performance penalties.
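
As a toy illustration of that penalty (the opcode set here is
invented): an interpreter pays a dispatch on every intermediate
instruction each time it runs, while a compiler in the LLVM style pays
the translation cost once and then executes native code.

#include <stdio.h>

enum op { OP_ADD, OP_MUL, OP_HALT };

struct insn { enum op op; int dst, src; };

/* interpreter: one switch dispatch per instruction, per execution */
static float run_interpreted(const struct insn *prog, float *regs)
{
    for (const struct insn *i = prog; ; i++) {
        switch (i->op) {
        case OP_ADD: regs[i->dst] += regs[i->src]; break;
        case OP_MUL: regs[i->dst] *= regs[i->src]; break;
        case OP_HALT: return regs[0];
        }
    }
}

int main(void)
{
    struct insn prog[] = {
        { OP_MUL, 0, 1 },   /* r0 *= r1 */
        { OP_ADD, 0, 2 },   /* r0 += r2 */
        { OP_HALT, 0, 0 },
    };
    float regs[3] = { 2.0f, 3.0f, 1.0f };

    /* the compiled equivalent of the same program is just: */
    float compiled = 2.0f * 3.0f + 1.0f;

    printf("interpreted: %f, compiled: %f\n",
           run_interpreted(prog, regs), compiled);
    return 0;
}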
>> Nicolas
> 
> Syren
> 

I am convinced AMD & NVidia are using something similar to LLVM to
optimize shaders before translating them to hardware instructions, so
it's definitely the way to go. As a side note, I don't think anybody
will accept a security flaw just to allow a few apps to play with the
GPU. I believe that in the future, with the help of Gallium, we could
provide a nice interface that lets applications take advantage of the
GPU without needing to know anything about the card.
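
A hypothetical sketch of what such an interface could look like; none
of these names are real Gallium API, and the backend here is a CPU
stub standing in for a driver-compiled GPU path:

#include <stdio.h>
#include <stddef.h>

struct gpu_context {
    const char *backend_name;            /* chosen by the driver stack */
    void (*map)(float *data, size_t n);  /* compiled kernel stand-in */
};

static void double_each(float *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

/* the application only ever sees this entry point; a real
 * implementation would probe the hardware and compile the kernel for
 * whatever card is present */
static struct gpu_context gpu_open(void)
{
    struct gpu_context ctx = { "cpu-stub", double_each };
    return ctx;
}

int main(void)
{
    struct gpu_context ctx = gpu_open();
    float data[4] = { 1.0f, 2.0f, 3.0f, 4.0f };

    ctx.map(data, 4);
    printf("ran on %s: %.1f %.1f %.1f %.1f\n",
           ctx.backend_name, data[0], data[1], data[2], data[3]);
    return 0;
}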

Cheers,
Jerome Glisse


