CVS lock ?

Vladimir Dergachev volodya at mindspring.com
Fri Dec 17 08:21:08 PST 2004



On Fri, 17 Dec 2004, Michel Dänzer wrote:

> On Fri, 2004-12-17 at 00:32 -0500, Vladimir Dergachev wrote:
>>
>> On Thu, 16 Dec 2004, Michel Dänzer wrote:
>>
>>>> +     * However, I don't feel confident enough with the control flow
>>>> +     * inside the X server to implement either fix. -- nh
>>>
>>> RADEONEnterServer() gets called whenever the X server grabs the hardware
>>> lock after a 3D client held it.
>>
>> Thank you, I was not aware of this :)
>
> That's what I mean by 'no prior discussion'.
>
>> Btw, r300_demo is not a GLX app, so it can run simultaneously with X,
>> especially on SMP machines - which is why I decided not to explore how to
>> do this in X.
>
> I agree that such a (root only I presume?) hack is hardly important to
> the X server code, but my conclusion would be to reduce the clutter of
> hacks in the X code instead of increasing it even more...
>
>> Once the R300 Mesa 3D driver is closer to a usable state this should no
>> longer be necessary - instead the DRM driver would do cache flushes as
>> appropriate.
>
> Actually, I think the X server has to do it with the current
> driverSwapMethod the radeon driver uses.
>

The thing is, I was replying late at night and did not explain myself 
thoroughly enough.

Calling DO_CP_IDLE is a hack no matter where you put it - the right way to 
do things is to do a proper cache flush (plus whatever magic is required)
each time 3D activity is followed by 2D activity.
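To make the idea concrete, here is a minimal sketch (not actual radeon driver code - the function and state names are purely illustrative) of tracking the last engine activity and flushing only on a 3D-to-2D transition:

```c
/* Hypothetical sketch: flush caches only when 3D work is followed by 2D.
 * engine_flush_caches() is a stand-in for the real flush + "magic". */

typedef enum { ENGINE_IDLE, ENGINE_2D, ENGINE_3D } engine_state;

static engine_state last_state = ENGINE_IDLE;
static int flush_count = 0;            /* counts flushes, for illustration */

static void engine_flush_caches(void)
{
    flush_count++;                     /* real code would touch hardware */
}

/* Called before each 2D operation is submitted. */
static void begin_2d_op(void)
{
    if (last_state == ENGINE_3D)       /* 3D followed by 2D: flush needed */
        engine_flush_caches();
    last_state = ENGINE_2D;
}

/* Called before each 3D operation is submitted. */
static void begin_3d_op(void)
{
    last_state = ENGINE_3D;
}
```

The point of the sketch is that a flush happens only on the 3D-to-2D boundary, rather than idling the whole CP on every server entry.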

The reason I suggested the DRM as a likely candidate is that it is supposed 
to validate the packets it gets from userspace - and would thus be in a
perfect position to know exactly what happens to the card.

However, we are not there yet, which is why the DO_CP_IDLE code is 
appropriate.

                   best

                     Vladimir Dergachev

>
> -- 
> Earthling Michel Dänzer      |     Debian (powerpc), X and DRI developer
> Libre software enthusiast    |   http://svcs.affero.net/rm.php?r=daenzer
>
