Backing out DRI2 from server 1.5
Dave Airlie
airlied at gmail.com
Sun Aug 10 14:07:18 PDT 2008
On Mon, Aug 11, 2008 at 5:46 AM, Kristian Høgsberg <krh at bitplanet.net> wrote:
> On Sat, Aug 9, 2008 at 7:28 PM, Dave Airlie <airlied at gmail.com> wrote:
>> On Wed, Aug 6, 2008 at 2:23 AM, Kristian Høgsberg <krh at bitplanet.net> wrote:
>>> On Tue, Aug 5, 2008 at 10:42 AM, Michel Dänzer
>>> <michel at tungstengraphics.com> wrote:
>>>>
>>>> Hi Kristian,
>>>>
>>>> On Tue, 2008-08-05 at 21:01 +1000, Dave Airlie wrote:
>>>>> On Tue, Aug 5, 2008 at 5:39 PM, Alan Hourihane <alanh at fairlite.co.uk> wrote:
>>>>> > On Mon, 2008-08-04 at 17:20 -0400, Kristian Høgsberg wrote:
>>>>> >>
>>>>> >> Since it looks like we'll be going with GEM for the memory manager,
>>>>> >> I'll have to revisit some of the DRI2 design decisions. As a first
>>>>> >> step, I want to back out the DRI2 stuff from the 1.5 X server
>>>>> >> entirely, since it uses TTM API for creating and mapping the DRI2
>>>>> >> sarea. We're in feature freeze with 1.5 and I won't be able to update
>>>>> >> it in time anyway, so the best option is to just back it out instead
>>>>> >> of releasing a server with modules expecting an API that was never
>>>>> >> released.
>>>>> >
>>>>> > GEM is currently Intel specific.
>>>>> >
>>>>> > It seems as though the memory managers are going to be driver specific
>>>>> > at this time, so we can't have the Xserver relying on a specific one.
>>>>> >
>>>>> > Maybe we should have some callbacks to the driver for DRI2 specific
>>>>> > handling ?
>>>>> >
>>>>>
>>>>> I think that is the current plan: the shared area for DRI2 will be a
>>>>> shm object independent of the memory manager.
>>>>>
>>>>> Everything else will be 32-bit handles.
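
(For illustration, a memory manager independent shared area could be as
simple as a SysV shm segment; this is purely a sketch, with a made-up
size constant:)

    #include <stddef.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define DRI2_SAREA_SIZE 4096        /* hypothetical size */

    /* Create the DRI2 shared area as a plain shm segment; the shm id
     * is an ordinary integer the server can hand to clients, with no
     * TTM or GEM specific mapping involved. */
    static void *
    dri2_create_sarea(int *id_out)
    {
        int id = shmget(IPC_PRIVATE, DRI2_SAREA_SIZE, IPC_CREAT | 0600);

        if (id == -1)
            return NULL;
        *id_out = id;                   /* sent to clients as a handle */
        return shmat(id, NULL, 0);      /* map it into the server */
    }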
>>>>
>>>> Has any of this been done yet anywhere? I need memory manager agnostic
>>>> DRI2 for a project I'm working on, so I thought we should at least
>>>> exchange ideas for the direction to take.
>>>
>>> I've just started to look into this again, and while the main change I
>>>> want to do is to make it memory manager agnostic, there are a couple of
>>> other things I'd like to change at this point:
>>>
>>> 1) With DRI2, I kept the buffer swap in the client since I didn't
>>> want to incur a server request to do this. This decision meant that
>>> we had to keep much of the complexity for synchronizing clip rects
>>> between server and DRI clients in place. What I realized in the mean
>>> time is that we always send a few requests to post damage after each
>>> buffer swap, so introducing a DRI2 request to do the swap and post
>>> damage shouldn't affect performance but will make everything much
>>> simpler. This will also eliminate the need for the DRI lock, which
>>> for DRI2 was only used to synchronize access to cliprects.
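
(For illustration, the wire request for such a server-side swap could be
as small as this; the request name and layout here are hypothetical, not
settled protocol:)

    #include <X11/Xmd.h>

    /* Hypothetical DRI2SwapBuffers request: the server performs the
     * copy or flip and posts the damage itself, so the client needs
     * neither cliprects nor the DRI lock. */
    typedef struct {
        CARD8  reqType;      /* DRI2 extension major opcode */
        CARD8  dri2ReqType;  /* minor opcode, e.g. X_DRI2SwapBuffers */
        CARD16 length;       /* request length in 4-byte units */
        CARD32 drawable;     /* the drawable to swap */
    } xDRI2SwapBuffersReq;

One such request replaces the client-side blit plus the separate
damage-posting requests.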
>>>
>>> 2) Now that we don't need to communicate cliprects to the DRI
>>> clients, the somewhat complex DRI2 sarea and event buffer become a
>>> little harder to justify, as we only use them to detect changes in the
>>> attached buffers. George's swrast DRI driver uses a simpler approach
>>> there: he hooks the dd_function_table::Viewport function and asks the
>>> loader for the drawable size. I'd like to do something similar for
>>> DRI2, which will completely eliminate the need for the sarea. The
>>> DRI2 DRI driver will ask the loader (libGL, which will forward the
>>> query over protocol or AIGLX, which will ask the DRI2 module directly)
>>> for the dimensions and memory manager buffers backing the current
>>> drawable. This costs a roundtrip, but this was part of the old design
>>> too and inherent in GLX, in that multiple DRI clients need to agree on
>>> the memory manager buffers backing the aux renderbuffers. Thus you
>>> need to go to the X server one way or the other.
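
(A rough sketch of that hook in a DRI2 driver; the loader callback and
the driver-private context type are made up, since the real interface is
still to be defined:)

    #include "main/glheader.h"
    #include "main/mtypes.h"   /* GLcontext */
    #include "main/dd.h"       /* dd_function_table */

    /* Hypothetical driver-private context wrapper. */
    struct dri2_context {
        GLcontext base;
        void *drawable;                       /* opaque drawable handle */
        void (*get_buffers)(void *drawable);  /* loader callback, made up */
    };

    /* Re-query the loader whenever Mesa's Viewport hook fires: the
     * loader (libGL over protocol, or the DRI2 module directly for
     * AIGLX) reports the drawable's size and backing buffers. */
    static void
    dri2Viewport(GLcontext *ctx, GLint x, GLint y, GLsizei w, GLsizei h)
    {
        struct dri2_context *dctx = (struct dri2_context *) ctx;

        dctx->get_buffers(dctx->drawable);
        (void) x; (void) y; (void) w; (void) h;
    }

    void
    dri2InitDriverFunctions(struct dd_function_table *functions)
    {
        functions->Viewport = dri2Viewport;
    }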
>>>
>>> 3) Let the DDX driver allocate the auxiliary buffers. I went back and
>>> forth on this a bit and in some sense it's an arbitrary decision: both
>>> the DDX and the DRI drivers know enough about the hardware to allocate
>>> buffers with the right stride/tile/etc properties. Doing it in the
>>> DDX means that the DRI driver needs to tell the DDX driver what buffers
>>> to allocate (using the DRI2CreateWindow request), but on the other hand it
>>> avoids tricky allocation races with multiple DRI clients rendering to
>>> the same drawable. And without the sarea, doing it in the client
>>> would incur an extra round trip: you would first have to ask the
>>> server about the drawable size, then allocate and tell the server
>>> about the buffers you allocated. This lets the DDX driver implement
>>> special cases such as allocating a full screen back buffer that has
>>> the right properties to be used as a scan out buffer for page flip
>>> cases, which in turn becomes a lot simpler when the buffer flip
>>> happens in the X server. And for redirected windows, the back buffer
>>> can be another pixmap so that buffer flips can be implemented as
>>> setting a different window pixmap.
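
(A sketch of the DDX side; the hook name, buffer struct, attachment code
and pixmap helpers are all hypothetical:)

    #include "scrnintstr.h"
    #include "pixmapstr.h"

    #define DRI2_ATTACH_BACK 1  /* hypothetical attachment code */

    struct dri2_buffer {
        PixmapPtr pixmap;       /* hypothetical per-attachment record */
    };

    /* Made-up helpers standing in for driver allocation paths. */
    extern Bool drawable_is_fullscreen(DrawablePtr pDraw);
    extern PixmapPtr alloc_scanout_pixmap(ScreenPtr s, int w, int h);
    extern PixmapPtr alloc_offscreen_pixmap(ScreenPtr s, int w, int h, int d);

    /* The DRI2 module asks the driver to allocate each attachment, so
     * the driver picks stride/tiling and can special-case drawables. */
    static void
    driverAllocateBuffer(DrawablePtr pDraw, unsigned int attachment,
                         struct dri2_buffer *buffer)
    {
        ScreenPtr pScreen = pDraw->pScreen;

        if (attachment == DRI2_ATTACH_BACK && drawable_is_fullscreen(pDraw)) {
            /* Full screen: a scanout-capable back buffer lets the
             * server page flip instead of blitting. */
            buffer->pixmap = alloc_scanout_pixmap(pScreen,
                                                  pDraw->width,
                                                  pDraw->height);
        } else {
            /* Redirected window: the back buffer is just another
             * pixmap, so a "flip" becomes SetWindowPixmap(). */
            buffer->pixmap = alloc_offscreen_pixmap(pScreen,
                                                    pDraw->width,
                                                    pDraw->height,
                                                    pDraw->depth);
        }
    }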
>>>
>>> This all sounds like a lot of work, but it's mostly simplifications
>>> and I expect to make some good progress towards it this week. In the
>>> mean time I'll drop the dri2 bits from the xserver 1.5 and mesa 7.1
>>> branches.
>>
>> How are we communicating information like tiling properties of buffers
>> between DDX and DRI clients now?
>>
>> These are driver specific properties, and the sarea was usually used
>> to carry them.
>>
>> For example, radeon can change the tiling property of the front buffer
>> if the mode changes from interlaced to non-interlaced.
>
> My plan is to include a device specific 32-bit bitfield per buffer in
> the reply to DRI2GetBuffers, which is what the client calls to ask the
> server for buffer info. These bits can indicate properties such as
> tiling. In the DRI2Connect call, I'm sending back the DDX version, so
> the DRI driver will know which bits are valid.
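
(For illustration, the per-buffer info in such a DRI2GetBuffers reply
might look like this; the layout is hypothetical, not final protocol:)

    #include <X11/Xmd.h>

    /* Hypothetical per-buffer block in the DRI2GetBuffers reply.
     * 'flags' carries the device specific bits; the DDX version
     * returned by DRI2Connect tells the DRI driver which bits are
     * meaningful. */
    typedef struct {
        CARD32 attachment;  /* front, back, depth, ... */
        CARD32 name;        /* memory manager handle, e.g. a GEM name */
        CARD32 pitch;       /* stride in bytes */
        CARD32 cpp;         /* bytes per pixel */
        CARD32 flags;       /* device specific bits such as tiling */
    } xDRI2Buffer;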
>
I actually meant to cc the lists on this. I dislike device specific
bitfields with a fixed bit budget as an API in general; can we add
something with a pointer or a length + array instead?
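
Something like a per-buffer attribute list, roughly (names made up):

    #include <X11/Xmd.h>

    /* Hypothetical alternative: a variable-length list of (key, value)
     * attributes per buffer instead of one fixed 32-bit flags word, so
     * drivers aren't limited to 32 property bits. */
    typedef struct {
        CARD32 key;          /* driver defined, e.g. tiling mode */
        CARD32 value;
    } xDRI2BufferAttrib;

    typedef struct {
        CARD32 attachment;
        CARD32 name;         /* memory manager handle */
        CARD32 numAttribs;   /* xDRI2BufferAttribs following on the wire */
    } xDRI2BufferHdr;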
Dave.
> The front buffer isn't touched directly by DRI clients with DRI2.
> Even with single buffered rendering, or when a client explicitly sets
> the front buffer as the draw buffer, rendering goes to an off screen
> buffer, which we copy to the front buffer in the X server.
>
> cheers,
> Kristian
>