DRI2 Protocol Spec Draft

Kristian Høgsberg krh at bitplanet.net
Wed Sep 10 11:09:49 PDT 2008

On Tue, Sep 9, 2008 at 11:46 AM, Keith Packard <keithp at keithp.com> wrote:
> On Tue, 2008-09-09 at 13:27 +0200, Michel Dänzer wrote:
>> I think it'd be good if the authentication stuff could be made/kept
>> optional or at least not DRM specific. (I'm not sure GEM or the DRM in
>> general is within scope of this spec at all)
> I have to admit that I'm not very excited by the existing authentication
> protocol.
> What we want is a way to let applications identify themselves with the
> kernel and 'prove' that they have permission to access kernel objects.

That's what the current XF86DRI scheme is and it's what I've copied.
I've updated the spec to drop the group concept, and instead just
require that the server authenticates the client into the group the
server itself is running as.
> It seems like having the X server return a 'cookie' that the client
> could use with the kernel module might make things simpler:
> ┌───
>    DRI2Connect
>        window: WINDOW
>        type: STRING
>        driver: STRING
>        device: STRING  -- device file name
>        auth-data: LISTofCARD8
> └───
>        'auth-data' is an arbitrary array of bytes which the client
>        passes to the direct rendering system to validate the client's
>        access of direct rendering objects created by the X server.
> It seems like this offers precisely the right guarantee -- the client
> proves to the kernel that it is connected to the X server and thus
> should be granted permission to access the X server objects. Under some
> tighter access control mechanisms, the 'auth-data' could even be
> generated per-client so that the client would only have access to a
> sub-set of X server objects.

This is basically what we have, except the other way around.  You're
proposing that the server generates a random cookie, informs the DRM
that anybody presenting that cookie is authenticated, and then sends
it to the client.  The client then receives the cookie and introduces
itself to the DRM.  What XF86DRI does and what I'm doing for DRI2 is
to have the client get a cookie from the DRM that represents the
client and then pass it to the server, which then asks the DRM to
authenticate it.  Everybody can talk to the DRM and create a token,
but you can only authenticate by passing that token to the server
over the DRI2 protocol.

I'd say the two schemes are pretty much equivalent in complexity and
in what options we have for narrowing down access per client as you
suggest.  Pros and cons of the two schemes as I see it is that your
scheme eliminates the DRI2Authenticate request from the protocol, but
requires a random cookie to be generated, which is a little icky...
how many bits etc?  The old scheme is well established and the extra
request isn't really a concern - it's async.

As for Michel's concern, sure, authentication can be optional, if your
video memory manager (trying hard not to say DRM or GEM here) doesn't
enforce access restriction.  For the Xorg DRI2 and Linux DRM
implementations, I do intend to implement access control so we can run
multiple X servers without one being able to read out the contents of
the others' framebuffers.

>> For DRI2CopyRegion, you're leaving it to the DDX driver to pick the CRTC
>> to synchronize to? I'm not sure that'll work too well with overlapping
>> viewports, where the user may want to choose which CRTC to synchronize
>> to for each application.
> Yeah, I don't see a good way to avoid this, and the client can always
> pass in 'Automatic' (0) and let the server pick the 'right' one.

Do we need this?  When will the client have a better idea of which
pipe a window is on than the X server?

>>  This request also still seems to be missing
>> return values for the sequence number when the copy is expected to take
>> place and tokens for synchronization of direct rendering to the
>> source/destination buffer.
> Eliminating the reply avoids a round trip, so I'm in favor of not
> providing any if it's not strictly necessary.
> I don't know if the GL api requires us to provide the expected sequence
> number back to the application.
> For synchronization, we should expect the kernel module to perform this
> automatically -- once the X server has processed this request, the
> kernel can pend further rendering to the source buffer until the copy
> has finished. That would, of course, require that the application know
> that the kernel has received the copy command from the X server -- so
> the client would need to get something from X server indicating that it
> had finished processing the Copy request. The easiest thing to use would
> be a reply, but we'd structure the library so that the client wouldn't
> pend on the reply and could block just before touching the back buffer
> again.

Yes, there needs to be a round trip after calling DRI2CopyRegion, to
make sure the server submits the copy commands to the DRM before the
client can continue rendering. That round trip is DRI2GetBuffers.  So
glXSwapBuffers() will be implemented as

  DRI2CopyRegion(drawable, region, BACK, FRONT...)
  buffers = DRI2GetBuffers(drawable);

where DRI2GetBuffers() gives us the round trip we need and returns
the new buffers, in case the server did a page flip.

> Note that there isn't any synchronization on the real front buffer; that
> isn't a legal target for direct rendering.
>>  Oh, and I think it should take relative
>> sequence numbers as well as absolute ones.
> Yeah, GL does kinda require this.

So for DRI2CopyRegion flags, something like this:

    #define DRI2_VSYNC_DONT_CARE 0x0
    #define DRI2_VSYNC_ABSOLUTE 0x1
    #define DRI2_VSYNC_RELATIVE 0x2
  ( #define DRI2_VSYNC_RESERVED 0x3 )


    #define DRI2_PRESERVE_SOURCE 0x4
