[PATCH dri3proto v2] Add modifier/multi-plane requests, bump to v1.1
Daniel Stone
daniel at fooishbar.org
Fri Jul 28 07:09:35 UTC 2017
Hi,
Sorry, I've been sick the past couple of days - exactly when this
thread exploded ...
On 25 July 2017 at 22:20, Eric Anholt <eric at anholt.net> wrote:
> Daniel Stone <daniels at collabora.com> writes:
>> DRI3 version 1.1 adds support for explicit format modifiers, including
>> multi-planar buffers.
>
> I still want proper 64-bit values, and I don't think the little XSync
> mess will be much of a blocker.
Cool, I'll happily review the CARD64 bits and flick the switch on the
protocol when those land.
>> +┌───
>> + DRI3GetSupportedModifiers
>> + window: WINDOW
>> + format: CARD32
>> + ▶
>> + num_modifiers: CARD32
>> + modifiers: ListOfCARD32
>> +└───
>> + Errors: Window, Match
>> +
>> + For the Screen associated with 'window', return a list of
>> + DRM format modifiers, as defined in drm_fourcc.h, that are
>> + supported for DRI3 pixmap/buffer interchange.
>> + Each modifier is returned as a CARD32 containing the most
>> + significant 32 bits, followed by a CARD32 containing the
>> + least significant 32 bits. The hi/lo pattern repeats
>> + 'num_modifiers' times, so '2 * num_modifiers' CARD32
>> + elements are returned in total.
>
> Should any meaning be assumed from the ordering of modifiers?
Nope, arbitrary order. In practice, the client does its own sort
across the list anyway, selecting for local optimality.
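Since the 64-bit modifiers arrive as split CARD32s for now, reassembly on the client side is trivial; a sketch (the function name is mine for illustration, not anything in the patch):

```c
#include <stddef.h>
#include <stdint.h>

/* Reassemble the hi/lo CARD32 pairs returned by
 * DRI3GetSupportedModifiers into 64-bit DRM format modifiers.
 * 'words' holds 2 * num_modifiers CARD32s, most significant
 * half of each modifier first. */
static void
unpack_modifiers(const uint32_t *words, size_t num_modifiers,
                 uint64_t *out)
{
    for (size_t i = 0; i < num_modifiers; i++) {
        uint64_t hi = words[2 * i];
        uint64_t lo = words[2 * i + 1];
        out[i] = (hi << 32) | lo;
    }
}
```

Once the CARD64 support lands, this pairing goes away and the reply can carry the modifiers directly.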
>> + Precisely how any additional information about the buffer is
>> + shared is outside the scope of this extension.
>
> Should we be specifying how the depth of the Pixmap is determined from
> the fourcc? Should we be specifying if X11 rendering works on various
> fourccs, and between pixmaps of different fourccs? It's not clear to me
> what glamor would need to be able to do with these pixmaps (can I
> CopyArea between XRGB8888 and BGRX8888? What does that even mean?)
I'll come back to that in the subthread.
>> +┌───
>> + DRI3FenceFromDMAFenceFD
>> + drawable: DRAWABLE
>> + fence: FENCE
>> + fd: FD
>> +└───
>> + Errors: IDChoice, Drawable
>> +
>> + Creates a Sync extension Fence that provides the regular Sync
>> + extension semantics. The Fence will begin untriggered, and
>> + become triggered when the underlying dma-fence FD signals.
>> + The resulting Sync Fence is a one-shot, and may not be
>> + manually triggered, reset, or reused until it is destroyed.
>> + Details about the mechanism used with this file descriptor are
>> + outside the scope of the DRI3 extension.
>
> I was surprised to find this lumped in with a commit about
> multi-planar/buffer support -- is it actually related, and is it used?
Related, no. Used, not right now, but there'll be patches out to
implement explicit fencing for Vulkan clients next week. It's only
lumped in to save doing two version bumps at the exact same time.
> Must an implementation supporting 1.1 support this? dma-fences seem
> like a pretty recent kernel feature.
You're right, a capability query would be better here.
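For the avoidance of doubt about the one-shot semantics above, here's a toy state model (nothing like real server code; every name here is made up): the fence starts untriggered, flips exactly once when the wrapped dma-fence FD signals, and refuses the manual Sync operations.

```c
#include <stdbool.h>

/* Toy model of a Sync Fence created via DRI3FenceFromDMAFenceFD. */
struct dma_sync_fence {
    bool triggered;
};

/* Called when the underlying dma-fence FD signals: the only way
 * this fence ever becomes triggered, and it never goes back. */
static void
dma_fence_signaled(struct dma_sync_fence *f)
{
    f->triggered = true;
}

/* Manual SyncTriggerFence/SyncResetFence on a one-shot fence:
 * always refused; the server would return an error instead. */
static bool
fence_manual_op(struct dma_sync_fence *f)
{
    (void)f;
    return false;
}
```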
>> +┌───
>> + DRI3DMAFenceFDFromFence
>> + drawable: DRAWABLE
>> + fence: FENCE
>> + ▶
>> + fd: FD
>> +└───
>> + Errors: IDChoice, Drawable, Match
>> +
>> + Given a Sync extension Fence originally created by the
>> + DRI3FenceFromDMAFenceFD request, return the underlying
>> + dma-fence FD to the client. Details about the mechanism used
>> + with this file descriptor are outside the scope of the DRI3
>> + extension. 'drawable' must be associated with a direct
>> + rendering device that 'fence' can work with, otherwise a Match
>> + error results. NB: it is quite likely this will be forever
>> + unused, and may be removed in later revisions.
>> +
>
> Let's not introduce protocol if we can't come up with a use for it.
Happily, it is actually used, after a bit of back-and-forth on the
implementation. lfrb is at SIGGRAPH this week, but he has working
branches (still in need of cleanup) here:
https://git.collabora.com/cgit/user/lfrb/xserver.git/log/?h=x11-fences
https://git.collabora.com/cgit/user/lfrb/mesa.git/log/?h=wip/2017-07/vulkan-fences
The idea is that when a VkSemaphore is passed into vkQueuePresentKHR,
the implementation extracts a dma-fence from the semaphore, creates an
X11 fence directly wrapping that, and passes that in as the wait_fence
to PresentPixmap. The server then inserts a hardware-side wait (either
EGL_ANDROID_native_fence_fd + eglWaitSyncKHR for Glamor, or
IN_FENCE_FD when directly flipping with KMS). On the converse side,
out-fences are implemented by creating an 'empty' DMAFence object,
passing that as the idle_fence to PresentPixmap, calling
DRI3DMAFenceFDFromFence when the PresentIdleNotify event comes through,
then wrapping that into a VkSemaphore/VkFence returned via
vkAcquireNextImageKHR.
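Sketched as pseudocode (the helper names are descriptive only, not the actual API in those branches), the two directions look roughly like:

```
/* in-fence: client side, inside vkQueuePresentKHR */
fd    = dma_fence_fd_from_semaphore(semaphore); /* extract dma-fence FD */
fence = DRI3FenceFromDMAFenceFD(drawable, fd);  /* wrap as a Sync Fence */
PresentPixmap(..., wait_fence = fence);

/* server side, before presenting:
 *   Glamor:      import via EGL_ANDROID_native_fence_fd, eglWaitSyncKHR
 *   direct flip: pass the FD as IN_FENCE_FD on the KMS commit */

/* out-fence: client side */
fence = create_empty_dma_fence();               /* the 'empty' DMAFence */
PresentPixmap(..., idle_fence = fence);
/* on PresentIdleNotify: */
fd = DRI3DMAFenceFDFromFence(drawable, fence);
/* wrap fd into the VkSemaphore/VkFence handed back by
 * vkAcquireNextImageKHR */
```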
We'll do some cleanup across the branch - and this protocol text -
before sending it out though.
Cheers,
Daniel