[PATCH] RFCv2: video support for dri2 (Rob Clark)
Christian König
deathsimple at vodafone.de
Fri Sep 2 03:04:47 PDT 2011
Hi Rob,
> + flipping between multiple back buffers, perhaps not in order (to
> handle video formats with B-frames)
Oh, yes please. The closed source drivers seems to do this also all the
time, and I never really understood why DRI is limiting the buffers to
the OGL attachment points.
...
> Current solutions use the GPU to do a scaled/colorconvert into a DRI2
> buffer from the client context. The goal of this protocol change is
> to push the decision to use overlay or GPU blit to the xorg driver.
You left one corner case out, HDMI allows for the framebuffer to be in
YUV 4:4:4 format. So it is possible to send YUV data to the display
(usually a TV) without color conversion at all, but I think with the
current state of X we are years away from that.
...
> In many cases, an overlay will avoid several passes through memory
> (blit/scale/colorconvert to DRI back-buffer on client side, blit to
> front and fake-front, and then whatever compositing is done by the
> window manager). On the other hand, overlays can often be handled
> directly by the scanout engine in the display hardware, with the GPU
> switched off.
Actually AMD has thrown out the hardware support for overlay with the
R7xx (or was it evergreen?) series, because they got support for turning
shader pipes off separately and figured out that it use less power to
turn off all shaders except one and then use this one for color
conversion and scaling, compared to having a distinct hardware block
doing the job. But there are tendencies to get a distinct color
conversion block back again.
...
> Note: video is not exactly the same as 3d, there are a number of other
> things to consider (scaling, colorconvert, multi-planar formats). But
> on the other hand the principle is similar (direct rendering from hw
> video codecs). And a lot infrastructure of connection, authentication,
> is same. So there are two options, either extend DRI2 or add a new
> protocol which duplicates some parts. I'd like to consider extending
> DRI2 first, but if people think the requirements for video are too
> much different from 3d, then I could split this into a new protocol.
If you ask me extending things seems the better way to do this.
..
> @@ -184,6 +185,11 @@ DRI2ATTACHMENT { DRI2BufferFrontLeft
> These values describe various attachment points for DRI2
> buffers.
>
> + In the case of video driver (DRI2DriverXV) the attachment,
> + other than DRI2BufferFrontLeft, just indicates buffer
> + number and has no other special significance. There is no
> + automatic maintenance of DRI2BufferFakeFrontLeft.
I think that will created compatibility problems with existing
implementations, because the DDX side doesn't know if it's talking to a
video or 3D client side.
...
> + The "XV_OSD" attribute specifies the XID of a pixmap containing
> + ARGB data to be non-destructively overlayed over the video. This
> + could be used to implement subtiles, on-screen-menus, etc.
Why an XID? I'm not 100% sure about it, but using a DRI buffer name
directly here seems to be the better alternative.
> > Are you targeting/limiting this to a particular API (or the customary
> > limitations of overlay HW)? I ask because VDPAU allows clients to pass
> > in an arbitrary colour conversion matrix rather than color
> > standard/hue/sat/bri/con, so it wouldn't be possible to use this in
> > that context.
>
> Ideally it would something that could be used either from
> device-dependent VDPAU or VAAPI driver back-end, or something that
> could be used in a generic way, for example GStreamer sink element
> that could be used with software codecs.
AFAIK DRI mostly isn't a device driver dependent protocol, and the
client side doesn't necessary know to which hardware it is talking, just
look at how gallium 3D is working, talking with X over the DRI protocol
is part of the driver independent state tracker, and NOT part of the
driver itself.
So having this driver independent is just a must have, and not optional.
> Well, this is the goal anyways. There is one slight other
> complication for use in a generic way.. it would need to be a bit
> better defined what the buffer 'name' is, so that the client side
> would know how to interpret it, mmap it if needed. But I think there
> is a solution brewing:
> http://lists.linaro.org/pipermail/linaro-mm-sig/2011-August/000509.html
That's indeed true, but currently it is the only parameter that must be
interpreted in a driver dependent fashion. And I really think it should
stay that way.
> As far as color conversion matrix... well, the attribute system can
> have arbitrary device-dependent attributes. In the VDPAU case, I
> suppose the implementation on the client side knows which xorg driver
> it is talking to, and could introduce it's own attributes. Perhaps a
> bit awkward for communicating a matrix, but you could in theory have
> 4*3 different attributes (ie. XV_M00, XV_M01, ... XV_M23) for each
> entry in the matrix.
Yes, but you should define that clearly in the protocol, we also need
something to make the client side know what is supported: individual
values or matrix or both?
> > Also in general, their compositing API is a lot more
> > flexible and allows for a background + multiple layers, rather than
> > just a single layer. I suppose you could pre-flatten the layers into a
> > single one, but the background would be problematic.
>
> Yeah, pre-flatten into a single layer, I think. I mean, we *could*
> push that to xorg driver side too, but I was trying to not make things
> overly complicated in the protocol.
>
> I'm not sure I caught the issue about background.. or are you
> thinking about video w/ AYUV? Is there any hw out there that supports
> overlays w/ YUV that has an alpha channel? If this is enough of a
> weird edge case, maybe it is ok to fall back in these cases to the old
> way of doing the blending on the client side and just looking like a
> 3d app to the xorg side. (I suspect in this sort of case you'd end up
> falling back to the GPU on the xorg side otherwise.) But I'm always
> interested to hear any other suggestions.
VDPAU indeed defines two YUVA formats you can use, but currently I
haven't seen anybody using the background picture functionality so far,
because there are just not so much YUVA videos around.
Christian.
More information about the xorg-devel
mailing list