[PATCH] RFCv2: video support for dri2
rob.clark at linaro.org
Thu Sep 1 16:21:42 PDT 2011
On Thu, Sep 1, 2011 at 5:22 PM, Younes Manton <younes.m at gmail.com> wrote:
> On Thu, Sep 1, 2011 at 4:52 PM, Rob Clark <rob at ti.com> wrote:
>> To allow the potential use of overlays to display video content, a few
>> extra parameters are required:
>> + source buffer in different format (for example, various YUV formats)
>> and size as compared to destination drawable
>> + multi-planar formats where discontiguous buffers are used for
>> different planes. For example, luma and chroma split across
>> multiple memory banks or with different tiled formats.
>> + flipping between multiple back buffers, perhaps not in order (to
>> handle video formats with B-frames)
>> + cropping during swap.. in case of video, perhaps the required hw
>> buffers are larger than the visible picture to account for codec
>> borders (for example, reference frames where a block/macroblock
>> moves past the edge of the visible picture, but back again in
>> subsequent frames).
>> Current solutions use the GPU to do a scaled/colorconvert into a DRI2
>> buffer from the client context. The goal of this protocol change is
>> to push the decision to use overlay or GPU blit to the xorg driver.
>> In many cases, an overlay will avoid several passes through memory
>> (blit/scale/colorconvert to DRI back-buffer on client side, blit to
>> front and fake-front, and then whatever compositing is done by the
>> window manager). On the other hand, overlays can often be handled
>> directly by the scanout engine in the display hardware, with the GPU
>> switched off.
>> The disadvantages of overlays are that they are (usually) a limited
>> resource, sometimes with scaling constraints, and certainly with
>> limitations about transformational effects.
>> The goal of combining video and dri2 is to have the best of both worlds,
>> to have the flexibility of GPU blitting (ie. no limited number of video
>> ports, no constraint about transformational effects), while still having
>> the power consumption benefits of overlays (reduced memory bandwidth
>> usage and ability to shut off the GPU) when the UI is relatively
>> static other than the playing video.
>> Note: video is not exactly the same as 3d, there are a number of other
>> things to consider (scaling, colorconvert, multi-planar formats). But
>> on the other hand the principle is similar (direct rendering from hw
>> video codecs). And much of the infrastructure for connection and
>> authentication is the same. So there are two options: either extend
>> DRI2, or add a new protocol that duplicates some parts. I'd like to
>> consider extending DRI2 first, but if people think the requirements
>> for video are too different from 3d, then I could split this into a
>> new protocol.
>> + DRI2SetAttribute
>> + drawable: DRAWABLE
>> + attribute: ATOM
>> + value: INT32
>> + ▶
>> + Errors: Window, Match, Value
>> + The DRI2SetAttribute request sets the value of a drawable attribute.
>> + The drawable attribute is identified by the attribute atom. The
>> + following strings are guaranteed to generate valid atoms using the
>> + InternAtom request.
>> + String Type
>> + -----------------------------------------------------------------
>> + "XV_ENCODING" ENCODINGID
>> + "XV_HUE" [-1000..1000]
>> + "XV_SATURATION" [-1000..1000]
>> + "XV_BRIGHTNESS" [-1000..1000]
>> + "XV_CONTRAST" [-1000..1000]
>> + "XV_WIDTH" [0..MAX_INT]
>> + "XV_HEIGHT" [0..MAX_INT]
>> + "XV_OSD" XID
>> + If the given attribute doesn't match an attribute supported by the
>> + drawable a Match error is generated. The supplied encoding
>> + must be one of the encodings listed for the adaptor, otherwise an
>> + Encoding error is generated.
>> + If the adaptor doesn't support the exact hue, saturation,
>> + brightness, and contrast levels supplied, the closest levels
>> + supported are assumed. The DRI2GetAttribute request can be used
>> + to query the resulting levels.
>> + The "XV_WIDTH" and "XV_HEIGHT" attributes default to zero, indicating
>> + that no scaling is performed and the buffer sizes match the drawable
>> + size. They can be overridden by the client if scaling is desired.
>> + The "XV_OSD" attribute specifies the XID of a pixmap containing
>> + ARGB data to be non-destructively overlaid on the video. This
>> + could be used to implement subtitles, on-screen menus, etc.
>> + : TODO: Is there a need to support DRI2SetAttribute for non-video
>> + : DRI2DRIVER types?
>> + :
>> + : TODO: Do we need to keep something like PortNotify.. if attributes
>> + : are only changing in response to DRI2SetAttribute from the client,
>> + : then having a PortNotify like mechanism seems overkill. The assumption
>> + : here is that, unlike Xv ports, DRI2 video drawables are not a limited
>> + : resource (ie. if you run out of (or don't have) hardware overlays, then
>> + : you use the GPU to do a colorconvert/scale/blit). So there is not a
>> + : need to share "ports" between multiple client processes.
> Are you targeting/limiting this to a particular API (or the customary
> limitations of overlay HW)? I ask because VDPAU allows clients to pass
> in an arbitrary colour conversion matrix rather than color
> standard/hue/sat/bri/con, so it wouldn't be possible to use this in
> that context.
Ideally it would be something that could be used either from a
device-dependent VDPAU or VAAPI driver back-end, or in a generic
way, for example from a GStreamer sink element that could be used
with software codecs.
Well, this is the goal anyway. There is one other slight
complication for generic use: the buffer 'name' would need to be
better defined, so that the client side would know how to interpret
it and mmap it if needed. But I think there is a solution brewing.
As far as color conversion matrix... well, the attribute system can
have arbitrary device-dependent attributes. In the VDPAU case, I
suppose the implementation on the client side knows which xorg driver
it is talking to, and could introduce its own attributes. Perhaps a
bit awkward for communicating a matrix, but you could in theory have
4*3 different attributes (ie. XV_M00, XV_M01, ... XV_M23) for each
entry in the matrix.
> Also in general, their compositing API is a lot more
> flexible and allows for a background + multiple layers, rather than
> just a single layer. I suppose you could pre-flatten the layers into a
> single one, but the background would be problematic.
Yeah, pre-flatten into a single layer, I think. I mean, we *could*
push that to the xorg driver side too, but I was trying not to make
the protocol overly complicated.
I'm not sure I caught the issue about background.. or are you
thinking about video w/ AYUV? Is there any hw out there that supports
overlays w/ YUV that has an alpha channel? If this is enough of a
weird edge case, maybe it is ok to fall back in these cases to the old
way of doing the blending on the client side and just looking like a
3d app to the xorg side. (I suspect in this sort of case you'd end up
falling back to the GPU on the xorg side otherwise.) But I'm always
interested to hear any other suggestions.
> VA on the other hand lets clients query for matrix and h/s/b/c
> attribute support and seems to have a simpler compositing API, so it
> seems doable with this, and of course Xv does.