[PATCH] RFCv2: video support for dri2
Younes Manton
younes.m at gmail.com
Thu Sep 1 15:22:30 PDT 2011
On Thu, Sep 1, 2011 at 4:52 PM, Rob Clark <rob at ti.com> wrote:
> To allow the potential use of overlays to display video content, a few
> extra parameters are required:
>
> + source buffer in different format (for example, various YUV formats)
> and size as compared to destination drawable
> + multi-planar formats where discontiguous buffers are used for
> different planes. For example, luma and chroma split across
> multiple memory banks or with different tiled formats.
> + flipping between multiple back buffers, perhaps not in order (to
> handle video formats with B-frames)
> + cropping during swap.. in case of video, perhaps the required hw
> buffers are larger than the visible picture to account for codec
> borders (for example, reference frames where a block/macroblock
> moves past the edge of the visible picture, but back again in
> subsequent frames).
>
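To make the multi-planar and crop requirements above concrete, here is a
rough sketch of the kind of per-buffer description involved; the struct
and field names are purely illustrative, not taken from the patch:

    /* Illustrative only: one possible shape for describing a multi-planar
     * video buffer plus the crop applied at swap time. */
    struct video_plane {
        unsigned int name;    /* per-plane buffer handle (GEM name or similar) */
        unsigned int pitch;   /* bytes per row, may differ per plane */
        unsigned int offset;  /* offset of the plane within its buffer */
    };

    struct video_buffer {
        unsigned int fourcc;          /* e.g. NV12, YV12, I420 */
        unsigned int width, height;   /* full (padded) buffer dimensions */
        unsigned int num_planes;
        struct video_plane planes[3];
        /* visible region inside the padded buffer (codec borders are
         * cropped away at swap time) */
        unsigned int crop_x, crop_y, crop_w, crop_h;
    };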
> Current solutions use the GPU to do a scale/colorconvert into a DRI2
> buffer from the client context. The goal of this protocol change is
> to push the decision between an overlay and a GPU blit down to the
> xorg driver.
>
> In many cases, an overlay will avoid several passes through memory
> (blit/scale/colorconvert to DRI back-buffer on client side, blit to
> front and fake-front, and then whatever compositing is done by the
> window manager). In addition, overlays can often be handled
> directly by the scanout engine in the display hardware, with the GPU
> switched off.
>
> The disadvantages of overlays are that they are (usually) a limited
> resource, sometimes with scaling constraints, and certainly with
> limitations on transformational effects.
>
> The goal of combining video and dri2 is to have the best of both worlds,
> to have the flexibility of GPU blitting (i.e. no limit on the number of
> video ports, no constraints on transformational effects), while still having
> the power consumption benefits of overlays (reduced memory bandwidth
> usage and ability to shut off the GPU) when the UI is relatively
> static other than the playing video.
>
> Note: video is not exactly the same as 3d; there are a number of other
> things to consider (scaling, colorconvert, multi-planar formats). But
> the principle is similar (direct rendering from hw video codecs), and
> a lot of the infrastructure (connection, authentication) is the same.
> So there are two options: either extend DRI2, or add a new protocol
> that duplicates some parts. I'd like to consider extending DRI2 first,
> but if people think the requirements for video are too different from
> 3d, then I could split this into a new protocol.
...
> +┌───
> + DRI2SetAttribute
> + drawable: DRAWABLE
> + attribute: ATOM
> + value: INT32
> + ▶
> +└───
> + Errors: Window, Match, Value
> +
> + The DRI2SetAttribute request sets the value of a drawable attribute.
> + The drawable attribute is identified by the attribute atom. The
> + following strings are guaranteed to generate valid atoms using the
> + InternAtom request.
> +
> + String Type
> + -----------------------------------------------------------------
> +
> + "XV_ENCODING" ENCODINGID
> + "XV_HUE" [-1000..1000]
> + "XV_SATURATION" [-1000..1000]
> + "XV_BRIGHTNESS" [-1000..1000]
> + "XV_CONTRAST" [-1000..1000]
> + "XV_WIDTH" [0..MAX_INT]
> + "XV_HEIGHT" [0..MAX_INT]
> + "XV_OSD" XID
> +
> + If the given attribute doesn't match an attribute supported by the
> + drawable, a Match error is generated. The supplied encoding
> + must be one of the encodings listed for the adaptor; otherwise an
> + Encoding error is generated.
> +
> + If the adaptor doesn't support the exact hue, saturation,
> + brightness, and contrast levels supplied, the closest levels
> + supported are assumed. The DRI2GetAttribute request can be used
> + to query the resulting levels.
> +
> + The "XV_WIDTH" and "XV_HEIGHT" attributes default to zero, indicating
> + that no scaling is performed and the buffer sizes match the drawable
> + size. They can be overridden by the client if scaling is desired.
> +
> + The "XV_OSD" attribute specifies the XID of a pixmap containing
> + ARGB data to be non-destructively overlayed over the video. This
> + could be used to implement subtiles, on-screen-menus, etc.
> +
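As a rough illustration of how a client might drive this, assuming a
hypothetical DRI2SetAttribute wrapper (the actual binding would depend on
how the proto headers end up looking):

    #include <X11/Xlib.h>

    /* assumed wrapper around the DRI2SetAttribute request above;
     * not an existing libdri2 entry point */
    extern void DRI2SetAttribute(Display *dpy, Drawable draw,
                                 Atom attribute, int value);

    static void setup_video_drawable(Display *dpy, Window win,
                                     int src_w, int src_h, Pixmap osd)
    {
        /* source buffer size differs from the drawable size, so the
         * driver knows it has to scale */
        DRI2SetAttribute(dpy, win, XInternAtom(dpy, "XV_WIDTH", False), src_w);
        DRI2SetAttribute(dpy, win, XInternAtom(dpy, "XV_HEIGHT", False), src_h);

        /* procamp-style attributes; the driver clamps to the nearest
         * supported level */
        DRI2SetAttribute(dpy, win, XInternAtom(dpy, "XV_BRIGHTNESS", False), 0);
        DRI2SetAttribute(dpy, win, XInternAtom(dpy, "XV_CONTRAST", False), 0);

        /* ARGB pixmap overlaid non-destructively (subtitles, OSD menus) */
        DRI2SetAttribute(dpy, win, XInternAtom(dpy, "XV_OSD", False), (int)osd);
    }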
> + : TODO: Is there a need to support DRI2SetAttribute for non-video
> + : DRI2DRIVER types?
> + :
> + : TODO: Do we need to keep something like PortNotify.. if attributes
> + : are only changing in response to DRI2SetAttribute from the client,
> + : then having a PortNotify-like mechanism seems overkill. The assumption
> + : here is that, unlike Xv ports, DRI2 video drawables are not a limited
> + : resource (i.e. if you run out of (or don't have) hardware overlays, then
> + : you use the GPU to do a colorconvert/scale/blit). So there is not a
> + : need to share "ports" between multiple client processes.
Are you targeting/limiting this to a particular API (or the customary
limitations of overlay HW)? I ask because VDPAU allows clients to pass
in an arbitrary colour conversion matrix rather than color
standard/hue/sat/bri/con, so it wouldn't be possible to use this in
that context. Also in general, their compositing API is a lot more
flexible and allows for a background + multiple layers, rather than
just a single layer. I suppose you could pre-flatten the layers into a
single one, but the background would be problematic.
VA, on the other hand, lets clients query for matrix and h/s/b/c
attribute support and seems to have a simpler compositing API, so it
seems doable with this, and of course Xv does too.
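
To spell out the mismatch: a client-supplied CSC matrix (in VDPAU a 3x4
float matrix applied to [Y Cb Cr 1]) carries strictly more information
than the four scalar attributes above. A rough illustration with BT.601
limited-range coefficients:

    /* Illustration only: plain BT.601 YCbCr->RGB as a 3x4 matrix over
     * [Y Cb Cr 1], 8-bit limited range, with the -16/-128 biases folded
     * into the offset column. */
    static const float bt601[3][4] = {
        /*   Y        Cb       Cr     offset */
        { 1.164f,  0.000f,  1.596f, -222.9f },  /* R */
        { 1.164f, -0.392f, -0.813f,  135.6f },  /* G */
        { 1.164f,  0.000f,  2.017f, -276.8f },  /* B */
    };
    /* Every hue/sat/brightness/contrast setting reduces to *some* matrix
     * of this shape, but an arbitrary matrix supplied by a VDPAU client
     * generally has no h/s/b/c equivalent that the proposed attributes
     * could express. */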