RFC: video with DRI2

Younes Manton younes.m at gmail.com
Wed Jul 20 21:26:49 PDT 2011


On Wed, Jul 20, 2011 at 11:40 PM, Rob Clark <robdclark at gmail.com> wrote:
> On Wed, Jul 20, 2011 at 5:53 PM, Younes Manton <younes.m at gmail.com> wrote:
>> On Wed, Jul 20, 2011 at 6:28 PM, Corbin Simpson
>> <mostawesomedude at gmail.com> wrote:
>>> On Wed, Jul 20, 2011 at 3:15 PM, Rob Clark <robdclark at gmail.com> wrote:
>>>>
>>>> Anyone have some opinions on the best approach to take?  Anyone else
>>>> given some thought to this sort of thing before?
>>>>
>>>
>>> If I recall correctly, DRI2 can transport VDPAU and there is some
>>> libvdpau stuff in the Mesa/Gallium source tree. I haven't really been
>>> heavily involved, but I would imagine that that might be interesting
>>> to you.
>>>
>>> ~ C.
>>>
>>
>> The code in mesa does everything client side, since overlays are pretty
>> much extinct, so it doesn't really have a need for any of this, although
>> more control over the swap chain might be useful.
>>
>
> yeah, and therein lies the challenge.. ;-)
>
> I'm dealing with smaller embedded devices (i.e. ones that run off
> batteries) where overlays are actually useful from a power/efficiency
> standpoint.  But at the same time, video decode hw has some requirements
> that normal shmem-allocated buffers don't meet, so Xv is not terribly
> helpful.  The aspect of DRI2 of just getting a GEM buffer handle
> shared between xorg and client is 90% of what I need.
>
> BR,
> -R
>

Understood. Since you mentioned the need for 16 buffers, B frames,
zero-copy, etc., it seems these frames are not only for display but
also serve as reference frames. Doesn't that mean you actually need a
couple of other things as well?

1. Not just cropping and larger buffers, but buffer sizes that are
fully independent of the drawable's size, an interface that can
specify an arbitrary buffer -> drawable mapping (cropping, scaling up,
scaling down), and some notion of a background color for the area the
buffer doesn't cover? Or are decoded frames guaranteed to be
drawable-sized plus a bit extra at the edges? (Rough sketch of what I
have in mind below, after the list.)
2. A way to specify a standard for color conversion, or preferably, an
arbitrary matrix? Or are you thinking of a single fixed standard? (See
the second sketch below.)
3. Some kind of OSD (i.e. compositing, with both YUV and RGB surfaces)
support? Or is that not required?
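
For (1), here is a rough, purely hypothetical sketch of the kind of
per-present information I mean; none of these names exist in DRI2
today, it's just to make the question concrete:

#include <stdint.h>

/* Hypothetical sketch only -- not an existing DRI2 request.
 * The point is that the buffer size is decoupled from the drawable:
 * src selects the region of the decoded frame to show (cropping),
 * dst selects where it lands in the drawable (scaling up or down),
 * and bg_color fills whatever part of the drawable dst doesn't cover. */
struct rect {
    int16_t  x, y;
    uint16_t w, h;
};

struct video_present_info {
    uint32_t    buffer_name;  /* DRI2/GEM buffer name */
    struct rect src;          /* region of the buffer to display */
    struct rect dst;          /* region of the drawable to cover */
    uint32_t    bg_color;     /* ARGB fill for the rest of the drawable */
};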
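
And for (2), a minimal illustration of why a client-supplied matrix is
nicer than a single fixed standard: the same code handles BT.601 and
BT.709 content just by swapping coefficients. Full-range BT.601 is
shown below; the values are the usual textbook ones, not taken from
any existing interface.

/* Apply a client-supplied 3x3 matrix to one full-range YCbCr pixel.
 * A fixed-function path would bake one coefficient set in; passing
 * the matrix lets the client pick BT.601, BT.709, or anything else. */
static void ycbcr_to_rgb(float y, float cb, float cr, const float m[3][3],
                         float *r, float *g, float *b)
{
    cb -= 128.0f;
    cr -= 128.0f;
    *r = m[0][0] * y + m[0][1] * cb + m[0][2] * cr;
    *g = m[1][0] * y + m[1][1] * cb + m[1][2] * cr;
    *b = m[2][0] * y + m[2][1] * cb + m[2][2] * cr;
}

/* Full-range BT.601; BT.709 would use 1.5748, -0.1873, -0.4681, 1.8556. */
static const float bt601[3][3] = {
    { 1.0f,  0.0f,       1.402f    },
    { 1.0f, -0.344136f, -0.714136f },
    { 1.0f,  1.772f,     0.0f      },
};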

