[PULL] Add VDPAU drivers to the server's DRI2 implementation
Aaron Plattner
aplattner at nvidia.com
Tue Oct 27 12:42:50 PDT 2009
On Mon, Oct 26, 2009 at 02:58:50PM -0700, Kristian Høgsberg wrote:
> 2009/10/26 Aaron Plattner <aplattner at nvidia.com>:
> > On Mon, Oct 26, 2009 at 10:45:31AM -0700, Kristian Høgsberg wrote:
> >> 2009/10/26 Aaron Plattner <aplattner at nvidia.com>:
> >> > On Sat, Oct 24, 2009 at 08:56:11AM -0700, Kristian Høgsberg wrote:
> >> >> On Fri, Oct 23, 2009 at 8:13 PM, Aaron Plattner <aplattner at nvidia.com> wrote:
> >> >> > Hi Keith,
> >> >> >
> >> >> > These changes add VDPAU driver name registration support to the X server.
> >> >> > I extended the driver type mechanism to allow a full DRI2InfoRec for VDPAU
> >> >> > drivers instead of just a name so that other drivers can return information
> >> >> > through the deviceName parameter. libvdpau currently doesn't have a way of
> >> >> > passing the deviceName through to the backend driver, but we could add a
> >> >> > way to do that if it becomes necessary.
> >> >>
> >> >> I don't think we need a new DRI2InfoRec for this. Can we just add a
> >> >> field that's an array of driver names instead and a field that gives
> >> >> the length of that array? Do you need a new device name for vdpau?
> >> >
> >> > Well, what I really need is a way to get the driver name into libvdpau
> >> > without also specifying that my driver implements the DRI2 interface. The
> >> > NVIDIA driver doesn't need to specify a device name at all: the VDPAU
> >> > backend library determines that through our private protocol. I don't
> >> > really have a good sense of what a DRI2 VDPAU driver would look like
> >> > architecturally, but my assumption was that it would call DRI2Connect with
> >> > DRI2DriverDRI like any other DRI client would, after libvdpau used
> >> > DRI2DriverVDPAU to get the appropriate libvdpau_*.so backend name. Would
> >> > it make sense to have a single DRI2DriverRec with one name, one device path, and
> >> > a bitmask of supported driver types? E.g.
> >>
> >> Do you have a DRI2 based VDPAU driver or are you just putting the
> >> infrastructure in place for potential VDPAU drivers? If you don't
> >> have a driver on the way, maybe it makes more sense to add the DRI2
> >> protocol for VDPAU when we have an actual driver?
> >
> > No, I don't. I was just trying to put the infrastructure in place.
> > Currently, libvdpau reads an environment variable to determine which driver
> > to load and, if that's not set, defaults to "nvidia". That's obviously not
> > suitable for DRI2 drivers, and this is what was suggested at XDC.
>
> Ok, that's a good point, and using DRI2 to figure out which driver to
> load seems like a good use of the API. Just curious though, since
> you're not using the DRI2GetBuffers and DRI2CopyRegion requests I
> assume you're using the same mechanism as you do for GL to copy
> contents into the X server buffers?
That's right, we have our own internal mechanism for doing something
similar.
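
To make the libvdpau side of this concrete, the selection logic it does
today, plus where the proposed DRI2 query would slot in, is roughly the
sketch below. VDPAU_DRIVER and the libvdpau_*.so naming are real;
dri2_get_vdpau_driver_name is a hypothetical helper that would wrap the
new DRI2Connect usage, not an existing function:

    #include <stdio.h>
    #include <stdlib.h>

    #define DRI2DriverVDPAU 1

    /* Hypothetical helper: would issue DRI2Connect with
     * driverType = DRI2DriverVDPAU and return the advertised driver
     * name, or NULL if the server doesn't support that type. */
    extern const char *dri2_get_vdpau_driver_name(void *dpy);

    static int
    build_backend_path(void *dpy, char *path, size_t len)
    {
        /* Today: the environment variable wins... */
        const char *name = getenv("VDPAU_DRIVER");

        /* Proposed: ask the X server before falling back... */
        if (name == NULL)
            name = dri2_get_vdpau_driver_name(dpy);

        /* ...and only then default to "nvidia". */
        if (name == NULL)
            name = "nvidia";

        /* Backends are named libvdpau_<driver>.so. */
        return snprintf(path, len, "libvdpau_%s.so", name);
    }
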
> >> > static const DRI2DriverRec nvidiaVDPAUDriver = {
> >> >     .version = DRI2INFOREC_VERSION,
> >> >     .fd = -1,
> >> >     .driverName = "nvidia",
> >> >     .driverTypeMask = (1 << DRI2DriverVDPAU),
> >> > };
> >> >
> >> > Presumably a driver with a mask of (1 << DRI2DriverDRI) | (1 <<
> >> > DRI2DriverVDPAU) could use the same name for both kinds of drivers.
> >> >
> >> > I could also just add completely new handling for DRI2DriverVDPAU such that
> >> > it only contains a name. That's what I did originally, but figured
> >> > somebody might want the extra flexibility. I'll implement whichever
> >> > interface you guys prefer.
> >>
> >> The only thing DRI2 should do in this case is to give out a different
> >> driver name for different driver types. The thinking is that for
> >> different driver types (GL, VDPAU, VAAPI, cairo-drm, etc), the
> >> implementation may be split into differently named .so's and may even
> >> group the chip support differently. For example, for mesa, the intel
> >> driver is either i915_dri.so or i965_dri.so, but cairo-drm may just
> >> load cairo-drm-intel.so for all chipsets (just an example). The drm
> >> device file to use is the same in all cases though.
> >
> > Oh, eww. I didn't realize that's how driverType was intended to be used.
> > For drivers that use DRI2 as their backend implementation to talk to the
> > hardware (e.g. the Mesa/Gallium GL drivers or Cairo-DRM), I assumed the
> > server wouldn't really care what they were and they'd just go through the
> > same generic DRI2DriverDRI interface.
> >
> >> Right now, the DRI2 interface in the server doesn't support this,
> >> since it only allows one driver name to be specified in the
> >> DRI2InfoRec. That's an oversight on my part. But we can fix it by
> >> adding an array of driver names to the struct and a length field. If
> >> the requested driver type passed to DRI2Connect is outside the
> >> array or the driver name entry is NULL, that driver type is not
> >> supported and we can return BadValue. Otherwise, we fill out the
> >> return values and proceed as normal.
> >
> > So I'd just pass in .driverNames = { NULL, "nvidia" }, .driverCount = 2?
> > I'll need to go through and make do_get_buffers and friends check whether
> > ds->{Create,Destroy}Buffer, etc. are NULL if we go that route, since those
> > requests won't be implemented.
>
> Yup, exactly.
Okay, I'll try to code that up along with the various NULL checks sometime
soon.
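
For reference, here's a minimal sketch of the lookup and the NULL checks
as I understand the proposal. The driverNames/numDrivers field names and
the stand-in types are placeholders, not the final interface:

    #include <stddef.h>

    /* Stand-ins for the real definitions in dri2.h / dri2proto.h. */
    #define DRI2DriverDRI   0
    #define DRI2DriverVDPAU 1
    #define Success         0
    #define BadValue        2

    typedef struct {
        const char  *deviceName;
        const char **driverNames;  /* proposed: indexed by driver type */
        unsigned int numDrivers;   /* proposed: length of driverNames */
        void (*CreateBuffer)(void);   /* stand-ins for the real hooks */
        void (*DestroyBuffer)(void);
    } DRI2ScreenRec;

    /* DRI2Connect: a driver type outside the array, or a NULL entry,
     * means that type isn't supported -> BadValue. */
    static int
    dri2_connect(DRI2ScreenRec *ds, unsigned int driverType,
                 const char **driverName, const char **deviceName)
    {
        if (driverType >= ds->numDrivers ||
            ds->driverNames[driverType] == NULL)
            return BadValue;

        *driverName = ds->driverNames[driverType];
        *deviceName = ds->deviceName;
        return Success;
    }

    /* do_get_buffers and friends grow a check for drivers that
     * registered a name but no buffer hooks. */
    static int
    do_get_buffers(DRI2ScreenRec *ds)
    {
        if (ds->CreateBuffer == NULL || ds->DestroyBuffer == NULL)
            return BadValue;  /* or whatever error we settle on */
        /* ... normal buffer handling ... */
        return Success;
    }

With .driverNames = (const char *[]){ NULL, "nvidia" } and .numDrivers = 2,
a DRI2Connect asking for DRI2DriverVDPAU (type 1) gets "nvidia" back, while
DRI2DriverDRI (type 0) hits the NULL entry and fails with BadValue.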
Thanks a lot for your feedback.
-- Aaron