xf86-video-tegra or xf86-video-modesetting?

Thierry Reding thierry.reding at avionic-design.de
Mon Nov 26 00:13:21 PST 2012


On Mon, Nov 26, 2012 at 05:45:38PM +1000, Dave Airlie wrote:
> On Mon, Nov 26, 2012 at 5:32 PM, Thierry Reding
> <thierry.reding at avionic-design.de> wrote:
> > On Sun, Nov 25, 2012 at 09:51:46PM -0500, Alex Deucher wrote:
> >> On Sat, Nov 24, 2012 at 4:09 PM, Thierry Reding
> >> <thierry.reding at avionic-design.de> wrote:
> >> > With the Tegra DRM driver going into Linux 3.8 and NVIDIA posting
> >> > initial patches for 2D acceleration on top of it, I've been looking
> >> > at the various ways in which this can best be leveraged.
> >> >
> >> > The most obvious choice would be to start work on an xf86-video-tegra
> >> > driver that uses the code currently in the works to implement the EXA
> >> > callbacks that allow some of the rendering to be offloaded to the GPU.
> >> > The way I would go about this is to fork xf86-video-modesetting, do some
> >> > rebranding and add the various bits required to offload rendering.
> >> >
> >> > However, that has all the usual drawbacks of a fork, so I thought it
> >> > might be better to add some code to xf86-video-modesetting that allows
> >> > GPU-specific acceleration to be layered on top. Such code could be
> >> > leveraged by other drivers as well, and all of them could share a
> >> > common base for the functionality provided through the standard DRM
> >> > IOCTLs.
> >> >
> >> > That approach has some disadvantages of its own, like the potential
> >> > for bloat if many GPU drivers do the same. It would also be a bit of
> >> > a step back to the old monolithic days of X.
> >>
> >> Just fork and fill in your own GPU-specific bits.  Most accel stuff
> >> ends up being very GPU-specific.
> >
> > That doesn't exclude the alternative that I described. Maybe I didn't
> > express what I had in mind very clearly. What I propose is to add some
> > code to the modesetting driver that would allow GPU-specific code to be
> > called if matching hardware is detected (perhaps as stupidly as looking
> > at the DRM driver name/version). Such code could be called from the
> > DDX's .ScreenInit and invoke the GPU-specific function to register an
> > EXA driver.
> >
> > That would allow a large body of code (modesetting, VT switching, ...)
> > to be shared among a number of drivers instead of duplicating the code
> > for each one and having to keep merging updates from the modesetting
> > driver as it evolves. So the GPU-specific acceleration would just sit on
> > top of the existing code and only be activated on specific hardware.
> > What I'm *not* proposing is to create an abstraction layer for
> > acceleration.
> >
> 
> VMware did something kind of like that initially with modesetting; it
> was a bit messier. It would be nice, though, to be able to just plug in
> things like glamor, but you still need to deal with pixmap allocation
> on a per-GPU basis.

I'm still very new to this game and probably have a lot of catching up
to do. However, I would expect it to be possible to override pixmap
allocation with GPU-specific implementations. I've been looking at some
DDX driver implementations and I seem to remember that pixmap management
was done in the EXA driver, in which case it would be part of the
GPU-specific code anyway (see the sketch below).
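
Just to illustrate what I mean, here's a rough sketch of how the
GPU-specific code could take over pixmap allocation by registering an
EXA driver with EXA_HANDLES_PIXMAPS set. The tegra_* functions are
purely hypothetical names and assumed to be declared elsewhere; this
isn't working code, just the shape of it:

#include "exa.h"

static Bool
tegra_exa_screen_init(ScreenPtr pScreen)
{
    ExaDriverPtr exa = exaDriverAlloc();

    if (!exa)
        return FALSE;

    exa->exa_major = EXA_VERSION_MAJOR;
    exa->exa_minor = EXA_VERSION_MINOR;

    /* placeholder hardware limits and alignments */
    exa->maxX = 4096;
    exa->maxY = 4096;
    exa->pixmapOffsetAlign = 256;
    exa->pixmapPitchAlign = 64;

    /* let the driver own pixmap storage */
    exa->flags = EXA_OFFSCREEN_PIXMAPS | EXA_HANDLES_PIXMAPS;

    /* pixmap allocation stays GPU-specific */
    exa->CreatePixmap2     = tegra_create_pixmap;
    exa->DestroyPixmap     = tegra_destroy_pixmap;
    exa->PrepareAccess     = tegra_prepare_access;
    exa->FinishAccess      = tegra_finish_access;
    exa->PixmapIsOffscreen = tegra_pixmap_is_offscreen;

    /* 2D offload hooks */
    exa->PrepareSolid = tegra_prepare_solid;
    exa->Solid        = tegra_solid;
    exa->DoneSolid    = tegra_done_solid;
    exa->PrepareCopy  = tegra_prepare_copy;
    exa->Copy         = tegra_copy;
    exa->DoneCopy     = tegra_done_copy;
    exa->WaitMarker   = tegra_wait_marker;

    if (!exaDriverInit(pScreen, exa)) {
        free(exa);
        return FALSE;
    }

    return TRUE;
}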

> I'd rather make the modesetting code into a library, either separate
> or in the X server, but I've also investigated that and found it was
> too much effort for me at the time.

That idea occurred to me as well. Given my lack of experience, I'm not
sure I'd be very well suited for the job if you already judged it to be
too much effort...
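
To make the dispatch idea from above a bit more concrete, here's roughly
what I imagine the modesetting driver could do at ScreenInit time. Again,
the table and the tegra_exa_init() hook are made-up names; the only real
interfaces used here are drmGetVersion()/drmFreeVersion() from libdrm:

#include <string.h>
#include <xf86.h>
#include <xf86drm.h>

/* hypothetical per-GPU hook, implemented by the GPU-specific code */
extern Bool tegra_exa_init(ScreenPtr pScreen, int drm_fd);

struct gpu_accel {
    const char *drm_name;     /* kernel DRM driver name */
    Bool (*screen_init)(ScreenPtr pScreen, int drm_fd);
};

static const struct gpu_accel accel_table[] = {
    { "tegra", tegra_exa_init },
    /* other GPUs could be added here */
};

static Bool
accel_screen_init(ScreenPtr pScreen, int drm_fd)
{
    drmVersionPtr version = drmGetVersion(drm_fd);
    Bool ret = FALSE;
    unsigned int i;

    if (!version)
        return FALSE;

    /* match on the DRM driver name and hand over to GPU-specific code */
    for (i = 0; i < sizeof(accel_table) / sizeof(accel_table[0]); i++) {
        if (!strcmp(version->name, accel_table[i].drm_name)) {
            ret = accel_table[i].screen_init(pScreen, drm_fd);
            break;
        }
    }

    drmFreeVersion(version);

    /* FALSE just means: keep going with plain, unaccelerated modesetting */
    return ret;
}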

Thierry