dummy driver and maximum resolutions, config hacks via LD_PRELOAD, etc

Adam Jackson ajax at nwnk.net
Wed Apr 6 07:13:14 PDT 2011


On 4/6/11 6:51 AM, Antoine Martin wrote:

> 1) I can't seem to make it use resolutions higher than 2048x2048 which
> is a major showstopper for me:
> Virtual height (2560) is too large for the hardware (max 2048)
> Virtual width (3840) is too large for the hardware (max 2048)
>
> Seems bogus to me. I've tried giving it more RAM, giving it a very wide
> range of vsync and hsync, adding modelines for these large modes, etc.
> No go.

It is bogus; the driver has an arbitrary limit.  Look for the call to 
xf86ValidateModes in the source, and compare that to (for example) 
what the vesa driver does.
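
As a rough sketch (paths assume stock checkouts of the two drivers), 
something like this will show the call in question and the hard-coded 
maximums next to it:

    grep -n -A8 xf86ValidateModes xf86-video-dummy/src/dummy_driver.c
    grep -n -A8 xf86ValidateModes xf86-video-vesa/src/vesa.c

If memory serves, the 2048s in the dummy driver's call are its max 
pitch and max height arguments; bumping those and rebuilding the 
driver is the quick-and-dirty fix.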

> Wish-list: it would also be nice not to have to specify Modelines for
> the "dummy monitor" since it should be able to handle things like
> 3840x2560 as long as enough RAM is allocated to it, right?
> I had to add this one to get 2048x2048:
> Modeline "2048x2048@10" 49.47 2048 2080 2264 2296 2048 2097 2101 2151

It should, but...
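
In the meantime, cvt (or gtf) will spit out a usable Modeline so you 
don't have to work the timings out by hand; for example (the refresh 
argument is optional, and a low one keeps the pixel clock small):

    cvt 3840 2560 10

then paste the resulting Modeline line into the dummy Monitor section.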

> 2) If I resize this dummy screen with randr, do I save any memory or cpu
> usage during rendering? Are there any benefits at all?
> It doesn't seem to have any effect on the process memory usage; I
> haven't measured CPU usage yet, but I assume there will be at least some
> savings there. (the scenario that interests me is just one application
> running in the top-left corner - does the unused space matter much?)
> I may have dozens of dummy sessions, so savings would add up.

The dummy driver is still using the pre-randr-1.2 model where the 
framebuffer is statically allocated up front.  It would need a bit of 
work to port to the new model.  Once you'd done that, though, you'd 
pretty much just let it start at whatever resolution it wanted, and then 
feed in the desired size with xrandr --addmode at runtime.
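
Very roughly, and assuming the ported driver ended up exposing an 
output named something like DUMMY0 (that name is made up here), the 
runtime dance would look like:

    # timings come from cvt/gtf output, not invented here
    xrandr --newmode "3840x2560" <clock and timings from cvt>
    xrandr --addmode DUMMY0 3840x2560
    xrandr --output DUMMY0 --mode 3840x2560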

You'd probably find that the difference in CPU usage was marginal at 
best; you'd only win with smaller framebuffers to the extent that things 
fit in cache better.  But the "unused" space does matter.  We do an 
initial paint of black on the root window, which means all the pages are 
going to be real and not just implied maps of /dev/zero.
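
(For scale: a 3840x2560 framebuffer at 32bpp is 3840 x 2560 x 4 bytes, 
just under 40MB, and that initial paint touches all of it.  Across a 
few dozen sessions that is real memory, which is why shrinking the 
virtual size would be worth something even if the CPU win is small.)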

> 3) Are there any ways of doing what the LD_PRELOAD hacks from Xdummy*
> do, but in a cleaner way? That is:
> * avoid vt switching completely

-novtswitch and possibly also -sharevts.  That part of the problem is a 
bit icky, though; it really needs someone to think through the design more.
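
i.e. something along the lines of (the display number is just an example):

    Xorg :1 -novtswitch -sharevts

Both options are documented in the Xorg man page.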

> * avoid input device probing (/dev/input/*)

I think, though I am not sure, that the trick I put in place forever ago 
of explicitly loading the 'void' input driver will turn that off.
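
Something like this is what I mean; the identifiers are just examples, 
and turning off AutoAddDevices on top of it should keep the server 
from hotplugging anything from udev at all:

    Section "ServerFlags"
        Option "AutoAddDevices" "false"
    EndSection

    Section "InputDevice"
        Identifier "dummy-pointer"
        Driver     "void"
    EndSection

    Section "InputDevice"
        Identifier "dummy-keyboard"
        Driver     "void"
    EndSection

with the two InputDevice sections referenced from your ServerLayout.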

> * load config files from user defined locations (not /etc/X11)
> * write log file to user defined location (not /var/log)

man Xorg, and look for the bits about -config and -logfile.
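
For instance (the paths here are placeholders):

    Xorg :1 -config /path/to/xdummy.conf -logfile /path/to/Xorg.1.log

(add -novtswitch and -sharevts from above as needed).  The man page 
also spells out where -config is allowed to point when the server is 
not started as root.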

> 4) Acceleration... Now this last bit really is a lot more far-fetched;
> maybe I am just daydreaming.
> Wouldn't it be possible to use real graphics cards for acceleration, but
> without dedicating it to a single Xdummy/Xvfb instance?
> What I am thinking is that I may have an under-used graphics card in a
> system, or even a spare GPU (secondary card) and it would be nice
> somehow to be able to use this processing power from Xdummy instances. I
> don't understand the GEM/Gallium kernel vs X server demarcation line, so
> maybe the card is locked to a single X server and this is never going to
> be possible.

Possible, but not a near-term kind of project yet.

- ajax


