XDC: randr extension for full display state setup (full set of outputs and crtcs in one shot)

Adam Jackson ajax at nwnk.net
Tue Oct 6 08:07:02 PDT 2009


On Mon, 2009-10-05 at 19:52 -0700, Keith Packard wrote:
> On Mon, 2009-10-05 at 20:12 -0400, Alex Deucher wrote:
> > 
> > You may have two or more huge monitors connected to a
> > low-end card that can't drive both at full res due to bandwidth
> > limitations.
> 
> It seems to me that the actual requirement here is that the client
> needs to know which configurations are possible, and that doing the
> mode set atomically isn't actually relevant. As long as the client can
> discover the set of possible mode combinations, setting the selected
> configuration can use existing RandR protocol.

I think you do still want an atomic setup call.  Otherwise, in the
general case, you have to tear down existing CRTC state to get to the
desired CRTC state, which means sending (at least) twice as many RANDR
events to clients.

Granted, this is already a thundering herd: as a side effect of how
gobject signals work, every gtk app on the system wakes up for every
RANDR event.  But there's no reason to make it seven thundering herds
if we don't have to.
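
For concreteness, here's roughly what the non-atomic path looks like
from a client today, using the existing xcb-randr bindings.  The helper
name, the geometry, and the blanket "disable everything first" policy
are made up for illustration; the point is just that every
SetCrtcConfig generates its own round of CrtcChange notifications:

    #include <xcb/xcb.h>
    #include <xcb/randr.h>

    /* Sketch only: error handling and GetScreenResources bookkeeping
     * omitted.  crtcs[], modes[] and outputs[] are assumed to come
     * from a prior xcb_randr_get_screen_resources() reply. */
    static void
    reconfigure_two_pass(xcb_connection_t *c, xcb_timestamp_t cfg_ts,
                         xcb_randr_crtc_t *crtcs, int ncrtc,
                         xcb_randr_mode_t *modes,
                         xcb_randr_output_t *outputs)
    {
        int i;

        /* Pass 1: tear every CRTC down so the intermediate state
         * can't violate whatever constraint the old layout was up
         * against.  Each call fires a CrtcChange notification at
         * every interested client. */
        for (i = 0; i < ncrtc; i++)
            xcb_randr_set_crtc_config(c, crtcs[i], XCB_CURRENT_TIME,
                                      cfg_ts, 0, 0, XCB_NONE,
                                      XCB_RANDR_ROTATION_ROTATE_0,
                                      0, NULL);

        /* Pass 2: bring them back up in the desired layout (positions
         * picked arbitrarily here).  Another CrtcChange per CRTC, so
         * clients see (at least) twice as many events as the final
         * state warrants. */
        for (i = 0; i < ncrtc; i++)
            xcb_randr_set_crtc_config(c, crtcs[i], XCB_CURRENT_TIME,
                                      cfg_ts, i * 1920, 0, modes[i],
                                      XCB_RANDR_ROTATION_ROTATE_0,
                                      1, &outputs[i]);
    }

A hypothetical one-shot "set all CRTCs" request would let the server
validate and apply the whole layout at once and flush a single batch
of notifications instead.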

> Adding protocol to expose the possible combinations of modes seems
> fairly easy to me; getting applications to use that may be more of a
> challenge. That seems like a fairly simple N-dimensional boolean
> array. If you have four outputs with 20 modes each, that's 160000
> bits. Alternatively, you could set up some way to query the system for
> a subset of this array.

So, given the choice between an API that's simple for applications, and
one that's complicated for applications, you prefer the one that's
complicated?
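
To put a number on the quoted figure (four outputs with 20 modes each
is 20^4 = 160000 combinations, one bit apiece), here's the trivial
back-of-the-envelope loop; the output and mode counts are made up, the
growth rate is the point:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int outputs, modes;

        for (outputs = 2; outputs <= 6; outputs++)
            for (modes = 20; modes <= 40; modes += 20) {
                double bits = pow(modes, outputs);
                printf("%d outputs x %2d modes -> %12.0f bits (%.1f KiB)\n",
                       outputs, modes, bits, bits / 8 / 1024.0);
            }
        return 0;
    }

The absolute sizes stay manageable, but the table is exponential in the
number of outputs, which is presumably why the subset-query variant
came up at all.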

> I have to say that I'm a little surprised that graphics hardware has
> these kinds of limits nowadays; a 2560x1600 display only consumes
> about 1GB/sec of memory bandwidth. Don't modern GPUs have around
> 100GB/s of memory bandwidth available?

Yours might.  If you set up the watermarks correctly, you might even get
that much reliably.  But:

a) There's plenty of current hardware with anemic bandwidth
capabilities, because it's cheaper.  xf86ModeBandwidth() was introduced
for _new_ server chips, not for cirrus.

b) Just because the memory is that fast doesn't mean the CRTC can get
that much out of the memory controller.  As foolish as it may seem, some
people really have made chips where the scanout engine can lose
arbitration to the rendering engine.  Or to the CPU.

c) In a GPGPU scenario, available bandwidth is not purely a function of
whatever the currently-active X server happens to think it's got.

d) There are plenty of older chips we'd like to still support.
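
For reference, the "about 1GB/sec" figure quoted above is just the
scanout refetch arithmetic, the same sort of estimate
xf86ModeBandwidth() exists to make.  The 32bpp and 60Hz numbers below
are my assumptions, not anything from the quoted mail:

    #include <stdio.h>

    int main(void)
    {
        double width   = 2560;   /* pixels */
        double height  = 1600;
        double bytespp = 4;      /* 32bpp scanout */
        double refresh = 60;     /* Hz */

        /* Scanout refetches the whole framebuffer every refresh. */
        double bytes_per_sec = width * height * bytespp * refresh;
        printf("%.2f GB/s\n", bytes_per_sec / 1e9);   /* ~0.98 GB/s */
        return 0;
    }

And that ~1GB/s is only the steady-state fetch for one head; it says
nothing about whether the CRTC actually wins arbitration for it, which
is the point of (b) and (c) above.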

- ajax