RandR 1.2 feedback

Andy Ritger aritger at nvidia.com
Wed Nov 29 01:20:03 PST 2006


Thanks for the feedback, Keith.  Sorry for the slow response.  Comments below:

On Fri, 24 Nov 2006, Keith Packard wrote:

> On Wed, 2006-11-22 at 17:38 -0800, Andy Ritger wrote:
>
>> - It would be nice if the specification tracked the list of modes
>>    per output.  Rather than have a single list of modes for the X screen,
>>    and then have each output reference whichever modes are valid for that
>>    output, it may make more sense to just store the modes per output.
>
> Right, the reason I didn't do it this way is to support 'clone' mode
> where a single crtc can drive multiple outputs. Having per-output mode
> lists would make this problematic.
>
>>    One advantage is that the user can request a mode named "auto-select",
>>    and each output could have a different mode with that name.
>
> Is this not well supported with the existing 'preferred' mode stuff? If
> not, could we support it with an output property which labeled the
> 'auto-select' mode?
>
>>    I suppose that if you ever wanted to associate additional properties
>>    with modes, but if those properties should be different per output,
>>    then storing the modes per output would make this easier.
>
> No, I've actually pared the modes down to the basics so I could support
> clone mode as described above.
>
>>    The downside of per-output modelists is that you end up with some
>>    duplication of modes that are valid for each output.  I wouldn't
>>    consider that a big deal, though.
>
> Except for clone mode, I would agree. The unfortunate thing is that I
> have many older chipsets which have only a single CRTC and multiple
> outputs; without clone mode, I can drive only one output at a time.

That's a good point; clone mode does make per-output mode lists
problematic.  Perhaps the modes should be per-CRTC instead, but that's
not a big deal.  The per-screen mode list is workable, so I'll leave
that topic alone.

>>    One major downside is that this doesn't give an implementation a
>>    good opportunity to perform validation of the complete system.
>>    The CRTC vs output distinction exposes some specific hardware
>>    capabilities/limitations to the client.  That's fine, except that
>>    there may be other restrictions.  For example, things like video memory
>>    bandwidth or TMDS Links may limit the combinations of modes that you
>>    can use on different outputs at the same time.
>
> I'd like to get a better idea of actual limits here; looking over the
> Intel docs, I was unable to discover any combination which could be
> plugged together which wouldn't work. But, I only have two crtcs.

I suppose the limits would be anything that is a shared resource between
the CRTCs.  Memory bandwidth is the only good example that comes to mind:
if the memory subsystem of the graphics hardware cannot feed all CRTCs
simultaneously when they all run at their maximum pixel clocks, then
that's a limit that cannot be validated by looking at the mode on one
output in isolation.  Granted, that's not a likely scenario.  I'm more
concerned about the system constraints that we can't foresee today.
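To make the bandwidth example concrete, here is a minimal sketch of the
kind of cross-CRTC check that per-output validation can never perform.
The struct, the function, and the bandwidth budget are all my invention
for illustration, not anything from a real driver or the spec:

```c
/* Hypothetical per-CRTC state: whether the CRTC is active and what
 * pixel clock (in kHz) its current mode requires. */
struct crtc_config {
    int active;
    unsigned long pixel_clock_khz;
};

/* Cross-CRTC validation: the sum of all active pixel clocks must stay
 * within an assumed shared memory-bandwidth budget.  Each CRTC's mode
 * may be fine in isolation while the combination is not. */
int validate_total_bandwidth(const struct crtc_config *crtcs, int n,
                             unsigned long budget_khz)
{
    unsigned long total = 0;
    for (int i = 0; i < n; i++)
        if (crtcs[i].active)
            total += crtcs[i].pixel_clock_khz;
    return total <= budget_khz;
}
```

Two 1600x1200@60 outputs might each validate individually yet exceed the
shared budget together; that is exactly the failure an incremental
per-output API can only report on the last request.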

> And, encapsulating the configuration in a container data structure
> doesn't really solve the problem; you still have no way of describing
> "why" a configuration can't be used.

Yes, the "why" feedback is missing, but providing that feedback seems
like a separate issue from detecting that the requested configuration
cannot be fulfilled.

Is it better for the client to

     - resize the root window
     - set a mode on output A
     - set a mode on output B

but have the output B mode fail because there is some conflict with what
is already setup on output A, or have the client say, "I'd like to
register a configuration that has:

     - a particular sized root window,
     - a particular mode on output A, and
     - a particular mode on output B"

and have the registering of that configuration fail?

> Anything you can set atomically can be set incrementally.

Oh, sorry, I was suggesting replacing the incremental assignments with a
single atomic assignment, giving the implementation a central place to
perform its validation.  I.e., remove the individual requests:

     RRSetScreenSize
     RRSetCrtcConfig

and replace them with new requests, roughly like this:

     RRAddScreenConfiguration: contains everything from SetScreenSize
         and SetCrtcConfig for each CRTC; implementation validates
         everything, and if the configuration is valid, puts it in a list;
         this would not actually make the requested configuration active

     RRSwitchScreenConfiguration: switches to one of the valid
         configurations, making it active

     RRDeleteScreenConfiguration: deletes an existing configuration;
         cannot be the one currently in use
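To sketch the semantics of those three proposed requests, here is a toy
client-side model.  All names, types, and the validation stub are my
invention; the point is only the lifecycle: validate-and-store on add,
activate on switch, and refuse to delete the active configuration:

```c
#define MAX_CONFIGS 8

/* Invented stand-in for everything RRSetScreenSize plus per-CRTC
 * RRSetCrtcConfig state would carry in a real request. */
struct screen_config {
    int width, height;          /* root window size */
    int in_use;                 /* slot occupied */
};

static struct screen_config configs[MAX_CONFIGS];
static int active = -1;

/* RRAddScreenConfiguration: validate everything up front; on success
 * the config is stored but NOT made active.  Returns its index, or -1
 * if validation fails or the list is full. */
int add_screen_configuration(int width, int height)
{
    if (width <= 0 || height <= 0)  /* stand-in for full validation */
        return -1;
    for (int i = 0; i < MAX_CONFIGS; i++) {
        if (!configs[i].in_use) {
            configs[i] = (struct screen_config){ width, height, 1 };
            return i;
        }
    }
    return -1;
}

/* RRSwitchScreenConfiguration: activate a previously validated config;
 * this is the only point where hardware state would change. */
int switch_screen_configuration(int idx)
{
    if (idx < 0 || idx >= MAX_CONFIGS || !configs[idx].in_use)
        return -1;
    active = idx;
    return 0;
}

/* RRDeleteScreenConfiguration: cannot delete the config in use. */
int delete_screen_configuration(int idx)
{
    if (idx < 0 || idx >= MAX_CONFIGS || idx == active ||
        !configs[idx].in_use)
        return -1;
    configs[idx].in_use = 0;
    return 0;
}
```

Keeping previously validated configurations around is also what makes a
cheap "revert to the last known-good config" possible later.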

>>    Exposing CRTC vs output in the spec seems OK, but seems insufficient
>>    to reflect all hardware restrictions; and I think hardware changes
>>    too rapidly to realistically reflect all the various limitations that
>>    hardware might have.
>
> Yup. I punted and let the driver just say "sorry, can't do that Dave".
>
>>    This is nice because adding a new MetaMode gives the implementation
>>    a central place to perform any needed validation, and then you have
>>    a higher likelihood of later being able to fulfill the request to
>>    switch to that MetaMode.
>
> Sorry, I can't see how this makes it 'more likely'. The 'usual' way to
> set a configuration is to turn everything off, set the screen size and
> then add each crtc/output combination. As these appear incrementally,
> each partial setup may require dramatic re-configuration of the
> hardware, but nothing more complicated than your metamode notions.

My concern is that each incremental step requires reconfiguration and
revalidation, so each step is a potential point of failure.  Whereas,
if the complete desired configuration had been specified earlier with
an RRAddScreenConfiguration-like API, then (most? all?) of the
validation would already have been done before we start applying any
part of the new configuration.

If we knew the complete configuration before starting the adventure of
applying the new configuration, there should be fewer potential points
of failure once we start applying the new config.

>>  A MetaMode is also a nice abstraction for
>>    backwards compatibility with RandR 1.1 and XF86VidMode -- they just
>>    see the MetaMode as a single mode.  Lastly, keeping multiple complete
>>    screen configurations around makes it easy to return to the previous
>>    config, if the user wants to revert his changes (or you present an
>>    "are these new settings OK?" dialog with a 10 second timeout).
>
> What we could add is some 'save/commit/revert' mechanism so that partial
> reconfigurations interrupted by client termination wouldn't break the
> user environment. I'm not that concerned by this though; there are lots
> of bits of our environment which depend on well-behaved applications.

Yeah, a 'save/commit/revert' mechanism is probably a lot of work just
to handle the abnormal client termination case.

>> - Other minor stuff:
>>
>>      - Should DPI be queriable per-output; I know the core X protocol
>>        provides a single WidthMM, HeightMM per X screen, but it might be
>>        useful to allow aware applications to query the DPI with per-output
>>        granularity.
>
> You'll note that outputs have a mm_width/mm_height value.

That sounds perfect.  However, when I went through the spec, I didn't
see mm_width/mm_height values in the outputs.  I see mm width/height in:

     SCREENSIZE
     MODEINFO
     RRSetScreenSize
     RRScreenChangeNotify

Am I missing where it is also specified for outputs?

Also, does mm_width/mm_height make sense in the MODEINFO?  Isn't the
physical size a function of mode+output?  The same mode could have
drastically different physical size on different outputs.
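For illustration, per-output DPI falls out naturally if the physical
size lives on the output rather than the mode.  The function name and
the millimetre values below are made up; the arithmetic is just
pixels / (mm / 25.4):

```c
/* Per-output DPI, assuming each output (not each mode) reports its
 * physical width in millimetres.  The same pixel width gives a
 * different DPI on each output. */
double output_dpi(int width_px, int mm_width)
{
    return (double)width_px * 25.4 / (double)mm_width;
}
```

The same 1280-pixel-wide mode on a 376 mm desktop panel and a 304 mm
laptop panel yields roughly 86 vs 107 DPI, which is why a single
per-mode (or per-screen) physical size can't be right.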


>>      - The spec should probably be clear that just because the width/height
>>        in a RRSetScreenSize request is within the min/max range reported by
>>        RRGetScreenSizeRange, we're not guaranteed to be able to fulfill
>>        that.  Video memory constraints, or other hardware constraints may
>>        come into play that cannot be reflected completely by the min/max
>>        values reported by RRGetScreenSizeRange (I assume that range is
>>        intended to report the maximum renderable sizes?)
>
> No, all widths/heights are always configurable -- it's just clipping
> after all. In the XAA-implementation, we don't resize the frame buffer
> at all, we just change the root window clip list (yes, this sucks).

OK.

>>      - I believe TwinView/MergedFB is different than Xinerama, and
>>        that they solve slightly different problems.  RandR 1.2 solves
>>        the problem of querying/configuring outputs on an X screen on a
>>        single GPU.  But having a Xinerama X screen across multiple GPUs
>>        is slightly different -- if each GPU stores only a portion of the
>>        entire Xinerama X screen, the outputs connected to that GPU are
>>        limited in which portions of the Xinerama X screen they can display.
>
> You're conflating the protocol extension Xinerama with the DIX-level
> code which can be used to support that extension (and uses the same
> name). Yes, we still need the DIX-level code to span GPUs, and, no, I
> haven't bothered to make that all work together nicely yet.
>
> I don't like the DIX-level code as it is horribly inefficient, but
> without major restructuring of the DIX/DDX interface, we can't do a lot
> better at this point.
>
> I'm afraid its a bit of 'someone else's problem' at this point as I
> can't plug multiple Intel graphics cards into a machine. If you're
> interested in working out how the DIX-level Xinerama code could be used
> with RandR 1.2, I'd love to help out.

OK.  I'll leave this topic alone for the short term, since it seems it
will be a contentious issue.  Definitely not something for RandR 1.2;
I was just trying to identify if the 1.2 spec would limit multi-GPU X
screens in a future version of the RandR spec.

X screens spanning GPUs is a hot topic for NVIDIA customers, so I'll
try to pick this up.  Maybe I can organize an XDevConf '07 talk on this.

>>        This is a post-1.2 RandR issue, but my vote would be to expose
>>        the underlying physical X screens through RandR when Xinerama
>>        is enabled.
>
> Right, Xinerama should disappear and be replaced with RandR, which can
> expose the mixture of underlying GPU/Outputs.
>
>>      - Should a future version of the RandR spec allow clients to talk
>>        in terms of the physical X screens underlying a Xinerama X screen?
>>        If so, then perhaps the requests
>
> Yeah, for direct rendering support, we may need to expose the GPU as
> well. For core 2D stuff, I don't see a reason.
>
> In particular:
>
>>          RRGetScreenSizeRange
>>          RRSetScreenSize
>>          RRGetScreenResources
>>          RRCreateMode
>
> would talk about the global multi-GPU 'screen' and not the per-GPU frame
> buffer.
>
> Without exposing the GPU itself, we may have a harder time dealing with
> CRTC positioning though; the GPU code would have to dynamically
> reallocate the frame buffer as CRTCs moved around within its space. Ick.
>
>>        should take a screen index, rather than a WINDOW?  I assume in
>>        Xinerama there is one Window for the entire Xinerama X screen,
>>        whereas we may want finer granularity in the future?
>
> I think the global RandR structures should talk about a single screen
> and that if we need to expose the underlying GPU for frame buffer
> allocation and direct rendering issues, we can add that information
> separately.
>
> Thanks much for your careful review. I like your notion of listing modes
> per output, and except for clone mode it would be nice. If the existing
> preferred modes mechanism isn't sufficient to support your 'auto-select'
> stuff, let's come up with a property convention that you can use for
> this.

I'll have to do a bit more research into the existing preferred modes
mechanism, but I think the current per-screen modelist is workable.

> Also, let me know if I've missed something in the metamodes discussion,
> from what I can see, it wouldn't add any additional error reporting
> (which would be nice), but some kind of mechanism to revert to a
> previous mode on client crash might be nice. A similar mechanism to deal
> with video-mode switching games would also be useful.

Hopefully I've clarified the MetaMode stuff.  Rather than incrementally
change various parameters of the current configuration, the MetaMode
approach would give clients a way to manage a list of screen
configurations (encapsulating screen size, modes on each output, position
of each mode, etc).  This would give the implementation a chance to do
any cross-output validation before any changes were actually applied to
the hardware.  By itself, the MetaMode approach doesn't add any extra
error reporting, but if we could figure out how to express the errors
(enum list, even return a non-localized error string... ick), this would
be the central place where those sorts of errors would be generated.

Then, changing the current screen configuration could be an atomic
operation; I think this would address the concern about abnormal
termination of the client.

> A similar mechanism to deal
> with video-mode switching games would also be useful.

What do you mean by that?  What does an RandR 1.2 implementation look
like to a video-mode switching game using XF86VidMode or RandR 1.1?
Does such a client effectively just see the mode on the "first" output?
The MetaMode approach would make the entire screen configuration look
like one mode to XF86VidMode or RandR 1.1 clients.

Thanks,
- Andy


> -- 
> keith.packard at intel.com
>
