[PATCH] randr: add provider object (v7)

Dave Airlie airlied at gmail.com
Tue Jul 3 04:57:59 PDT 2012


(forgot list first time I sent this).

On Mon, Jul 2, 2012 at 10:03 PM, Keith Packard <keithp at keithp.com> wrote:
> Dave Airlie <airlied at gmail.com> writes:
>
>> On Mon, Jul 2, 2012 at 8:54 PM, Keith Packard <keithp at keithp.com> wrote:
>>> Dave Airlie <airlied at gmail.com> writes:
>>>
>>>
>>>> Yes, that's going to be the standard on switchable GPU machines: two masters
>>>> and the ability to jump between them. In that case max master is one, and
>>>> you'd have to set the provider roles. If maxmaster > 1 then xinerama
>>>> emulation is available.
>>>
>>> In those machines, isn't the limitation that only one of them can drive
>>> the LVDS panel at a time? Can't you have the internal GPU driving the
>>> LVDS while the external GPU drives other outputs?
>>
>> Yes, but you don't configure it in xinerama mode for that; there are
>> mux and muxless configurations.
>
> I think I understand what the hardware does now, just trying to figure
> out how to provide a reasonable description of that to applications
> while not just providing a big 'muxless/muxed' switch, which seems
> restricted to precisely how the hardware that we have works today.
>
>> In mux configuration, you switch the mux between GPUs when the master
>> is switched.
>
> Right, the mux just rewires things so that the other GPU is hooked up to
> the LVDS. I'd expect the LVDS outputs to reflect a suitable connection
> status for these changes.
>
> The only question is how you drive the mux switch. Is this switch
> selectable per-output? Or is it global? And, how do we label which
> outputs are affected by a global switch?

We don't really know with 100% certainty, since the specs for all these
things are closed. We've done a lot of RE work, and it mostly appears
to be a single global switch that moves every connected output at once.
There is a table in the Intel BIOS which can tell you which outputs are
muxed etc., but it isn't always present. We also have laptops that have
a mux but don't expose this table, because the mux only exists so the
BIOS can pick IGP/discrete for Vista, while Windows 7 operates in
muxless mode.
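
For reference, about the only control we have over the mux today is the
kernel's vga_switcheroo debugfs interface, and it models exactly this kind
of single global switch. A minimal userspace sketch (needs root, error
handling mostly elided):

    /*
     * Drive the global mux via vga_switcheroo.  "IGD" selects the
     * integrated GPU, "DIS" the discrete one; "DIGD"/"DDIS" defer the
     * switch until the X server releases the device.
     */
    #include <stdio.h>

    static int mux_switch(const char *cmd)
    {
        FILE *f = fopen("/sys/kernel/debug/vgaswitcheroo/switch", "w");

        if (!f)
            return -1;      /* no mux, or debugfs not mounted */
        fprintf(f, "%s\n", cmd);
        return fclose(f);
    }

    /* e.g. mux_switch("DIS") hands every muxed output to the discrete
     * GPU at once; there is no per-output control. */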

The current problem is that I'm not sure any single OS exposes both muxed
and muxless operation. Mac OS X always uses muxed, Vista the same, and I
think Windows 7 always uses muxless if the BIOS reports Optimus support
or the AMD equivalent.

>> In muxless, you go from having the Intel connected to the LVDS as the
>> master, to the nvidia being the master with the intel being an output
>> slave driving the LVDS.
>
> Right, the 'master' bit *also* selects which GPU applications will end
> up using by default. And, in this mode, really the only thing the
> 'master' bit does is select which rendering engine new applications
> will end up talking to. Presumably, applications could continue to use
> the Intel GPU as a rendering slave somehow? It's pretty easy to imagine
> wanting to use the on-board graphics for rendering video content while
> the nVidia GPU is busy with OpenGL.

That would again be a xinerama-style situation, though a lot more extreme,
and it isn't a use case I have expressed any interest in supporting. I've
explicitly limited the use cases for this to avoid sinkholes. There are
loads of what-if scenarios here, but I'm only interested in getting
Linux to the level Windows has been at for years, and any diversions
just push that timeframe out.

> So, much like the current RandR 'primary' output, we'll still need a
> provider which is marked as being that which a normal GL application
> will target for rasterizing.

Again, a corner case that you might think is useful, but I've seen no
indication of how we'd ever support or expose this from a desktop
environment.

>
>> Because we want to expose GPU-level properties on the provider. Also,
>> currently the offload slave isn't a rendering slave per se; there are
>> no circumstances where we want to use the IGP as an offload slave.
>> It's simply not a configuration we should be exposing, and I see no
>> reason to advertise it.
>
> Not for GL, no. But, for video, yes. Especially as the IGP will have an
> overlay that the IGP video rasterizer can target for zero-copy operation...

Again, most people want to use the nvidia for video; vdpau kicks the ass
of everything else. At least under Mac OS X they just switch off the intel
when the nvidia is running, and under Windows they don't expose it to
the OS anymore. I'm not sure I want to start adding use cases that nobody
has explored; it will just lead into ratholes.

>> I don't think it would represent reality though, and I'd like to keep
>> a provider representing a whole GPU.
>
> Right, a GPU can have two pieces -- rasterizer and scanout engine, and
> you can enable them separately.

I'm staying adamant that one provider is one GPU; however, I could accept
that splitting the master role into two roles and allowing GPUs to hold
multiple roles at once might make sense.
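
To make that concrete, a rough sketch of what split roles might look like;
the names are hypothetical and nothing like this is in the v7 patch:

    /* Hypothetical split of the single "master" role into independent
     * bits a provider can hold at once; not what the v7 patch does. */
    #define PROVIDER_ROLE_RENDER  (1 << 0)   /* owns the default renderer */
    #define PROVIDER_ROLE_OUTPUT  (1 << 1)   /* owns scanout/outputs      */

    /* Today's "master" is simply both bits at once: */
    #define PROVIDER_ROLE_MASTER  (PROVIDER_ROLE_RENDER | PROVIDER_ROLE_OUTPUT)

    /* A muxless switch then moves RENDER to the discrete GPU while the
     * IGP keeps OUTPUT for the panel it is physically wired to, e.g.:
     *
     *     igp->roles      = PROVIDER_ROLE_OUTPUT;
     *     discrete->roles = PROVIDER_ROLE_RENDER;
     */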

>
>> Not sure what you mean by conflicting outputs. Do you mean muxed ones,
>> where we'd have an LVDS on each GPU? Or do you mean crazy connectors
>> where the EDID is wired to the intel but the data lines go to the
>> nvidia?
>
> Right, any connector/monitor which can be switched between two OUTPUTs
> would want to be marked as conflicting between the two providers so that
> only one of them would be enabled at a time by applications.

Again, we've currently got no idea how to do this: no manufacturer support
or instructions. So I'm wary of introducing protocol for something I have
no indication will ever be useful.

>
>> Well, the thing is, if someone rips the cable out, stuff goes away; you
>> don't get to order it.
>
> That's the other order, which of course will 'work'. No, the case I'm
> thinking of is when software wants to switch the mode of an output
> provider, it will need to first disable all of the CRTCs and OUTPUTs on
> that provider.
>
> Here's a summary of how I think things might work:
>
>         OUTPUTCONFIG
>                 output: OUTPUT
>                                 The ID of the output from the global
>                                 list for this screen
>                 conflict: LISTofOUTPUT
>                                 These outputs will be disabled if this
>                                 provider has isActiveOutput set.
>
>         PROVIDER
>                 crtcs: LISTofCRTC
>                 outputs: LISTofOUTPUTCONFIG
>                                 List of output resources controlled by
>                                 this provider
>                 hasGPU
>                                 Whether this provider also has a
>                                 rendering engine.
>                 isPrimaryGPU
>                                 Whether this provider is the default
>                                 rendering engine for new applications
>                 isActiveOutput
>                                 Whether all outputs for this provider
>                                 are enabled while all conflicting
>                                 outputs are disabled.
>
>         Now, you get functions like:
>
>         SetProviderPrimaryGPU(pScreen, provider, isPrimaryGPU)
>
>                 Makes the X server report this GPU as the default
>                 rendering device. Applications not selecting an
>                 explicit renderer will use this one.
>
>         SetProviderActiveOutput(pScreen, provider, isActiveOutput)
>
>                 All outputs associated with this provider become
>                 'active'. This disables any outputs which conflict
>                 with the outputs for this provider.
>
> If we want to be able to individually control 'isActiveOutput' on a
> per-monitor basis, we'd move that, but then we'd have more information
> to describe systems where all of the outputs are controlled by a single
> switch. I don't know how things 'usually' work, though.

There is no "usually", but Apple hardware is always muxed, while PC
hardware is tending towards always being muxless. I'm not sure you can
find a new muxed PC anymore.
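
For concreteness, here is your sketch above rendered as rough C; the types
and names are illustrative only and don't correspond to anything in the
v7 patch:

    /* Illustrative C rendering of the OUTPUTCONFIG/PROVIDER sketch
     * above; all names are hypothetical, not from the v7 patch. */
    typedef unsigned long XID;
    typedef XID RRCrtc;
    typedef XID RROutput;
    typedef int Bool;
    typedef struct _Screen ScreenRec;       /* opaque server screen */

    typedef struct {
        RROutput   output;      /* ID from the screen's global output list */
        int        nConflict;
        RROutput  *conflicts;   /* disabled whenever isActiveOutput is set */
    } OutputConfig;

    typedef struct {
        int            nCrtc;
        RRCrtc        *crtcs;
        int            nOutput;
        OutputConfig  *outputs;        /* outputs this provider controls   */
        Bool           hasGPU;         /* has a rendering engine at all    */
        Bool           isPrimaryGPU;   /* default renderer for new clients */
        Bool           isActiveOutput; /* its outputs win over conflicts   */
    } Provider;

    /* Your setters, as prototypes: */
    void SetProviderPrimaryGPU(ScreenRec *pScreen, Provider *provider,
                               Bool isPrimaryGPU);
    void SetProviderActiveOutput(ScreenRec *pScreen, Provider *provider,
                                 Bool isActiveOutput);

A muxless master switch then becomes: disable the old provider's CRTCs as
you describe, call SetProviderActiveOutput on the new provider, and flip
SetProviderPrimaryGPU to match.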

>
> To map back to your model:
>
>  * An output-only device (display link) would have hasGPU clear.
>
>  * A GPU with no outputs (muxless external GPU) would have no crtcs or
>    outputs.
>
>  * A 'master' device would have isPrimaryGPU and isActiveOutput both
>    set.
>
>  * An 'output slave' device would have isPrimaryGPU clear and
>    isActiveOutput set.
>
>  * A rendering off-load engine would have neither set.
>
>  * Setting 'isPrimaryGPU' and clearing 'isActiveOutput' would also work,
>    allowing for a muxless system to GPU-offload video and use the
>    overlays while using the external GPU for GL content.

I think I like this up until the last one.

I'm still totally unsure about that last use case and what it's for.

The thing is, if you just have the laptop panel connected to the IGP,
then to save power you only want to power up the discrete GPU for
specific GL rendering tasks, i.e. not desktop compositing but games.
In that case you'd just use the IGP overlay as normal.

If you do a GPU switch to use the laptop panel's IGP as a slave to the
discrete GPU, you can just as easily use the discrete video engine,
especially since it's most likely connected to HDMI or DP, and since
you are most likely docked and plugged in, or don't care about power.

You seem to be advocating for some third scenario, where the IGP is
slaved to the discrete GPU, which controls rendering/compositing, but we
somehow expose the overlays on the IGP. My thinking on this is to wait
for wayland, since I've no idea how we'd even do that on X now; or maybe
I've totally missed the scenario.

Dave.

