[PATCH] randr: add provider object (v7)
Keith Packard
keithp at keithp.com
Mon Jul 2 14:03:04 PDT 2012
Dave Airlie <airlied at gmail.com> writes:
> On Mon, Jul 2, 2012 at 8:54 PM, Keith Packard <keithp at keithp.com> wrote:
>> Dave Airlie <airlied at gmail.com> writes:
>>
>>
>>> Yes, that's going to be the standard on switchable GPU machines: two masters
>>> and the ability to jump between them. In that case maxmaster is one, and you'd
>>> have to set the provider roles. If maxmaster > 1 then Xinerama
>>> emulation is available.
>>
>> In those machines, isn't the limitation that only one of them can drive
>> the LVDS panel at a time? Can't you have the internal GPU driving the
>> LVDS while the external GPU drives other outputs?
>
> Yes, but you don't configure it in Xinerama mode for that; there are
> mux and muxless configurations.
I think I understand what the hardware does now; I'm just trying to
figure out how to provide a reasonable description of it to applications
without simply exposing a big 'muxless/muxed' switch, which seems
tied to precisely how the hardware we have works today.
> In mux configuration, you switch the mux between GPUs when the master
> is switched.
Right, the mux just rewires things so that the other GPU is hooked up to
the LVDS. I'd expect the LVDS outputs to reflect a suitable connection
status for these changes.
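
Clients would observe that flip through the usual connection status; a
minimal sketch using the existing libXrandr API (after the switch, the
LVDS output on the old provider should report disconnected and the one
on the new provider connected):

#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

static void
show_connections(Display *dpy, Window root)
{
    XRRScreenResources *res = XRRGetScreenResources(dpy, root);
    int i;

    for (i = 0; i < res->noutput; i++) {
        XRROutputInfo *info = XRRGetOutputInfo(dpy, res, res->outputs[i]);

        printf("%s: %s\n", info->name,
               info->connection == RR_Connected ? "connected" :
               info->connection == RR_Disconnected ? "disconnected" :
               "unknown");
        XRRFreeOutputInfo(info);
    }
    XRRFreeScreenResources(res);
}
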
The only question is how you drive the mux switch. Is this switch
selectable per-output? Or is it global? And how do we label which
outputs are affected by a global switch?
> In muxless, you go from having the Intel connected to the LVDS as the
> master, to the nvidia being the master with the intel being an output
> slave driving the LVDS.
Right, the 'master' bit *also* selects which GPU applications will end
up using by default. And, in this mode, really the only thing the
'master' bit is doing is selecting which rendering engine new applications
will end up talking to. Presumably, applications could continue to use
the Intel GPU as a rendering slave somehow? It's pretty easy to imagine
wanting to use the on-board graphics for rendering video content while
the nVidia GPU is busy with OpenGL.
So, much like the current RandR 'primary' output, we'll still need a
provider which is marked as the one a normal GL application will
target for rasterizing.
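
For comparison, the per-output version of this already exists: RandR 1.3
lets a client pick the primary output through libXrandr, and a primary
provider would play the same role for picking the default rasterizer:

#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

/* RandR 1.3: mark one output as primary; the provider proposal would
 * add an analogous per-provider notion for rendering. */
static void
set_primary_output(Display *dpy, Window root, RROutput output)
{
    XRRSetOutputPrimary(dpy, root, output);
}
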
> Because we want to expose GPU-level properties on the provider. Also,
> currently the offload slave isn't a rendering slave per se; there are
> no circumstances where we want to use the IGP as an offload slave.
> It's simply not a configuration we should be exposing, and I see no
> reason to advertise it.
Not for GL, no. But, for video, yes. Especially as the IGP will have an
overlay that the IGP video rasterizer can target for zero-copy operation...
> I don't think it would represent reality though, and I'd like to keep
> a provider representing a whole GPU.
Right, a GPU can have two pieces -- rasterizer and scanout engine, and
you can enable them separately.
> Not sure what you mean by conflicting outputs. Do you mean mux'ed ones
> where we'd have an LVDS on each GPU? Or do you mean crazy
> connectors where the EDID is wired to the intel, but the data lines go
> to the nvidia?
Right, any connector/monitor which can be switched between two OUTPUTs
would want to be marked as conflicting between the two providers so that
only one of them would be enabled at a time by applications.
> Well, the thing is, if someone rips the cable out, stuff goes away;
> you don't get to order it.
That's the other order, which of course will 'work'. No, the case I'm
thinking of is that when software wants to switch the mode of an output
provider, it will need to first disable all of the CRTCs and OUTPUTs on
that provider.
Here's a summary of how I think things might work:

OUTPUTCONFIG
    output: OUTPUT
        The ID of the output from the global
        list for this screen
    conflict: LISTofOUTPUT
        These outputs will be disabled if this
        provider has isActiveOutput set.

PROVIDER
    crtcs: LISTofCRTC
    outputs: LISTofOUTPUTCONFIG
        List of output resources controlled by
        this provider
    hasGPU
        Whether this provider also has a
        rendering engine.
    isPrimaryGPU
        Whether this provider is the default
        rendering engine for new applications.
    isActiveOutput
        Whether all outputs for this provider
        are enabled while all conflicting
        outputs are disabled.
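
In C, that might look something like the following. This is purely
illustrative; the names are mine, not the eventual randrproto encoding:

#include <X11/Xlib.h>
#include <X11/extensions/randr.h>

typedef struct {
    RROutput  output;       /* ID from the screen's global output list */
    int       nconflict;
    RROutput *conflicts;    /* disabled while this provider is active */
} OutputConfig;

typedef struct {
    int           ncrtc;
    RRCrtc       *crtcs;
    int           noutput;
    OutputConfig *outputs;        /* output resources this provider controls */
    Bool          hasGPU;         /* also has a rendering engine */
    Bool          isPrimaryGPU;   /* default renderer for new applications */
    Bool          isActiveOutput; /* outputs enabled, conflicts disabled */
} Provider;
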
Now, you get functions like:

SetProviderPrimaryGPU(pScreen, provider, isPrimaryGPU)
    Makes the X server report this GPU as the default
    rendering device. Applications not selecting an
    explicit renderer will use this one.

SetProviderActiveOutput(pScreen, provider, isActiveOutput)
    All outputs associated with this provider become
    'active'. This disables any outputs which conflict
    with the outputs for this provider.
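
A rough sketch of the SetProviderActiveOutput semantics, using the
structs above (pScreen omitted for brevity; enable_output and
disable_output are hypothetical stand-ins for the server's real
output enable/teardown paths):

extern void enable_output(RROutput output);
extern void disable_output(RROutput output);

void
SetProviderActiveOutput(Provider *provider, Bool isActiveOutput)
{
    int i, j;

    provider->isActiveOutput = isActiveOutput;
    if (!isActiveOutput)
        return;

    for (i = 0; i < provider->noutput; i++) {
        OutputConfig *oc = &provider->outputs[i];

        /* Shut down everything that conflicts with this output... */
        for (j = 0; j < oc->nconflict; j++)
            disable_output(oc->conflicts[j]);

        /* ...then light up this provider's output. */
        enable_output(oc->output);
    }
}
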
If we want to be able to control 'isActiveOutput' individually on a
per-monitor basis, we'd move that flag into OUTPUTCONFIG, but then we'd
need more information to describe systems where all of the outputs are
controlled by a single switch. I don't know how things 'usually' work,
though...
To map back to your model (see the sketch after this list):
* An output-only device (display link) would have hasGPU clear.
* A GPU with no outputs (muxless external GPU) would have no crtcs or
outputs.
* A 'master' device would have isPrimaryGPU and isActiveOutput both
set.
* An 'output slave' device would have isPrimaryGPU clear and
isActiveOutput set.
* A rendering off-load engine would have neither set.
* Setting 'isPrimaryGPU' and clearing 'isActiveOutput' would also work,
allowing a muxless system to GPU-offload video and use the
overlays while using the external GPU for GL content.
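
The roles fall straight out of those flag bits; again illustrative,
using the Provider struct sketched earlier:

const char *
provider_role(const Provider *p)
{
    if (!p->hasGPU)
        return "output-only device (e.g. DisplayLink)";
    if (p->ncrtc == 0 && p->noutput == 0)
        return "GPU with no outputs (muxless external GPU)";
    if (p->isPrimaryGPU && p->isActiveOutput)
        return "master";
    if (!p->isPrimaryGPU && p->isActiveOutput)
        return "output slave";
    if (p->isPrimaryGPU)
        return "primary renderer with outputs on another provider";
    return "rendering off-load engine";
}
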
--
keith.packard at intel.com