[PATCH] randr: add provider object (v7)
Keith Packard
keithp at keithp.com
Tue Jul 3 11:01:35 PDT 2012
Dave Airlie <airlied at gmail.com> writes:
> The current problem is I'm not sure any OS exposes muxless and muxed
> on one system. Mac OSX always uses muxed, Vista the same, and I think
> Windows 7 always exposes muxless if the bios reports optimus support
> or the AMD equivalent.
So we pick the simplest mux option that supports known Apple hardware,
which appears to use a global mux, at least as far as we can tell.
> That would again be a xinerama-style situation, though a lot more extreme,
> and this isn't a use case I have expressed any interest in supporting. I've
> explicitly limited the use cases for this to avoid any sink holes. Like
> there are loads of what-if scenarios here, but I'm only interested in getting
> Linux to the level that Windows has been for years, and any diversions
> are just pushing the timeframe for that out.
Sounds good. One GPU at a time is certainly easier to manage
today.
> Again a corner case that you might think is useful but not sure I've
> seen any indication of how we'd ever support or expose this from a desktop
> env.
Oh, I can imagine it -- DRI2 could provide a list of GPUs, the first of
which would be the 'preferred' one. Something for a later adventure though.
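Roughly what I have in mind -- purely a sketch, nothing like this exists
in DRI2 today, and the request name and layout are invented here -- is a
reply that hands back provider XIDs with the preferred GPU first:

    /* Hypothetical sketch only -- not an existing DRI2 request.  The
     * name and field layout are invented for illustration. */
    #include <X11/Xmd.h>            /* CARD8/CARD16/CARD32 */

    typedef struct {
        CARD8   type;               /* X_Reply */
        CARD8   pad0;
        CARD16  sequenceNumber;
        CARD32  length;             /* remaining reply length, 4-byte units */
        CARD32  numGPUs;            /* count of provider XIDs that follow */
        CARD32  pad1, pad2, pad3, pad4, pad5;
        /* followed by numGPUs CARD32 XIDs; list[0] is the preferred GPU */
    } xDRI2GetGPUsReply;            /* hypothetical name */

A client that doesn't care would just take entry 0 and never look at the
rest of the list.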
> Again most people want to use the nvidia for video, vdpau kicks the ass
> of other things, at least under MacOSX they just switch off the intel
> when nvidia is running, and under Windows they don't expose it anymore
> to the OS. I'm not sure I want to start adding use cases that nobody
> has explored, it will just lead into ratholes.
I thought you were already providing this in the form of an off-load GPU
though. Are you saying that when you enable an off-load GPU, you disable
the internal GPU?
> I'm staying adamant that one provider is one GPU; however, I could accept
> that splitting the master role into two roles and allowing GPUs to have
> multiple roles at once might make sense.
Right, what I was suggesting was that a 'master' provider is actually
just the union of an 'output slave' and a 'renderer'. You're already
describing essentially that split by letting a GPU hold multiple roles
at once.
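Just to spell that out (the flag names here are invented, not taken from
the patch):

    /* Sketch: 'master' expressed as the union of the two orthogonal
     * roles rather than as a third, distinct role. */
    #define PROVIDER_ROLE_OUTPUT_SLAVE  (1 << 0)  /* drives crtcs/outputs */
    #define PROVIDER_ROLE_RENDERER      (1 << 1)  /* renders for clients  */
    #define PROVIDER_ROLE_MASTER \
        (PROVIDER_ROLE_OUTPUT_SLAVE | PROVIDER_ROLE_RENDERER)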
> Again we've currently got no idea how to do this, no manufacturer support
> or instructions. So I'm wary of introducing protocol for something
> I've no indications will ever be useful.
Are you saying that when you throw the mux switch, *all* of the
outputs from the disabled output slave are turned off? That's certainly
simpler than allowing some outputs to continue operating while others
are disabled. I'm pretty sure there are laptops where the external
monitor ports are connected only to the external GPU while the internal
panel can be switched, and I was trying to find a way to describe that
to clients so they would know which outputs were going to be disabled
when the mux was thrown.
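Something as small as a per-output hint would do for that -- again just
a sketch with an invented flag name, not anything in the v7 patch:

    #include <X11/Xmd.h>    /* CARD32 */

    /* Hypothetical per-output flag: set when the output is wired through
     * the mux and so goes dark when the mux is switched away from this
     * provider; clear when the connector is wired only to this GPU. */
    #define RR_OUTPUT_BEHIND_MUX  (1 << 0)

    static int
    output_survives_mux_switch(CARD32 output_flags)
    {
        return (output_flags & RR_OUTPUT_BEHIND_MUX) == 0;
    }

That would let a client warn the user which monitors are about to
disappear before it asks for the switch.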
> there is no "usually". But Apple hw is always muxed, and PC hardware
> is tending towards always being muxless. Not sure you can find a new
> muxed PC anymore.
Presumably cheaper this way...
>
>>
>> To map back to your model:
>>
>> * An output-only device (display link) would have hasGPU clear.
>>
>> * A GPU with no outputs (muxless external GPU) would have no crtcs or
>> outputs.
>>
>> * A 'master' device would have isPrimaryGPU and isActiveOutput both
>> set.
>>
>> * An 'output slave' device would have isPrimaryGPU clear and
>>   isActiveOutput set.
>>
>> * A rendering off-load engine would have neither set.
>>
>> * Setting 'isPrimaryGPU' and clearing 'isActiveOutput' would also work,
>> allowing for a muxless system to GPU-offload video and use the
>> overlays while using the external GPU for GL content.
>
> I think I like this up until the last one,
Cool. Just an attempt to describe things in a slightly more orthogonal
fashion.
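For reference, the combinations above written out as a little struct
(the names follow this thread, not necessarily what the protocol will
end up calling them):

    #include <X11/Xdefs.h>     /* Bool */

    /* Sketch of the per-provider bits discussed above. */
    typedef struct {
        Bool hasGPU;           /* clear for output-only devices (displaylink)   */
        Bool isPrimaryGPU;     /* this GPU renders GL for new clients           */
        Bool isActiveOutput;   /* this GPU's crtcs/outputs are driving monitors */
    } ProviderFlagsSketch;

    /* master:           hasGPU=1, isPrimaryGPU=1, isActiveOutput=1 */
    /* output slave:     hasGPU=1, isPrimaryGPU=0, isActiveOutput=1 */
    /* render off-load:  hasGPU=1, isPrimaryGPU=0, isActiveOutput=0 */
    /* muxless, GL on the external GPU while the IGP keeps the
     * outputs and overlays (describing the external GPU):
     *                   hasGPU=1, isPrimaryGPU=1, isActiveOutput=0 */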
> I'm still totally unsure of the last use case, what its for.
Are you saying it won't work, or that people won't use it?
> You seem to be advocating for some 3rd scenario, where the IGP
> is slaved to the discrete GPU, so it controls rendering/compositing,
> but we can expose the overlays on the IGP somehow, my thinking on this
> is probably to wait for wayland, since I've no idea how we'd even do that on
> X now, or maybe I've totally missed the scenario.
Ignoring the overlays, which do seem complicated, aren't we already
going to be in this situation wrt the GPU? Is the compositing manager
going to restart and switch to the discrete GPU when we make that the
default for new applications?
--
keith.packard at intel.com