[PATCH] randr: add provider object (v7)
Dave Airlie
airlied at gmail.com
Tue Jul 3 12:04:43 PDT 2012
On Tue, Jul 3, 2012 at 7:01 PM, Keith Packard <keithp at keithp.com> wrote:
> Dave Airlie <airlied at gmail.com> writes:
>
>> The current problem is I'm not sure any OS exposes muxless and mux
>> in one OS. Mac OSX always uses muxed, Vista the same, and I think
>> Windows 7 always exposes muxless if the bios reports optimus support
>> or the AMD equivalent.
>
> So we pick the simplest mux option that supports known Apple hardware,
> which appears to use a global mux, at least as far as we can tell.
>
>> That would again be a Xinerama-style situation, though a lot more extreme,
>> and this isn't a use case I have expressed any interest in supporting. I've
>> explicitly limited the use cases for this to avoid any sinkholes. There
>> are loads of what-if scenarios here, but I'm only interested in getting
>> Linux to the level that Windows has been at for years, and any diversions
>> just push that timeframe out.
>
> Sounds good. One GPU at a time is certainly easier to manage
> today.
>
>> Again, a corner case that you might think is useful, but I'm not sure I've
>> seen any indication of how we'd ever support or expose this from a desktop
>> environment.
>
> Oh, I can imagine it -- DRI2 could provide a list of GPUs, the first of
> which was the 'preferred' one. Something for a later adventure though.
Oh, we have GL_AMD_gpu_association to deal with at some point; that pretty
much lets you list all the attached GL GPUs and create a context on a
specific one.
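Roughly how that looks from the client side, going by the GLX flavour of the
extension spec (a sketch only; the prototypes below are written from memory
of the AMD_gpu_association spec rather than copied from any header):

/*
 * Sketch: enumerate GL GPUs and create a context on a specific one with
 * GLX_AMD_gpu_association.  Prototypes mirror the extension spec; check
 * glxext.h on a real system before relying on them.
 */
#include <stdio.h>
#include <GL/glx.h>

typedef unsigned int (*PFN_GETGPUIDSAMD)(unsigned int maxCount, unsigned int *ids);
typedef GLXContext   (*PFN_CREATEASSOCCTXAMD)(unsigned int id, GLXContext shareList);
typedef Bool         (*PFN_MAKEASSOCCTXCURRENTAMD)(GLXContext ctx);

int main(void)
{
    PFN_GETGPUIDSAMD getGPUIDs = (PFN_GETGPUIDSAMD)
        glXGetProcAddressARB((const GLubyte *)"glXGetGPUIDsAMD");
    PFN_CREATEASSOCCTXAMD createCtx = (PFN_CREATEASSOCCTXAMD)
        glXGetProcAddressARB((const GLubyte *)"glXCreateAssociatedContextAMD");
    PFN_MAKEASSOCCTXCURRENTAMD makeCurrent = (PFN_MAKEASSOCCTXCURRENTAMD)
        glXGetProcAddressARB((const GLubyte *)"glXMakeAssociatedContextCurrentAMD");

    if (!getGPUIDs || !createCtx || !makeCurrent)
        return 1;   /* the driver doesn't expose the extension */

    unsigned int ids[8];
    unsigned int count = getGPUIDs(8, ids);      /* list all attached GL GPUs */
    printf("%u GL GPUs reported\n", count);

    if (count > 0) {
        /* create and bind a context tied to the first GPU in the list */
        GLXContext ctx = createCtx(ids[0], NULL);
        if (ctx)
            makeCurrent(ctx);
    }
    return 0;
}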
> I thought you were already providing this in the form of an off-load GPU
> though. Are you saying that when you enable an off-load GPU, you disable
> the internal GPU?
I only enable the offload GPUs as the discrete devices, since they have the
hw to move data to the IGP at a decent pace. I'm not sure I see any use in
switching to running the nvidia as the primary and having the intel be the
offload GPU; in that case, once we have GPU switching, I'd just use the
intel as an output slave for the IGP outputs.
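The buffer movement underneath that is just dma-buf/PRIME sharing between the
two DRM devices; something like the following (a sketch only; the device
paths and the GEM handle are placeholders, real code gets them from the
driver that rendered the frame):

/*
 * Sketch: pass a buffer from the discrete (render) GPU to the IGP
 * (display) side with dma-buf/PRIME via libdrm.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <xf86drm.h>

int main(void)
{
    int render_fd  = open("/dev/dri/card1", O_RDWR); /* placeholder: discrete GPU */
    int display_fd = open("/dev/dri/card0", O_RDWR); /* placeholder: IGP */
    if (render_fd < 0 || display_fd < 0)
        return 1;

    uint32_t render_handle = 1;  /* placeholder GEM handle of the rendered buffer */
    int prime_fd = -1;

    /* Export the buffer from the render GPU as a dma-buf file descriptor... */
    if (drmPrimeHandleToFD(render_fd, render_handle, DRM_CLOEXEC, &prime_fd) == 0) {
        /* ...and import it on the display GPU, which can then composite or
         * scan it out. */
        uint32_t display_handle;
        if (drmPrimeFDToHandle(display_fd, prime_fd, &display_handle) == 0)
            printf("shared buffer, display-side handle %u\n", display_handle);
        close(prime_fd);
    }

    close(render_fd);
    close(display_fd);
    return 0;
}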
>
> Right, what I was suggesting was that a 'master' provider is actually
> just the union of an 'output slave' and a 'renderer'. You're already
Yeah, I could probably then add a renderer option, though we don't want
certain things being output slaves. The nvidia can be a renderer with
outputs, but I can see no reason we'd ever want it to act as an output
slave, because that would mean rendering on the IGP and then copying stuff
into the nvidia's VRAM, which the Optimus design isn't really optimized for.
However, I could be persuaded to change my mind on this; again, I just see
it as protecting the user from doing something they shouldn't.
>
>> Again, we've currently got no idea how to do this, no manufacturer support
>> or instructions. So I'm wary of introducing protocol for something I've
>> no indication will ever be useful.
>
> Are you saying that when you throw the mux switch that *all* of the
> outputs from the disabled output slave are turned off? That's certainly
> simpler than allowing some outputs to continue operating while others
> are disabled. I'm pretty sure there are laptops where the external
> monitor ports are connected only to the external GPU while the internal
> panel can be switched, and I was trying to find a way to describe that
> to clients so they would know which outputs were going to be disabled
> when the mux was thrown.
No, the MUX is attached to 2 out of 4 outputs in some cases, and to all 4 in
other cases. Generally I've seen LVDS + VGA be muxed, but other machines have
all the outputs muxed. Again, that's assuming you can find the magic table
and it's accurate. I've no idea how Thunderbolt is dealt with wrt multi-GPUs.
On my laptop the docking station's outputs are only wired to the nvidia,
except for some reason their EDID lines go to the intel as well, which is
confusing. I'm not really sure how we can expose this to clients; I think it
could be done as an addendum to this, a new GetOutputConfiguration request
or something, and as such I'd rather specify it once I have a sample set of
> 1 piece of hw ;-)
>>>
>>> * An output-only device (display link) would have hasGPU clear.
>>>
>>> * A GPU with no outputs (muxless external GPU) would have no crtcs or
>>> outputs.
>>>
>>> * A 'master' device would have isPrimaryGPU and isActiveOutput both
>>> set.
>>>
>>> * An 'output slave' device would have isPrimaryGPU clear and
>>> isActiveOutput set
>>>
>>> * A rendering off-load engine would have neither set.
>>>
>>> * Setting 'isPrimaryGPU' and clearing 'isActiveOutput' would also work,
>>> allowing for a muxless system to GPU-offload video and use the
>>> overlays while using the external GPU for GL content.
>>
>> I think I like this up until the last one,
>
> Cool. Just an attempt to try and describe things in a slightly more
> orthogonal fashion.
>
>> I'm still totally unsure of the last use case, what it's for.
>
> Are you saying it won't work, or that people won't use it?
I can't see what the use case for it would be; displaying video on the intel
while rendering the desktop on the nvidia isn't going to save you much power.
You'd probably just turn the nvidia off in that case, while you are watching
your video.
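Just so we're reading the combinations the same way, here's how I'd summarise
the two flags from your list (purely illustrative C, following your names,
not the actual RandR encoding):

/* Illustrative only: how the two proposed flags map onto provider roles. */
#include <stdbool.h>
#include <stdio.h>

struct provider_caps {
    bool isPrimaryGPU;
    bool isActiveOutput;
};

static const char *provider_role(struct provider_caps c)
{
    if (c.isPrimaryGPU && c.isActiveOutput)
        return "master (renders and drives outputs)";
    if (!c.isPrimaryGPU && c.isActiveOutput)
        return "output slave (scanout only)";
    if (!c.isPrimaryGPU && !c.isActiveOutput)
        return "rendering off-load engine";
    /* isPrimaryGPU set, isActiveOutput clear: the muxless case above,
     * GL on the discrete GPU with video/overlays on the IGP -- the one
     * I'm not yet convinced we need. */
    return "primary renderer without active outputs";
}

int main(void)
{
    struct provider_caps master = { true, true };
    printf("%s\n", provider_role(master));
    return 0;
}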
>> You seem to be advocating for some 3rd scenario, where the IGP is slaved
>> to the discrete GPU, so the discrete controls rendering/compositing, but
>> we can expose the overlays on the IGP somehow. My thinking on this is
>> probably to wait for Wayland, since I've no idea how we'd even do that on
>> X now, or maybe I've totally missed the scenario.
>
> Ignoring the overlays, which do seem complicated, aren't we already
> going to be in this situation wrt the GPU? Is the compositing manager
> going to restart and switch to the discrete GPU when we make that the
> default for new applications?
Yes, when we do GPU switching (1.14) I'll have to implement
GLX_ARB_create_context_robustness, so I can tell the GL compositors that
they've lost their context and need to restart. When we GPU switch, all apps
jump to the new GPU. That code isn't ready yet, but I've got a preview based
on all this:

http://cgit.freedesktop.org/~airlied/xserver/log/?h=gpu-switch-101

Essentially in that case we track all GCs/pixmaps/pictures, abstract the
protocol/GPU screen further, and add an impedance layer. It's worked before:
gnome-shell crashes when DRI2 closes all its connections, then restarts on
the second GPU :-) Yes, robustness is a TODO.
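The compositor side of that is standard ARB robustness usage, roughly like
this (a sketch, not code from the branch):

/* Ask for reset notification at context creation, then poll the reset
 * status and restart when the context is lost, e.g. across a GPU switch. */
#include <GL/glx.h>
#include <GL/glxext.h>
#include <GL/glext.h>

GLXContext create_robust_context(Display *dpy, GLXFBConfig fbc, GLXContext share)
{
    PFNGLXCREATECONTEXTATTRIBSARBPROC createContextAttribs =
        (PFNGLXCREATECONTEXTATTRIBSARBPROC)
        glXGetProcAddressARB((const GLubyte *)"glXCreateContextAttribsARB");
    if (!createContextAttribs)
        return NULL;

    const int attribs[] = {
        GLX_CONTEXT_RESET_NOTIFICATION_STRATEGY_ARB,
        GLX_LOSE_CONTEXT_ON_RESET_ARB,
        None
    };
    return createContextAttribs(dpy, fbc, share, True, attribs);
}

int context_was_lost(void)
{
    PFNGLGETGRAPHICSRESETSTATUSARBPROC getResetStatus =
        (PFNGLGETGRAPHICSRESETSTATUSARBPROC)
        glXGetProcAddressARB((const GLubyte *)"glGetGraphicsResetStatusARB");

    /* Any non-zero status means the context has been reset and the
     * compositor needs to tear down and recreate its GL state. */
    return getResetStatus && getResetStatus() != GL_NO_ERROR;
}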
Dave.
> --
> keith.packard at intel.com