[PATCH] Revert "Set DamageSetReportAfterOp to true for the damage extension" (#30260)
Aaron Plattner
aplattner at nvidia.com
Wed Oct 20 14:29:33 PDT 2010
On Wed, Oct 20, 2010 at 01:54:35PM -0700, Keith Packard wrote:
> On Wed, 20 Oct 2010 11:35:46 -0700, Aaron Plattner <aplattner at nvidia.com> wrote:
>
> > By sending X protocol to explicitly fence the GL rendering with the X
> > rendering, which is the whole purpose of James's XDamageSubtractAndTrigger
> > request. Otherwise, the driver has to implicitly fence all rendering (and
> > probably all CUDA kernels that interoperate with X) for damage events,
> > which will not perform well.
>
> The rendering we're interested in is coming from an X pixmap -- the
> backing pixmap for a redirected window. That you're using that with GL
> is immaterial; you must talk about the X resource somewhere. Yes, it's
> no longer on the X protocol connection, but as an X API, it seems
> incumbent on the implementation to retain the basic X ordering
> guarantees.
It's GLX_EXT_texture_from_pixmap, which has no such ordering guarantee. It
was intentionally left out of the specification so that we would have time
to work through the details of a proper synchronization extension.
GLX also makes no such guarantees for X drawables that are made current.
This is what glXWaitX is for, except that that's a very heavy hammer.
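As a concrete illustration of that hammer, here is a minimal sketch of how a
texture_from_pixmap compositor can impose X-before-GL ordering today with
glXWaitX. The setup of `dpy`, `glxpixmap`, and `tex` is assumed to happen
elsewhere, and error handling is omitted; in real code glXBindTexImageEXT
must be resolved through glXGetProcAddress.

```c
/* Sketch: forcing X rendering to land before GL samples a redirected
 * window's pixmap.  glXWaitX() blocks GL until all prior X rendering
 * has completed -- correct, but it serializes the two pipelines. */
#include <GL/glx.h>
#include <GL/glxext.h>

void bind_window_texture(Display *dpy, GLXPixmap glxpixmap, GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);

    /* The heavy hammer: synchronize GL with all preceding X requests. */
    glXWaitX();

    /* From GLX_EXT_texture_from_pixmap; resolve via glXGetProcAddress
     * in real code rather than relying on a static prototype. */
    glXBindTexImageEXT(dpy, glxpixmap, GLX_FRONT_LEFT_EXT, NULL);
}
```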
> The Intel DRI driver has specific fencing operations that pend
> operations from one client wrt another client to ensure correct ordering
> in these cases.
Then you're implementing what sounds like a nice vendor-specific feature.
> > In the context of the Damage extension, "future rendering requests" means
> > future *X* rendering requests -- this is how you told me it worked when I
> > asked you about it. Implicitly fencing all other rendering through any API
> > when a Damage event is generated is a spec change that we could make, but
> > it would require discussion.
>
> I don't see it as a change in the specification, just a natural
> consequence of the X ordering guarantees, and as we're dealing only with
> X objects here, I'm not sure why the API used to reference those objects
> is relevant.
>
> > If your driver model can use the ReportAfter semantics to get implicit
> > fencing across APIs, then that seems like something you could enable in the
> > driver via a DamageCreate or DamageRegister hook. It doesn't seem like it
> > belongs in the server.
>
> I guess the point of the change was to ensure that the
> driver *can* serialize access to the hardware correctly, without the
> need for changes in applications. If there is a more efficient
> mechanism, I'd love to hear suggestions.
The driver can do that by enabling ReportAfter from a damage create hook.
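A sketch of what that driver-side opt-in could look like follows.
DamageSetReportAfterOp() is the real miext/damage server API; the
damage-create hook itself is hypothetical, since the point of the
suggestion is that the server would expose such a hook rather than
forcing report-after semantics on every driver.

```c
/* Sketch: a driver opting into report-after semantics per damage
 * object via a (hypothetical) damage-create hook, instead of the
 * server enabling it unconditionally.  DamageSetReportAfterOp() is
 * the real xserver miext/damage API; the hook name is illustrative. */
#include "damage.h"   /* xserver miext/damage */

static void
driverDamageCreateHook(DamagePtr pDamage)
{
    /* This hardware flushes rendering lazily, so delay Damage event
     * delivery until after the operation has been submitted. */
    DamageSetReportAfterOp(pDamage, TRUE);
}
```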
The reason I care is that with a proper synchronization extension, clients
that use it actually benefit from receiving Damage events as early as
possible because they can get started processing their rendering without
having to wait even for X to submit the rendering to the GPU, secure in the
knowledge that the GPU rendering won't actually occur until the X rendering
has landed.
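The client pipeline described above might look roughly like the following
sketch. It assumes James's proposed XDamageSubtractAndTrigger request; the
function name, its signature, and both helper functions are assumptions,
since the request was only a proposal at the time of this thread.

```c
/* Sketch (illustrative only) of a compositor that starts rendering as
 * soon as a Damage event arrives, relying on a fence to keep the GPU
 * work ordered after the X rendering.  XDamageSubtractAndTrigger and
 * the two helpers are hypothetical. */
void compositor_loop(Display *dpy, Damage damage, XSyncFence fence,
                     int damage_event_base)
{
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == damage_event_base + XDamageNotify) {
            /* Subtract the damage and arm a fence the GPU will wait
             * on; the client need not wait for X to flush to the GPU. */
            XDamageSubtractAndTrigger(dpy, damage, None, None, fence);

            queue_gl_wait_on_fence(fence);   /* hypothetical helper */
            redraw_damaged_region();         /* hypothetical helper */
        }
    }
}
```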
Forcing the compositor to wait until the X driver has processed the
request (and especially, at least on our hardware, until the commands have
been flushed to the GPU) reduces the amount of parallelism between the CPU
and the GPU.
> > The flush callback sounds nifty, but it won't solve the problem for
> > multi-queue hardware.
>
> Right, it seems like what you want is to know which drawables have
> delivered damage events so that you can correctly serialize rendering
> from them by other applications using direct rendering.
>
> > The old (and new, with this change) Damage semantics just plain don't work
> > on multi-queue hardware with direct-rendering clients; it's a limitation in
> > the specification that's addressed by James's sync extension.
>
> The patch in question should have *no effect* except in the rare case
> when the events end up being delivered before the rendering is flushed
> to the hardware. Given that, and given that there is no other existing
> mechanism to prevent this race condition in the server code, I'm
> convinced that we cannot remove the change until another solution is
> proposed.
Yes, I agree that there's some additional bug that needs to be fixed,
regardless. What I'm saying is that this patch should be reverted on
principle anyway and moved into the appropriate drivers (or the DRI
module) instead, because it's a vendor-specific requirement that is not
applicable to all drivers and reduces CPU/GPU parallelism.
> Fixing the problem with the compiz plugin should be our priority at this point.
>
> --
> keith.packard at intel.com
More information about the xorg-devel mailing list