Current DRI3 specification
Keith Packard
keithp at keithp.com
Fri Jun 7 10:10:00 PDT 2013
James Jones <jajones at nvidia.com> writes:
> Yeah, I think the semantics are compatible. We allocate these buffers
> on the server-side, but I don't think that affects the interaction with
> Present.
DRI2 allocates them server-side as well for GLX compliance, but with the
way Intel hardware does MSAA, it's just not feasible to allow
multi-process rendering to the same buffers, so I've given up any
pretense of supporting that.
> I've never been fond of the OML triplet because the values don't
> correspond well to the counters/clocks our HW has.
Yeah, I haven't found anyone who likes the OML stuff, but it's the spec
we have...
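For anyone not steeped in it: the OML_sync_control triplet is UST
(Unadjusted System Time), MSC (Media Stream Counter, i.e. the vblank
count) and SBC (Swap Buffer Count), and one request samples all three
together -- that's the linkage I come back to below. A minimal query
sketch, assuming the server advertises the extension:

    #include <stdio.h>
    #include <inttypes.h>
    #include <GL/glx.h>
    #include <GL/glxext.h>

    void print_oml_triplet(Display *dpy, GLXDrawable drawable)
    {
        /* Resolve the extension entry point at run time */
        PFNGLXGETSYNCVALUESOMLPROC get_sync_values =
            (PFNGLXGETSYNCVALUESOMLPROC)
            glXGetProcAddress((const GLubyte *) "glXGetSyncValuesOML");
        int64_t ust, msc, sbc;

        /* A single request samples all three values together */
        if (get_sync_values &&
            get_sync_values(dpy, drawable, &ust, &msc, &sbc))
            printf("ust=%" PRId64 " msc=%" PRId64 " sbc=%" PRId64 "\n",
                   ust, msc, sbc);
    }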
> However, it was
> always the intent that there would be a bunch of external-event
> triggered types of fences added via other extensions (trigger at a given
> timer value, when a certain scanline is reached, triggered while in the
> vblank region or a certain bracketed set of scanlines, etc.)
I'm not sure how general I want to try to make this. As far as I can
understand it, applications want to display no faster than frame rate,
and tear if they go over time on a frame. Just getting to that will
probably be complicated enough without adding the ability to sync to
other mechanisms.
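A sketch of what that could look like at the request level -- treat
this as a shape, not a committed interface; the names assume an xcb
binding for a present request that takes a target MSC to throttle to
frame rate, plus an "async" option to tear instead of waiting when a
frame is late:

    #include <stdint.h>
    #include <xcb/xcb.h>
    #include <xcb/present.h>

    void present_frame(xcb_connection_t *c, xcb_window_t window,
                       xcb_pixmap_t pixmap, uint64_t target_msc,
                       int late)
    {
        /* A late frame tears immediately instead of waiting for
         * the next vblank */
        uint32_t options = late ? XCB_PRESENT_OPTION_ASYNC
                                : XCB_PRESENT_OPTION_NONE;

        xcb_present_pixmap(c, window, pixmap,
                           0,                  /* serial */
                           XCB_NONE,           /* valid region */
                           XCB_NONE,           /* update region */
                           0, 0,               /* x_off, y_off */
                           XCB_NONE,           /* target_crtc */
                           XCB_NONE,           /* wait_fence */
                           XCB_NONE,           /* idle_fence */
                           options,
                           target_msc,         /* don't show before */
                           0, 0,               /* divisor, remainder */
                           0, NULL);           /* no notifies */
    }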
> Perhaps rather than merge these into sync or present, there could be
> small, separate extensions that introduce various new ways to create a
> fence sync object with these properties.
I think Fence objects are completely unrelated to the display system --
they provide a way to serialize GPU access to the underlying render
buffers. Trying to mix those up into more general counters that
provide precise times when buffers get displayed seems confusing to me.
Having them as simple Sync counters might make sense; the chief trouble
there is that OML links the three values together, and Sync doesn't
support that notion.
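To make the distinction concrete, here's how the two existing Sync
primitives differ at the API level (rough sketch; assumes the Sync
extension has already been initialized on the Display):

    #include <X11/Xlib.h>
    #include <X11/extensions/sync.h>

    void sync_objects_example(Display *dpy, Drawable d)
    {
        XSyncValue initial;

        /* A Fence is a one-shot triggered/untriggered boolean,
         * suited to "the rendering for this buffer is done" */
        XSyncFence fence = XSyncCreateFence(dpy, d, False);

        /* A Counter is a 64-bit value, suited to "wait until the
         * value reaches N".  Three independent counters would lose
         * OML's guarantee that UST/MSC/SBC are sampled together. */
        XSyncIntToValue(&initial, 0);
        XSyncCounter counter = XSyncCreateCounter(dpy, initial);

        XSyncDestroyFence(dpy, fence);
        XSyncDestroyCounter(dpy, counter);
    }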
> One such extension could introduce the OML values either as a combined
> fence object, or as 3 separate objects. I had imagined the "present"
> operation would take an arbitrary length ordered list of fence sync
> objects to wait for, which would be passed down to drivers where they
> could be collapsed when possible. For example, if the HW or kernel
> driver supports waiting for the first vblank after a given timer value
> was reached as a single operation, the fence sequence { TIMER, VBLANK
> } could be collapsed into a single HW/kernel wait operation by the
> corresponding X driver.
Do you actually need this, or would it just be 'cool'? I don't have
anyone asking for anything like this, just the simple 'make it pretty'
requirement described above.
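For concreteness, the collapse being proposed would look something
like this -- everything below is hypothetical; no such fence-list
interface exists in any extension today:

    #include <stdint.h>
    #include <stdio.h>

    enum fence_kind { FENCE_TIMER, FENCE_VBLANK };

    struct fence {
        enum fence_kind kind;
        int64_t         value;  /* deadline, vblank count, ... */
    };

    /* Stand-ins for driver/kernel wait primitives */
    static void hw_wait_vblank_after_time(int64_t t)
    { printf("wait: first vblank after t=%lld\n", (long long) t); }

    static void hw_wait_single(const struct fence *f)
    { printf("wait: kind=%d value=%lld\n", f->kind,
             (long long) f->value); }

    /* Walk an ordered fence list, collapsing a { TIMER, VBLANK }
     * pair into one combined wait when the backend supports it */
    static void wait_fence_sequence(const struct fence *seq, int n)
    {
        for (int i = 0; i < n; i++) {
            if (i + 1 < n && seq[i].kind == FENCE_TIMER
                          && seq[i + 1].kind == FENCE_VBLANK) {
                hw_wait_vblank_after_time(seq[i].value);
                i++;            /* consumed both entries */
            } else {
                hw_wait_single(&seq[i]);
            }
        }
    }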
--
keith.packard at intel.com