DRI2 Protocol Spec Draft
krh at bitplanet.net
Mon Sep 15 09:11:45 PDT 2008
On Fri, Sep 12, 2008 at 12:20 PM, Michel Dänzer
<michel at tungstengraphics.com> wrote:
> On Thu, 2008-09-11 at 16:02 -0400, Kristian Høgsberg wrote:
>> On Thu, Sep 11, 2008 at 3:49 PM, Jesse Barnes <jbarnes at virtuousgeek.org> wrote:
>> > On Thursday, September 11, 2008 8:59 am Kristian Høgsberg wrote:
>> >> On Thu, Sep 11, 2008 at 6:40 AM, Michel Dänzer
>> >> <michel at tungstengraphics.com> wrote:
>> >> > On Wed, 2008-09-10 at 14:09 -0400, Kristian Høgsberg wrote:
>> >> >> On Tue, Sep 9, 2008 at 11:46 AM, Keith Packard <keithp at keithp.com> wrote:
>> >> >> > On Tue, 2008-09-09 at 13:27 +0200, Michel Dänzer wrote:
>> >> >> >> For DRI2CopyRegion, you're leaving it to the DDX driver to pick the
>> >> >> >> CRTC to synchronize to? I'm not sure that'll work too well with
>> >> >> >> overlapping viewports, where the user may want to choose which CRTC
>> >> >> >> to synchronize to for each application.
>> >> >> >
>> >> >> > Yeah, I don't see a good way to avoid this, and the client can always
>> >> >> > pass in 'Automatic' (0) and let the server pick the 'right' one.
>> >> >>
>> >> >> Do we need this? When will the client have a better idea of which
>> >> >> pipe a window is on than the X server?
>> >> >
>> >> > Whenever a window is visible on multiple CRTCs (in particular, think
>> >> > fullscreen video/presentation on laptop panel and beamer), only the user
>> >> > can know which CRTC should be synchronized to.
>> >> I don't know that we need to make it this complicated... there is no
>> >> right choice when your window spans two CRTCs. Either you have fancy
>> >> gen-locked hardware in which case it doesn't matter, or your CRTCs
>> >> scan out at different refresh rates, in which case you can't win. If
>> >> the window is completely within one or the other CRTC, the X server
>> >> can pick the right CRTC.
>> > In the sense that you'd get tearing on at least one output that's true.
>> > However, in Michel's example, the user would probably want to sync to the
>> > beamer not the laptop display, if they were doing a multimedia presentation.
>> > So in some cases there definitely is a "right choice". It may be sufficient
>> > to sync everything to one output though, rather than do things on a
>> > per-client basis; I'm not sure if one is easier than the other, design-wise.
>> Ok, I guess what I want to say is: until we have a good story on how
>> the app is going to figure out which display it wants to sync to and
>> tell GLX about it, I'd like to hold off on putting it in the protocol.
> The app isn't required to be actively involved, e.g. there could be a
> driconf option.
If the only way to control this policy is through config files, it
might as well become an xorg.conf option. Or a randr12 property on
the CRTC. Similarly for how to handle missed swap targets - that
could be an xorg.conf option too.
> The second draft says about DRI2AbsoluteSync:
> The client is expected to query the kernel rendering manager for
> the current frame count in order to compute the desired target
> But that isn't possible with the proposed interface and the DRM
> interfaces, which expose independent per CRTC sequence numbers.
> (Ignoring that the target sequence number needs to be calculated from
> the effective sequence number of the previous swap, not from whatever
> sequence number happens to be current when calling DRI2CopyRegion)
>> We can extend the CopyRegion request easily, so let's add the pipe
>> attribute when we have a way of getting that info from the app.
> Given issues like the above, it may indeed be better to only add any
> vsync functionality once the relevant use cases have at least been
> prototyped throughout the stack.
> Will it be possible to add reply values to DRI2CopyRegion though?
DRI2CopyRegion is a one-way request, but to do a complete swap-buffer
sequence, you need a server round trip to make sure the X server has
seen the (one or more) DRI2CopyRegion requests before the client
starts rendering the next frame.  Which is why you need to call
DRI2GetBuffers after submitting the DRI2CopyRegion requests.  And
while I don't think you'll need to send more than one DRI2CopyRegion
per frame (you can just use XFixes to union the regions if you really
have several regions to copy for a frame), that's an implementation
detail.  After scheduling a copy, you need to ask for the new set of
buffers in case the server implements page flipping, so let's just use
DRI2GetBuffers there - it already exists and does the round trip that
will ensure the X server sees the DRI2CopyRegion request.
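
A minimal sketch of that sequence (the names here are stand-ins I made
up for illustration, not the real protocol API - the actual calls live
in the DRI2 client code in libGL/AIGLX):

```python
# Sketch of the swap sequence described above.  FakeServer stands in
# for the X server's request stream; the point is that the one-way
# DRI2CopyRegion is only guaranteed to have been processed once some
# later round trip (here, DRI2GetBuffers) has replied.

class FakeServer:
    """Stand-in for the X server's request queue."""
    def __init__(self):
        self.queue = []

    def copy_region(self, drawable, region):
        # One-way request: just queued, no reply.
        self.queue.append(("DRI2CopyRegion", drawable, region))

    def get_buffers(self, drawable):
        # Round trip: by the time the reply arrives, every earlier
        # request on this connection (including the copy) has been
        # processed by the server.
        processed = list(self.queue)
        self.queue.clear()
        # If the server chose to page flip, the buffers returned here
        # already reflect the swap.
        return processed, ["front", "back"]

def swap_buffers(server, drawable, region):
    server.copy_region(drawable, region)               # schedule the copy
    processed, buffers = server.get_buffers(drawable)  # sync point
    return processed, buffers

server = FakeServer()
processed, buffers = swap_buffers(server, drawable=42,
                                  region=(0, 0, 640, 480))
print(processed)  # the copy was seen before the reply came back
print(buffers)
```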
Writing all this, I'm thinking that it might be simpler to just make
DRI2CopyRegion a round trip.  Requiring the DRI2 client to call
DRI2GetBuffers after sending DRI2CopyRegion seems a bit flaky, and
implementation-wise we can share the marshalling/demarshalling code
for the DRI2 buffers.  That will let us return the frame number the
copy got scheduled for (and the X server knows, because it knows which
CRTC it scheduled the swap on for the drawable).  Clients that want to
specify an absolute frame number will have to submit the first
DRI2CopyRegion using a relative frame number, but after that they can
compute the exact number themselves.  Doing it this way also lets us
optimize the buffers we return - only the source and destination
buffers for DRI2CopyRegion will change in case the X server chooses to
do page flipping, so we only need to send those back.  And then the
DRI2 client code in the loaders (libGL and AIGLX) doesn't need to know
the full set of buffers the DRI client uses, just source and
destination, which gets rid of an annoying implementation nit I was
mulling over.
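
Roughly what the round-trip version would look like from the client's
side (the field names and the relative/absolute convention here are my
assumptions for illustration, not settled protocol):

```python
# Sketch of the proposed round-trip DRI2CopyRegion: the reply carries
# the frame the swap was actually scheduled for, plus only the
# source/destination buffers, since those are the only ones that can
# change under page flipping.

class Server:
    """Stand-in for the X server; tracks the CRTC frame counter."""
    def __init__(self, current_frame=100):
        self.frame = current_frame

    def copy_region(self, drawable, target_frame=None):
        # target_frame=None models a relative request: the server
        # schedules the copy for the next frame on the drawable's CRTC.
        # (drawable is unused in this sketch.)
        scheduled = self.frame + 1 if target_frame is None else target_frame
        self.frame = scheduled
        return {"frame": scheduled,
                "buffers": {"src": "back", "dst": "front"}}

server = Server()
# First swap is relative; the reply tells us which frame it landed on.
reply = server.copy_region(drawable=42)
# From here on the client can compute absolute targets itself.
next_target = reply["frame"] + 1
reply2 = server.copy_region(drawable=42, target_frame=next_target)
print(reply["frame"], reply2["frame"])
```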
Alright, let me update the spec one more time and try to get the
implementation in line. I should be able to push this out to ~krh