[Mesa-dev] What use do swap interval > 1 and OML_sync_control divisor and remainder have?

Pekka Paalanen ppaalanen at gmail.com
Tue Jan 28 02:28:53 PST 2014


Hi Ian and Jason,

On Mon, 27 Jan 2014 12:26:23 -0700
Ian Romanick <idr at freedesktop.org> wrote:

> On 01/24/2014 04:32 AM, Pekka Paalanen wrote:
> > Hi,
> > 
> > I am investigating what kind of Wayland protocol extensions would be
> > needed to support proper presentation timing. Looking at existing
> > work, I am wondering whether two things have any real
> > use.
> > 
> > Where is swap interval (e.g. eglSwapInterval) greater than one
> > useful? Intervals 0 and 1 I understand, and Mesa EGL Wayland
> > already supports those. But when would you like to make your
> > framerate a fraction of the display's?
> 
> There are a number of theoretical uses, but I don't know that we've
> ever seen any in the wild.
> 
> One is video playback.  You likely want 30fps there.

I would hope that no video player will use swap interval as a means of
target timing, because the buffer queueing protocol I'm planning is
supposed to be superior for accurately timed video presentation. The
protocol will also be usable with EGL provided content, if the EGL
implementation can cope with buffers being reserved by the display
server for longer than usual.
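
To illustrate, a player using the queueing protocol would derive a
target time for each video frame from its own stream clock instead of
counting swaps. A minimal sketch, with purely illustrative names (none
of this is protocol):

#include <stdint.h>

/* Illustrative only: the UST-like target time for a video frame,
 * derived from the playback start time and the frame's presentation
 * timestamp.  The output's refresh rate never enters the calculation,
 * which is why queueing suits video better than swap interval. */
static uint64_t
frame_target_time_ns(uint64_t playback_start_ns, uint64_t frame_pts_ns)
{
        return playback_start_ns + frame_pts_ns;
}

The compositor would then pick the refresh closest to that target,
whatever the output's actual refresh rate happens to be.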

> Imagine that you have a game that only needs 30fps to be playable.
> When you're running on battery, you may want the system to throttle
> you to a lower framerate to save battery.
> 
> You could also have a game that can always hit at least 30fps, but
> sometimes it may go higher.  Using a swap interval of 2 gives the
> game a consistent framerate.  Sometimes that is better.
> 
> 120Hz monitors.

These are good points. For at least the first and the last, though, I
can easily argue that something other than the application itself
setting the swap interval would work better. I can even see a
compositor user option "limit this window/application to <a fraction of
the refresh rate> Hz", which is already possible in Wayland without any
protocol changes. Whether that would be a good UI is another question.
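
For reference, the knob under discussion is the existing EGL call
below; intervals 0 and 1 are what Mesa EGL Wayland already handles.
The EGL spec silently clamps the value to the config's
EGL_MIN_SWAP_INTERVAL..EGL_MAX_SWAP_INTERVAL range, so a request like
this may or may not have any effect:

#include <EGL/egl.h>

/* Ask to present at most once every two vblanks, i.e. 30 fps on a
 * 60 Hz output.  Applies to the surface current on the calling
 * thread's context. */
static EGLBoolean
request_half_refresh_rate(EGLDisplay dpy)
{
        return eglSwapInterval(dpy, 2);
}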

After this and the discussion on #xorg-devel, I am now fairly confident
that I do not have to design for swap interval > 1 support at this
time. If such support really is needed, I see two possibilities already.

- Use the buffer queueing protocol to target presentation at last
  realized presentation time plus two frame periods. EGL internally
  could keep on waiting for the usual frame callback (a Wayland protocol
  feature) like it does now for swap interval = 1. Enabling this
  occurred to me yesterday, and I have it in my buffer queue plans now.

- Add a new request akin to wl_surface.frame, which takes a parameter
  specifying how many output refresh cycles should pass since the last
  presentation before this presentation is executed (see the sketch
  below).
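
To make the second option concrete, a hypothetical client-side helper
could look like the one below; wl_surface_frame() is the only real
call in it, everything else is made up for illustration:

#include <stddef.h>
#include <stdint.h>
#include <wayland-client.h>

/* Hypothetical: ask for a frame callback that fires only after the
 * given number of output refresh cycles have passed since the last
 * presentation of this surface. */
static struct wl_callback *
surface_frame_after_cycles(struct wl_surface *surface, uint32_t cycles)
{
        if (cycles <= 1)
                return wl_surface_frame(surface); /* existing behaviour */

        /* cycles > 1 would need the new request proposed above;
         * there is nothing to call for it yet. */
        return NULL;
}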

> > When are the target-MSC related remainder and divisor parameters as
> > defined in the GLX_OML_sync_control useful? Why does also X11
> > Present protocol include remainder and divisor?
> 
> X11 Present has it to support GLX_OML_sync_control.  I believe that
> GLX_OML_sync_control has it to support playback of content on monitors
> that aren't 60Hz.  There used to be these things called CRTs, and some
> of them had wonky refresh rates... like 72Hz.

But the divisor and remainder apply only if the original target-MSC is
missed, causing the presentation to be postponed to a later point in
time determined by matching the modulus of MSC. I still don't really
understand when that was useful, or how it was even used.
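
For reference, my reading of the spec is that a swap submitted with
glXSwapBuffersMscOML(dpy, drawable, target_msc, divisor, remainder)
completes at the first retrace whose MSC is at least target_msc and,
when divisor is non-zero, also satisfies MSC % divisor == remainder.
A rough sketch of that rule (next_swap_msc and current_msc are just
illustrative names, not anything from the spec):

#include <stdint.h>

/* Illustrative only: the MSC at which an OML_sync_control swap
 * request would complete, as I read the extension. */
static int64_t
next_swap_msc(int64_t current_msc, int64_t target_msc,
              int64_t divisor, int64_t remainder)
{
        int64_t msc = current_msc > target_msc ? current_msc : target_msc;

        if (divisor == 0)
                return msc;     /* plain wait for target_msc */

        /* the spec requires remainder < divisor, so this terminates */
        while (msc % divisor != remainder)
                msc++;

        return msc;
}

With a 72 Hz CRT, for example, divisor = 3 and remainder = 0 would
lock updates to every third retrace, i.e. a 24 fps film cadence.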

Are you saying that these were used to pretend that the monitor refresh
rate was something lower than what it really was? Did that really work
better than just presenting the content update at the earliest possible
refresh?


On Fri, 24 Jan 2014 12:27:11 -0600
Jason Ekstrand <jason at jlekstrand.net> wrote:

> On Jan 24, 2014 6:32 AM, "Pekka Paalanen" <ppaalanen at gmail.com> wrote:
> >
> > GLX_OML_sync_control defines that for interlaced displays MSC is
> > incremented for each field. With divisor and remainder you could
> > then target only top or bottom fields. Is that useful, and do we
> > care about interlaced displays anymore?
> 
> I think we do. In particular, we should care about TV set-top boxes.
> Even though most TVs are LCD, DLP, or similar, HDMI does support
> interlacing and it is still used (particularly in HDTV). I have no
> idea what implications this has for a present extension, but I think
> we could still handle it in a sane way without going for MSC.

Right, there was quite some discussion on #xorg-devel about
interlacing. All that led me to write down the following in my notes:

Supporting interlaced material and displays is punted for a
later extension. Presumably the protocol supporting interlaced
content would be as simple as having an extra wl_surface-like
request to say on which of the two fields the content should be
displayed first. The field designation would be an additional
restriction on when a content update should initially hit the
screen. I.e. if both a field and a target timestamp are given, both
conditions must pass. This means that giving a field may delay
the presentation by one output refresh cycle, assuming the
output scans out alternating fields. Additionally, there should
be an extension to inform the client which field the top-most
scanline of the buffer will hit, or equivalent information. This
assumes that the even scanlines in a buffer correspond to one
field, and the odd scanlines correspond to the other field,
regardless of how these terms are defined.
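
A tiny sketch of that gating rule, with made-up names; nothing here
exists in any protocol:

#include <stdbool.h>
#include <stdint.h>

/* Made-up field designations, only for illustration. */
enum field { FIELD_ANY, FIELD_TOP, FIELD_BOTTOM };

/* An update may hit the screen only once its target time has passed
 * and the upcoming refresh scans out the requested field; a mismatch
 * costs at most one extra refresh cycle. */
static bool
may_present(uint64_t now_ns, uint64_t target_ns,
            enum field requested, enum field upcoming)
{
        if (now_ns < target_ns)
                return false;           /* target timestamp not reached */
        if (requested == FIELD_ANY)
                return true;            /* progressive content, no gate */
        return requested == upcoming;   /* otherwise wait for the field */
}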

I hope that makes sense.


Thanks,
pq

> > I am contemplating not supporting these, because I am going to
> > propose using a UST-like clock as the "standard clock language" in
> > the Wayland present extension. Supporting MSC-based timings would add
> > complexity. Therefore I would like to know where and how the above
> > mentioned are useful, because I cannot imagine it myself.
> > 
> > Please, let me know of real actual use cases and existing software,
> > where these features offer a visible benefit and what that benefit
> > is exactly. I am not interested in what one might do in theory, I am
> > interested in real-world examples where they have proved useful.
> > Well, maybe also theories if they allow taking advantage of some
> > new cool technology.
> > 
> > Btw. if you think that using UST for presentation timing and
> > feedback is nonsense, and MSC is the only right way, let me know
> > and I can start another email thread about that detail after
> > preparing my material.

