[RFC] DeepColor Visual Class Extension
Alex Goins
agoins at nvidia.com
Thu Oct 19 02:46:46 UTC 2017
On Sun, 15 Oct 2017, Keith Packard wrote:
> Alex Goins <agoins at nvidia.com> writes:
>
> > Thanks, Adam.
> >
> > Here's an updated version of the spec:
>
> This is looking very good. I don't have any architectural concerns at
> this point, just some editorial comments.
Thanks, that's good to hear.
> > Rendering to DeepColor windows using the core protocol, however, is loosely
> > defined.
>
> It seems to be actually fairly well defined to me. If core pixel values
> could be 'round-tripped' through the deep storage, then core rendering
> would be exact. That seems pretty simple with the UINT* formats when
> used with a non-linear colorspace -- just use an identity mapping.
Core rendering would be exact relative to core operations, e.g. GetImage, but
relative to the HDR format it would be more loosely defined. That's probably
just a matter of wording.
Using the identity mapping for UINT* formats would work, as long as we don't
care about an accurate representation of the SDR content when interpreted by HDR
consumers. With a more accurate transfer function, SDR content could look
correct even for HDR consumers. The spec leaves that up to the server, however,
hence it being "loosely defined."
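For the UINT* case, a fixed mapping such as bit replication would keep the core
round trip exact while still filling the deep channel's code range; a minimal
sketch (the 10-bit depth and the bit-replication choice are my assumptions
here, not something the spec mandates):

```python
# Sketch: round-tripping 8-bit core pixel values through a 10-bit UINT
# deep channel. Bit replication maps an 8-bit code v to a 10-bit code by
# repeating the two high bits in the low bits; shifting right by 2
# inverts it exactly, so core rendering and GetImage stay lossless.

def to_deep(v8):
    """Map an 8-bit channel value to a 10-bit channel value."""
    return (v8 << 2) | (v8 >> 6)

def to_core(v10):
    """Map a 10-bit channel value back to 8 bits."""
    return v10 >> 2

# Every 8-bit value survives the round trip, so the mapping is 1:1.
assert all(to_core(to_deep(v)) == v for v in range(256))
# 8-bit white maps to 10-bit full scale.
assert to_deep(255) == 1023
```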
> For a linear color space using FP format, I wonder if you could define a
> function that would result in reliably transferring 256 levels in and
> out of each primary?
I don't see why not, but the same points about HDR consumers apply.
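A quick self-contained check suggests Keith's idea is workable: the sRGB
transfer function spreads the 256 code values widely enough that FP16 can hold
each linearized value distinctly, so rounding on the way back recovers them
exactly. This sketch uses Python's half-float packing to simulate FP16
storage; the choice of the sRGB curve is my assumption:

```python
import struct

# Sketch: push all 256 sRGB code values through FP16 linear storage and
# back. If every value survives, core rendering into an FP16 linear
# visual can be made exact.

def srgb_to_linear(v8):
    c = v8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(lin):
    c = lin * 12.92 if lin <= 0.0031308 else 1.055 * lin ** (1 / 2.4) - 0.055
    return round(c * 255.0)

def as_fp16(x):
    # Quantize to IEEE half precision, as an FP16 pixel would store it.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# All 256 levels transfer reliably in and out of the linear FP16 channel.
assert all(linear_to_srgb(as_fp16(srgb_to_linear(v))) == v for v in range(256))
```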
> The principle reason I ask is that if pixel values can be reliably
> stored in deep visuals, then the existing testing infrastructure can be
> used to validate the implementation.
Good point. The only part that really matters there is that there is a fixed 1:1
mapping between the SDR pixel values and a set of HDR pixel values. Whether the
pixel values correspond to similar shades in both formats shouldn't matter to
core operations, but it would to the HDR display.
The point about the 1:1 mapping between pixel values isn't very explicit in the
current wording of the spec, but could be included as an explicit constraint on
the server's transfer function.
I could see that complicating things slightly for implementations that want to
preserve accurate color both when displayed as HDR and when interpreted as SDR,
but it shouldn't be unachievable, especially since the former case would only be
about looking correct, rather than perfect accuracy.
> > scRGB_Linear describes an scRGB color space with linear OETF. scRGB uses the
> > same primaries and white point as sRGB, and the linear encoding is best used
> > with an FP16 pixel format.
>
> I think you should define "OETF" in the spec; most people reading this
> will not know where to start looking for a definition.
Yes, I can do that.
> > COLORSPACE { type: COLORSPACETYPE
> > gamma: FIXED }
>
> Could you just use a FLOAT here? glx uses IEEE floats all over its
> protocol, so it wouldn't be a new type on the wire. I'm a bit sorry to
> not have used floats for Render; that was done when many smaller
> embedded systems still lacked floating point hardware and we were
> concerned about the performance implications of floats in the rendering
> path. For this extension, there's no performance impact here, and using
> a real float would be better. Heck, if you like, use a 64-bit IEEE float.
Yeah, I chose FIXED because I was using Render as a reference and wasn't aware
of the possibility of using floats in the protocol. I agree that a float would
be better.
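For comparison, here's a sketch of what the two encodings of a gamma value
look like on the wire: Render's FIXED is a 16.16 signed value, while the float
variant would just be the 32-bit IEEE bit pattern, as GLX already sends:

```python
import struct

# Sketch: encoding gamma = 2.4 as a Render-style 16.16 FIXED versus a
# 32-bit IEEE float, as GLX-style protocol would carry it.
gamma = 2.4

# FIXED: value * 2^16, rounded, sent as a signed 32-bit integer.
fixed_wire = struct.pack('<i', round(gamma * 65536))
fixed_back = struct.unpack('<i', fixed_wire)[0] / 65536

# FLOAT: the IEEE single-precision bit pattern, also 4 bytes.
float_wire = struct.pack('<f', gamma)
float_back = struct.unpack('<f', float_wire)[0]

# Both fit in one CARD32 slot, but the float round-trips with far less
# error (roughly 1e-7 here versus roughly 1e-5 for FIXED).
assert len(fixed_wire) == len(float_wire) == 4
assert abs(float_back - gamma) < abs(fixed_back - gamma)
```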
> > DPCSelectInput
> >
> > window: WINDOW
> > enable: SETofDPCSELECTMASK
> >
> > Errors: Window, Value
>
> Either this request should always deliver an event or never deliver an
> event. Having it 'sometimes' deliver events seems messy. I'd suggest
> just having it always deliver one of each of the events selected; it's
> easy to implement and, as you say, avoids race conditions when the
> developer combines this with the query in the wrong order. That's
> actually a nice cleanup compared with similar functionality in existing
> functions.
Fair enough.
> > A composite manager must use a color space/encoding supported by the
> > display(s) when compositing into the target window.
>
> What error is returned if this isn't done? And wouldn't a regular
> application, running in non-composited mode, also have the same
> requirement? After all, a compositing manager is just a regular client
> drawing to a regular window.
The wording should probably be changed to "should." As written, there is no
error if a composite manager chooses something not supported by the display,
just no expectation that the results will be displayed accurately. The
capabilities of the display aren't allowed to change, only the preferences (the
expectation being that the server will convert internally anything that doesn't
match perfectly to the mode on a given display), so there's no excuse for a
compositor to choose an option that the display doesn't support.
An application, running in non-composited mode, does have the same requirement.
However, in this case, the server is responsible for "compositing." The
"compositor" properties don't apply only to composite managers, they also apply
to the server's capabilities when no compositor is running.
The reason for separating "compositor" and "display" capabilities is to solve
the problem of a composite manager overriding the "compositor" capabilities and
then having no visibility into what the server supports or prefers.
If an application is not a composite manager, i.e. does not redirect the root
window hierarchy and does not override compositor capabilities, it would look at
the compositor capabilities regardless of whether or not a composite manager is
running. An application shouldn't have to care whether a composite manager is
running; it just checks the "compositor" capabilities with the understanding that
they are subject to change. Whether that's because the in-server
compositor/composite manager changed capabilities/preferences, or because a
composite manager started where there wasn't one before, is irrelevant to the
application.
> > DPCOverrideCompositorCapabilities
>
> > The set of outputs represented in 'overrides' must be complete, and the set
> > of color spaces/encodings associated with each of them must be identical or
> > the capabilities will be cleared instead of updated to the new set, still
> > generating a DPCCompositorChangeNotify event.
>
> I think this means the compositing manager is required to emit one of
> these requests for each output? If so, why not simply place all of those
> in a single request so that we can verify that the compositing manager
> did the right thing?
The scores associated with the color spaces could vary between outputs (say, if
the outputs are driving different HDR modes and the compositor wants to indicate
to applications that a certain output prefers a certain input, since
applications don't pay attention to the display capabilities), and representing
that in one request would require a variable length list of variable length
lists, screwing up the encoding.
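To illustrate the encoding concern, here's a sketch of a per-output request
body; the field layout is hypothetical, not the spec's actual wire format:

```python
import struct

# Sketch of why per-output requests keep the encoding simple: each
# request is a fixed header followed by a single variable-length list.

def encode_override(output, entries):
    """One DPCOverrideCompositorCapabilities-style request for a single
    output: header (output, count) plus a flat list of
    (colorspace, score) pairs."""
    body = b''.join(struct.pack('<II', cs, score) for cs, score in entries)
    return struct.pack('<II', output, len(entries)) + body

# Per-output scores can differ freely; each request stays one flat list.
req = encode_override(0x42, [(1, 100), (2, 50)])
assert len(req) == 8 + 2 * 8

# A single combined request would instead need a list of such
# variable-length sublists, i.e. length fields nested inside the
# payload, which X protocol encodings generally avoid.
```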
I actually just noticed an artifact from when I originally wrote this request
as you described: the last paragraph references 'overrides', which was
originally a list of capabilities for each output. I must have missed it when
rewriting.
> What happens when another output is added to the screen?
Good question. I suppose that the composite manager (if applicable; otherwise
it's all handled by the server anyway) would have to listen for
RROutputChangeNotify and then use DPCOverrideCompositorCapabilities on that
display, but that would still result in a transient state in between.
Maybe the server could initialize the capabilities to a list of colorspaces
identical to the other outputs (since they are required to be identical other
than score), with all of the scores set to 0 until the composite manager
overrides it.
In any case, the new output would be expected to have the same capabilities as
the existing outputs, based on existing constraints. The capabilities, after
all, have more to do with what the composite manager understands as input than
anything specific to particular outputs. The reasoning for different
capabilities per output is simply so that an application can intelligently pick
an option based on the preferences of each output and the current location of
the window.
> > For applications, this must be an option supported by the compositor, and
> > must be updated in response to DPCCompositorChangeNotify events. Rendering
> > using options not supported by the compositor will result in undefined
> > graphical behavior.
>
> I'd really rather this be 'should' and not 'must'. Given that we have
> reasonable well-defined behaviour for conversion to core pixel values,
> any colorspace not directly supported by the compositing manager can be
> supported using core/render operations, leaving clients still functional
> even with a mismatch between their colorspace and the compositing
> manager colorspace.
Given prior discussion, I agree. "Must" should probably be something enforced
by the protocol, as this is not. Applications "should" use options supported by
the compositor just as composite managers "should" use options supported by the
display, since there's no expectation of accuracy otherwise.
Since color space/encoding, as far as this spec is concerned, is orthogonal to
the actual storage format, choosing incorrectly should never be catastrophic;
the result just probably won't look right (though it will likely still be
recognizable).
> In its current form, this places a higher burden on applications than
> compositing managers here -- they *must* support whatever colorspace the
> compositing manager offers, while the compositing manager is free to
> support only those it likes.
Well, it's kind of a given that the producer must produce output that the
consumer can take as input, in all cases, with the server downstream from the
composite manager (if applicable) and the composite manager downstream from the
applications.
I agree that the requirement should be loosened from a protocol perspective.
All of this should just be about helping each stage of the pipeline negotiate a
desirable configuration with the later stage, and about indicating its
configuration so its output can be consumed accurately.
The exception is pixel format, which has to be explicit, but the lack of
flexibility there means it isn't so complex.
> > Composite managers are expected to use this request to indicate to the
> > server which color space is being used for rendering into the target window.
> > In this case, the option must be supported by the display. Rendering using
> > options not supported by the display will result in undefined graphical
> > behavior.
>
> I'm not happy with 'undefined' behaviour here. We've got three moving
> pieces here (display, compositor, app) and things only work right when
> they all follow the rules, and those rules are subject to arbitrary
> change as you start/stop compositing managers and connect/disconnect
> displays.
>
> I think a simple requirement is that the application be allowed to
> choose any colorspace it likes and that, at worst, the image presented
> on the screen will have been restricted to the associated core visual
> precision. That's well defined by the spec and will mean that
> incompatibilities between applications, compositing manager and display
> will only result in lower-fidelity images, not 'undefined' results.
That is something to consider, but how would you do that? Restricting to SDR is
easy when we're only talking about core rendering being read by core operations,
but what about output produced by HDR graphics APIs?
How does the server know how to restrict the output of the composite manager to
SDR if it doesn't understand the format the output is in? The same issue occurs
between the compositor and the application.
Later stages in the pipeline have no way to force earlier stages to put a
certain format of pixels into their buffers, they can only make it clear which
formats are supported.
They could make a best effort, but that's really no better than "undefined."
> > 9. Issues
> >
> > This spec does not address the suggestion that window color space/encoding
> > should reflect that of the next frame. It is difficult to determine what the
> > "next frame" is without the Present extension, and a concrete solution has yet
> > to be found.
> >
> > * Perhaps this functionality could be the domain of an interaction between
> > the Present extension and DeepColor-aware clients, where clients hand off
> > the responsibility for finalizing the color space/encoding of a window to
> > the Present extension, which would atomically update it with the
> > presentation of the next frame before generating
> > DPCWindowChangeNotify.
>
> Yup. Should work fine. This should be done using a separate request so
> that applications can still set the colorspace without using
> Present. Essentially, the application would do
>
> SetNextPresentColorspace
> PresentPixmap
>
> and the colorspace seen by the compositor when it receives the
> associated damage will be the new one. Hrm. It will receive that in an
> event, presumably directly before the Damage event. The compositor would
> be wise to look for a damage event when it receives the new colorspace
> information, or perhaps it could infer that only future contents as
> indicated by Damage events should be interpreted in the new colorspace?
That makes sense.
Seems it could be handled either way, the understanding just being that all
future damage after receiving a DPCWindowChangeNotify event would be in the new
color space. That's not a bad convention to go by even without the Present
extension; it just wouldn't be guaranteed without it.
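The convention can be sketched as a tiny compositor event loop: every damage
event after a DPCWindowChangeNotify is interpreted in the new color space.
Event and field names here are hypothetical stand-ins:

```python
# Sketch: a compositor tags all damage that arrives after a
# DPCWindowChangeNotify with the window's new color space.

def interpret(events):
    colorspace = 'sRGB'           # assumed initial window color space
    frames = []
    for kind, payload in events:
        if kind == 'DPCWindowChangeNotify':
            colorspace = payload  # applies to all subsequent damage
        elif kind == 'Damage':
            frames.append((payload, colorspace))
    return frames

# A SetNextPresentColorspace + PresentPixmap pairing would guarantee this
# ordering; without Present the client simply follows it as a convention.
events = [('Damage', 'frame0'),
          ('DPCWindowChangeNotify', 'scRGB_Linear'),
          ('Damage', 'frame1')]
assert interpret(events) == [('frame0', 'sRGB'), ('frame1', 'scRGB_Linear')]
```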
Thanks,
Alex
>
> --
> -keith
>