Road map for remaining pixman refactoring

Simon Thum simon.thum at gmx.de
Tue Jun 9 04:09:47 PDT 2009


Soeren Sandmann wrote:
> Simon Thum <simon.thum at gmx.de> writes:
> 
>> Soeren Sandmann wrote:
>>>     - bits_image_fetch_pixels() then calls a low-level pixel fetcher
>>>       whose job it is to convert from whatever the image format is to
>>>       PIXMAN_a8r8g8b8.
>> Just a question: does that imply algos will work on 8-bit data? I'm
>> asking since that would block the easy path to gamma-correct ops which
>> you already explored in the gamma branch.
> 
> No, the existing support for compositing with 16 bit channels will not
> go away, and support for other formats could be added. But currently
> for almost all formats, the 16 bit support works by first fetching 8
> bit channels, then expanding those to 16 bit.
> 
Good to know.

> Yes, I meant brightness ("perceptual luminance").
> 
> I don't think treating alpha as coverage always is really
> correct. Ignoring all gamma issues, suppose someone selects a 50%
> translucent white in cairo, then composites that on top of a black
> background. The resulting pixels will be 50% gray, but more
> importantly *look* 50% gray because of sRGB's being roughly
> perceptually uniform. Which is exactly what the user would expect.
> 
> It is difficult to argue that this outcome is somehow wrong.
Nevertheless, I'll try:

This is what you expect as an artist. Ask an artist (or any normal 
person) to choose a color 'halfway between white and black', and they 
will likely mix an 18% gray. Studies have found 18.1% reflectance to be 
'half the brightness':
http://en.wikipedia.org/wiki/Gray_card

A framebuffer value of 127, on a gamma 2.2 screen, gives around 22% 
intensity, which is close to that 18%.
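
To put numbers on that, a minimal sketch (assuming a pure 2.2 power 
law, ignoring sRGB's piecewise-linear segment):

#include <stdio.h>
#include <math.h>

/* Convert an 8-bit framebuffer value to linear light, assuming a
 * pure power-law response with gamma 2.2. */
static double
to_linear (int value)
{
    return pow (value / 255.0, 2.2);
}

int
main (void)
{
    /* Prints roughly 21.6%: framebuffer 127 displays at about 22%
     * intensity, near the 18% "half brightness" gray. */
    printf ("127 -> %.1f%%\n", 100.0 * to_linear (127));
    return 0;
}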

So yes, in some circumstances this is close to what you want.

> There are several other cases where alpha really should be treated as
> a brightness modulation: gradients, fading, probably image overlays
> come to mind. Generally, when the alpha value is explicitly given by
> the user, brightness modulation is probably what he had in mind.
In my diploma thesis, I did all of this in linear space, and guess 
what, it looked great. Some of your examples have a natural origin: you 
could view a shadow (its penumbra, specifically) as a gradient. The 
catch is: where is the monitor in that picture? Where is the human? 
Scaling, compositing, all of that happens before we perceive anything, 
and is therefore independent of perception. We only perceive the end 
result.

My bottom line: stuff that happens 'out there' should be treated as 
linear/coverage. Take those nifty shadows that compositing (render) WMs 
like to draw: they are clearly coverage, so 50% black over white should 
yield 50% intensity on screen (a framebuffer value around 186), not 22% 
(127).
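
In code, the difference looks like this (same pure-2.2 assumption as 
above; just a sketch, not pixman code):

#include <stdio.h>
#include <math.h>

static double to_linear (int v)      { return pow (v / 255.0, 2.2); }
static int    to_gamma  (double lin) { return (int) (255.0 * pow (lin, 1.0 / 2.2) + 0.5); }

int
main (void)
{
    double coverage = 0.5;   /* 50% black shadow over white */

    /* Blending the gamma-encoded values directly gives 127, which
     * the screen displays at only ~22% intensity. */
    int naive = (int) ((1.0 - coverage) * 255.0 + 0.5);

    /* Blending in linear light and re-encoding gives ~186, i.e. the
     * true 50% intensity that a coverage value demands. */
    int correct = to_gamma ((1.0 - coverage) * to_linear (255));

    printf ("gamma-space blend: %d, linear blend: %d\n", naive, correct);
    return 0;
}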

For the more artistic cases directed at perceptual qualities (like some 
gradients, but not the WM shadow gradient!), luma modulation is of 
course desirable. But there are better tools for this purpose, like the 
L*a*b* space. So why go to such great lengths with luma mod?

> On the other hand, when the alpha value comes from antialiased polygon
> rasterization, an intensity modulation is clearly desired.
>
> Ideally, images would have both a coverage and a translucency channel.
I agree with your assertion but not with your conclusion.

AFAIK pixman also handles Inkscape's rendering, by proxy. The SVG 1.1 
spec allows specifying the space in which compositing should happen; 
sRGB and linearRGB are among the options:

http://www.w3.org/TR/2003/REC-SVG11-20030114/painting.html#ColorInterpolationProperty

So I'd say whether to use coverage or luma modulation is a property of 
the operation, not an additional channel's job.

Down in pixman, this simply means 'let the caller decide'. Some ops are 
done in linear space (the Xrender stuff), some gamma-encoded. Ideally 
with the default left to an environment variable to ease the 
transition.
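
As a very rough sketch of what 'let the caller decide' could look like 
(purely hypothetical names, nothing like this exists in pixman today):

#include <stdlib.h>
#include <string.h>

/* Hypothetical per-operation property -- not actual pixman API. */
typedef enum
{
    COMPOSITE_SPACE_GAMMA,   /* blend gamma-encoded values, as today  */
    COMPOSITE_SPACE_LINEAR   /* decode, blend in linear light, re-encode */
} composite_space_t;

/* Transitional default, taken from the environment, so callers that
 * don't choose explicitly can still be switched over. */
static composite_space_t
default_composite_space (void)
{
    const char *env = getenv ("PIXMAN_COMPOSITE_SPACE");

    if (env && strcmp (env, "linear") == 0)
        return COMPOSITE_SPACE_LINEAR;

    return COMPOSITE_SPACE_GAMMA;
}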

Does that make sense?

> 
>>> 	- Alternative for component alpha
>>> 	  - Set component-alpha on the output image.
>>> 	    - This means each of the components are sampled
>>> 	      independently and composited in the corresponding
>>> 	      channel only.
>>> 	  - Have 3 x oversampled mask
>>> 	  - Scale it down by 3 horizontally, with [ 1/3, 1/3, 1/3 ]
>>> 	    resampling filter.
>>> 	    Is this equivalent to just using a component alpha mask?
>> If I got it right, this is basically how one creates a component-alpha
>> image. As I view it, component-alpha should be a 'mastered'
>> representation, a final step before compositing. IOW, except for
>> compositing there is nothing sensible to do with component-alpha
>> images. This is simply so because CA images already encode the
>> anticipated layout of the components on screen.
>>
>> So what you're describing is not an alternative, but the preceding
>> step. Maybe you could clarify the intent a bit?
> 
> The idea is that you could do the component-alpha *after* compositing,
> and the question is whether that would just be equivalent to using a
> component-alpha mask, or if it would produce higher quality. This is
> similar to the idea of full-screen antialiasing, which can be viewed
> as post-compositing supersampling.
Now I get it. I think it could potentially produce higher quality, but 
I don't know which use cases you have in mind where the preconditions 
for higher quality would be met.
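
For what it's worth, my mental model of the 'preceding step' is 
something like the following sketch: collapsing a 3x horizontally 
oversampled coverage mask into a per-channel (component-alpha) mask 
with the [1/3, 1/3, 1/3] filter. A hypothetical helper, not pixman 
code:

#include <stdint.h>

/* [1/3, 1/3, 1/3] filter at sub-sample i, clamped at the edges. */
static uint8_t
box3 (const uint8_t *s, int i, int n)
{
    int a = s[i > 0 ? i - 1 : 0];
    int b = s[i];
    int c = s[i < n - 1 ? i + 1 : n - 1];

    return (uint8_t) ((a + b + c) / 3);
}

/* Turn a 3x horizontally oversampled a8 coverage mask into one
 * component-alpha pixel per destination pixel: each of R, G, B gets
 * the filtered coverage at its own subpixel position. */
static void
make_component_alpha_mask (const uint8_t *over,  /* width * 3 samples */
                           uint32_t      *mask,  /* width CA pixels   */
                           int            width)
{
    int n = width * 3;
    int x;

    for (x = 0; x < width; x++)
    {
        uint32_t r = box3 (over, 3 * x + 0, n);
        uint32_t g = box3 (over, 3 * x + 1, n);
        uint32_t b = box3 (over, 3 * x + 2, n);

        mask[x] = (r << 16) | (g << 8) | b;
    }
}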

> 
> Both the intensity vs. brightness and the component alpha text are in
> the 'maybe crack' section, so don't take either as any sort of
> fully-formed masterplan.
> 
> 
> Soren
> 


