Road map for remaining pixman refactoring
Soeren Sandmann
sandmann at daimi.au.dk
Mon Jun 8 21:40:36 PDT 2009
Simon Thum <simon.thum at gmx.de> writes:
> Soeren Sandmann wrote:
> > - bits_image_fetch_pixels() then calls a low-level pixel fetcher
> > whose job it is to convert from whatever the image format is to
> > PIXMAN_a8r8g8b8.
> Just a question: does that imply algos will work on 8-bit data? I'm
> asking since that would block the easy path to gamma-correct ops which
> you already explored in the gamma branch.
No, the existing support for compositing with 16-bit channels will not
go away, and support for other formats could be added. But currently,
for almost all formats, the 16-bit support works by first fetching
8-bit channels and then expanding those to 16 bits.
> Also, I'd like to take the chance to comment on some things in the
> refactoring document:
>
> > - Luminance vs. coverage for the alpha channel
> > Whether the alpha channel should be interpreted as luminance
> > modulation or as coverage (intensity modulation). This is a
> > bit of a departure from the rendering model though. It could
> > also be considered whether it should be possible to have
> > both channels in the same drawable.
> I think that coverage is the way to go. Luminance (most likely what is
> meant is Luma: http://en.wikipedia.org/wiki/Luma_(video)) is scaled by
> simply multiplying components by one factor; in other words, there is
> always a coverage which corresponds to a chosen luma factor inside
> 0..1.
Yes, I meant brightness ("perceptual luminance").
I don't think it is correct to always treat alpha as
coverage. Ignoring all gamma issues, suppose someone selects a 50%
translucent white in cairo, then composites it on top of a black
background. The resulting pixels will be 50% gray, and, more
importantly, they will *look* 50% gray because sRGB is roughly
perceptually uniform, which is exactly what the user would expect.
It is difficult to argue that this outcome is somehow wrong.
There are several other cases where alpha really should be treated as
a brightness modulation: gradients, fading, and probably image
overlays come to mind. Generally, when the alpha value is explicitly
given by the user, brightness modulation is probably what he had in
mind.
On the other hand, when the alpha value comes from antialiased polygon
rasterization, an intensity modulation is clearly desired.
Ideally, images would have both a coverage and a translucency channel.
> > - Alternative for component alpha
> > - Set component-alpha on the output image.
> > - This means each of the components are sampled
> > independently and composited in the corresponding
> > channel only.
> > - Have 3 x oversampled mask
> > - Scale it down by 3 horizontally, with [ 1/3, 1/3, 1/3 ]
> > resampling filter. Is this equivalent to just
> > using a component alpha mask?
> If I got it right, this is basically how one creates a component-alpha
> image. As I view it, component-alpha should be a 'mastered'
> representation, a final step before compositing. IOW, except for
> compositing there is nothing sensible to do with component-alpha
> images. This is simply so because CA images already encode the
> anticipated layout of the components on screen.
>
> So what you're describing is not an alternative, but the preceding
> step. Maybe you could clarify the intent a bit?
The idea is that you could do the component-alpha *after* compositing,
and the question is whether that would just be equivalent to using a
component-alpha mask, or if it would produce higher quality. This is
similar to the idea of full-screen antialiasing, which can be viewed
as post-compositing supersampling.
Both the intensity-vs.-brightness and the component-alpha text are in
the 'maybe crack' section, so don't take either as any sort of
fully-formed master plan.
Soren