Projective transforms in RandR

Keith Packard keithp at keithp.com
Sat Jul 14 12:52:47 PDT 2007


I've got this application which needs to scale screen output from
1024x768 to 1024x600 -- the device screen just isn't as big as many web
pages would like it to be. As the device doesn't have any kind of magic
down-sampling display hardware, this needs to be done in software.

If you look at the current RandR rotation support, you'll note that it
computes a projective transformation matrix and just uses Render to
rotate the application-visible frame buffer to a shadow which is mapped
to the hardware. Making this accept an arbitrary projective
transformation matrix turns out to be fairly easy. A nice side-effect of
the implementation rework is that reflection now works.
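
For concreteness, the matrix for the scaling case above is nothing
exotic. Here's a rough client-side sketch using the Render types (an
illustration of the matrix shape only, not the server's shadow-update
code); Render applies the transform to destination coordinates to find
source coordinates, so shrinking 1024x768 into 1024x600 means a y
scale of 768/600:

#include <X11/extensions/Xrender.h>

/* Build a pure-scale projective matrix.  The transform maps
 * destination coordinates back to source coordinates, so the scale
 * factors are src/dst. */
static XTransform
scale_transform(double src_w, double src_h, double dst_w, double dst_h)
{
    XTransform t = {{
        { XDoubleToFixed(src_w / dst_w), XDoubleToFixed(0), XDoubleToFixed(0) },
        { XDoubleToFixed(0), XDoubleToFixed(src_h / dst_h), XDoubleToFixed(0) },
        { XDoubleToFixed(0), XDoubleToFixed(0), XDoubleToFixed(1) },
    }};
    return t;
}

/* e.g. scale_transform(1024, 768, 1024, 600) for the case above */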

In any case, now that the server can support arbitrary projective
transformations, the question is how I should expose these through the
protocol.

Right now, I've kludged together a pending TRANSFORM property: set
that, change the mode, and the new mode will use the contents of the
TRANSFORM property to modify the shadow update path. I fixed the mouse
motion code
to also follow the transformation, so pointer coordinates are reflected
accurately. What I didn't do is run the mouse image through the
transformation, so you get the original untransformed mouse image.
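
To make the shape of that kludge concrete, here's roughly what setting
such a property could look like from the client side. The property name
and layout used here (nine 32-bit values holding a row-major 16.16
fixed-point matrix) are assumptions for illustration, not a documented
format:

#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/extensions/Xrandr.h>
#include <X11/extensions/Xrender.h>

/* Stuff a 3x3 fixed-point matrix into a pending TRANSFORM output
 * property; the next mode set would then pick it up. */
static void
set_transform_property(Display *dpy, RROutput output, const XTransform *t)
{
    Atom prop = XInternAtom(dpy, "TRANSFORM", False);
    long values[9];     /* format-32 property data is passed as longs */
    int  row, col;

    for (row = 0; row < 3; row++)
        for (col = 0; col < 3; col++)
            values[row * 3 + col] = t->matrix[row][col];

    XRRChangeOutputProperty(dpy, output, prop, XA_INTEGER, 32,
                            PropModeReplace,
                            (unsigned char *) values, 9);
}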

There are three problems with this implementation:

 1) The existing "mode must fit within framebuffer" constraints
    no longer make sense.  I could simply eliminate them, potentially
    using the existing shadow mechanism to ensure that a scanout
    buffer of sufficient size is available.

 2) When reporting the 'size' of the CRTC through RandR and Xinerama,
    the DIX code will continue to refer only to the mode and rotation;
    the result is that applications see an inaccurate picture of the
    system.  (A sketch of the corrected size computation follows this
    list.)

 3) This is really a property of the CRTC, not the output, and yet we
    don't have 'crtc properties'. I kludge this by just looking at all
    of the outputs and pulling the first TRANSFORM I find. Setting
    different transforms on different outputs has no effect.
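
For the second problem, the fix would presumably be to report the
transformed extent of the mode rather than the raw mode size. A minimal
sketch of that computation (an illustration of the idea, not the DIX
code): push the four corners of the mode rectangle through the matrix
and take the bounding box.

#include <math.h>

/* Given a 3x3 projective matrix and the mode size, compute the size of
 * the transformed region's bounding box. */
static void
transformed_size(const double m[3][3], int mode_w, int mode_h,
                 int *out_w, int *out_h)
{
    double corners[4][2] = {
        { 0, 0 }, { mode_w, 0 }, { 0, mode_h }, { mode_w, mode_h }
    };
    double min_x = HUGE_VAL, min_y = HUGE_VAL;
    double max_x = -HUGE_VAL, max_y = -HUGE_VAL;
    int i;

    for (i = 0; i < 4; i++) {
        double x = corners[i][0], y = corners[i][1];
        double w  = m[2][0] * x + m[2][1] * y + m[2][2];
        double tx = (m[0][0] * x + m[0][1] * y + m[0][2]) / w;
        double ty = (m[1][0] * x + m[1][1] * y + m[1][2]) / w;
        if (tx < min_x) min_x = tx;
        if (ty < min_y) min_y = ty;
        if (tx > max_x) max_x = tx;
        if (ty > max_y) max_y = ty;
    }
    *out_w = (int) ceil(max_x - min_x);
    *out_h = (int) ceil(max_y - min_y);
}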

I could fix these by making the transform explicit in the protocol
instead of using a property.
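
To give a feel for what "explicit in the protocol" might mean, a purely
hypothetical per-CRTC request could carry the matrix directly. None of
these names exist in the protocol today; this is just a sketch in the
style of the existing RandR request structs:

#include <X11/Xmd.h>

/* Hypothetical wire layout: a per-CRTC request carrying a 3x3
 * projective matrix in 16.16 fixed point (names invented for
 * illustration only). */
typedef struct {
    CARD8   reqType;        /* RandR major opcode */
    CARD8   randrReqType;   /* hypothetical minor opcode */
    CARD16  length;
    CARD32  crtc;           /* CRTC the transform applies to */
    CARD32  matrix[9];      /* row-major 3x3, 16.16 fixed point */
} xHypotheticalRRSetCrtcTransformReq;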

A second-order effect is that this mechanism overlaps with the existing
LVDS scaling techniques. The LVDS output always runs at the native
resolution and frequency; changing to a different mode simply engages
the LVDS scaler to source a smaller number of pixels. The result is that
you really have only one mode with an LVDS panel, but the hardware will
scale from an arbitrary source image. Effectively, we're lying right now
about the modes the LVDS supports, and we limit what people can do by
limiting the scaling to a subset of the potential capabilities of the
hardware. Yes, you can add a new mode to the LVDS output and use that,
but still, you'll be scanning at the native LVDS frequency. Should we
remove this kludge and instead expose the hardware capabilities
directly?

The code for this stuff is on the randr-transform branches of:

xserver  git://people.freedesktop.org/~keithp/xserver.git

xrandr   git://people.freedesktop.org/~keithp/xrandr.git


-- 
keith.packard at intel.com