[qubes-devel] Re: Does Xorg server pin composition buffer's MFNs?

Rafal Wojtczuk rafal at invisiblethingslab.com
Sun Apr 18 01:30:45 PDT 2010


On Fri, Apr 16, 2010 at 11:41:51AM -0400, Adam Jackson wrote:
> On Fri, 2010-04-16 at 10:47 +0200, Rafal Wojtczuk wrote:
> > On Wed, Apr 14, 2010 at 03:59:39PM -0400, Adam Jackson wrote:
> > > I'm assuming by "composition buffer" you mean the thing you're actually
> > > scanning out on the display.
> >
> > No. I mean the per-window offscreen storage mechanism, activated by the
> > XCompositeRedirectSubwindows() function and referred to as the window's
> > "backing pixmap" in http://ktown.kde.org/~fredrik/composite_howto.html.
> > Apologies if I did not make it clear enough.
> 
> Window pixmaps are like any other pixmap.  Where they live is entirely
> up to the driver.  Unaccelerated drivers keep them in host memory with
> malloc().
While we are at it: assuming a non-accelerated driver (in the actual setup,
dummy_drv.so is used), is the pixmap->devPrivate.ptr field guaranteed to hold
a pointer to the actual pixels, or is that purely implementation-dependent
(on the X server version) and liable to change in the future?
You can see the actual code at 
http://gitweb.qubes-os.org/gitweb/?p=mainstream/gui.git;a=blob;f=xf86-input-mfndev/src/qubes.c;h=6b898c25f2bc4d7c68da45764c565157b96ddd4a;hb=aca457004e731da0b642486d8b6f01a9f2b76c4d#l404
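
For context, the access pattern in question looks roughly like this from
inside the server. This is a minimal sketch, not the actual qubes.c code;
GetWindowPixmap, devKind and devPrivate are standard server-side interfaces,
but the assumption that devPrivate.ptr addresses the real pixel buffer only
holds for fb-backed, unaccelerated pixmaps:

/*
 * Minimal sketch (not the actual qubes.c code): locating a redirected
 * window's backing pixels from inside the X server.  The assumption
 * that devPrivate.ptr is the malloc()ed pixel buffer holds for
 * fb-backed, unaccelerated drivers; an accelerated driver may leave
 * it NULL or point it at a driver-private structure instead.
 */
#include <stddef.h>
#include "scrnintstr.h"
#include "pixmapstr.h"
#include "windowstr.h"

static void *
get_backing_pixels(WindowPtr win, size_t *len_out)
{
    ScreenPtr screen = win->drawable.pScreen;
    /* Composite redirects the window into this pixmap. */
    PixmapPtr pix = (*screen->GetWindowPixmap)(win);

    if (!pix || !pix->devPrivate.ptr)
        return NULL;                 /* pixels not CPU-addressable */

    /* devKind is the stride in bytes. */
    *len_out = (size_t)pix->devKind * pix->drawable.height;
    return pix->devPrivate.ptr;
}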

>  Accelerated drivers do that sometimes, and then sometimes put
> them in video memory.  Remember that you can have more video memory than
> you can see through a PCI BAR, so you might not be able to address the
> pixmap from the CPU at all.

> 
> > Briefly, the goal is to get the location of a composition buffer created by
> > the X server running in virtual machine A, and map it into the address space
> > of virtual machine B. Such a mapping has to be defined in terms of physical
> > addresses; consequently, it is crucial to make sure that the frames backing a
> > composition buffer do not change over time.
> 
> That's not going to do anything useful without some synchronization
> work.  Window pixmaps aren't created afresh for each frame.  They're
> long-lived.  If you manage to get a pixmap shared between VMs A and B,
> there's nothing to stop A from rendering into it while B is reading from
> it.
Currently, synchronization is done via Damage extension events. It seems to
work: a video player running full screen in A is displayed correctly in B,
and all other apps work fine as well.
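
A minimal client-side sketch of that scheme, for illustration; notify_b() is
a hypothetical stand-in for the Qubes inter-VM channel, and the real code
lives in the GUI daemon rather than a standalone client like this:

/*
 * Watch a redirected window for damage in VM A and tell the B side
 * that the shared backing pixels have changed.
 */
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>

static void notify_b(Drawable d) { (void)d; /* signal VM B here */ }

int main(int argc, char **argv)
{
    Display *dpy = XOpenDisplay(NULL);
    int ev_base, err_base;

    if (argc < 2 || !dpy ||
        !XDamageQueryExtension(dpy, &ev_base, &err_base))
        return 1;

    /* Window id of the redirected window, passed on the command line. */
    Window target = (Window)strtoul(argv[1], NULL, 0);

    /* One event per batch of damage; B then refetches the buffer. */
    XDamageCreate(dpy, target, XDamageReportNonEmpty);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ev_base + XDamageNotify) {
            XDamageNotifyEvent *dev = (XDamageNotifyEvent *)&ev;
            XDamageSubtract(dpy, dev->damage, None, None);
            notify_b(dev->drawable);
        }
    }
}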

> The way compositing managers handle this is by taking an X server grab
> while reading out of the window pixmap, which does prevent other client
> rendering from happening.  And as soon as you're doing _that_, just
> XGetImage the pixels out instead of playing funny MMU games, it'll
> probably be faster.
I beg to differ. XGetImage is a hopeless CPU hog: it transfers the actual
pixels over the connection to the X server, i.e. over the Unix socket. So you
would need XGetImage + a copy of the XImage to B + XPutImage, which is 3
memory copies, 2 of them involving additional Unix-socket overhead. With the
MIT-SHM extension you can get rid of the Unix-socket transfers, but it is
still 3 memcpys.
In the current Qubes code, the X server running in B maps the composition
buffers from A and then uses a (slightly extended) version of XShmPutImage to
display them. There are no memory copies besides the ones XShmPutImage needs
to push the content to VGA memory.
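
For comparison, the standard (unextended) MIT-SHM path looks roughly like
this; a generic sketch with an arbitrary window size, not the Qubes variant,
which effectively swaps the shmat()ed segment for pages mapped from VM A:

/*
 * Client renders into a SysV shared segment; XShmPutImage lets the
 * server read it directly, avoiding the Unix-socket pixel transfer.
 */
#include <stdlib.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy || !XShmQueryExtension(dpy))
        return 1;

    int scr = DefaultScreen(dpy);
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr),
                                  DefaultDepth(dpy, scr), ZPixmap,
                                  NULL, &shminfo, 640, 480);

    shminfo.shmid = shmget(IPC_PRIVATE,
                           img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = True;
    XShmAttach(dpy, &shminfo);       /* server maps the same segment */

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     0, 0, 640, 480, 0, 0, 0);
    XMapWindow(dpy, win);

    /* ... fill img->data with pixels (in Qubes: pages from VM A) ... */

    /* The only copy left is server-side: shared memory -> screen. */
    XShmPutImage(dpy, win, DefaultGC(dpy, scr), img,
                 0, 0, 0, 0, 640, 480, False);
    XSync(dpy, False);

    XShmDetach(dpy, &shminfo);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    return 0;
}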

BTW, is it possible that, in the case of an accelerated driver, XShmPutImage
uses DMA from the MIT-SHM region directly to VGA memory?

Regards,
Rafal Wojtczuk
The Qubes OS Project
http://qubes-os.org


