Performance change from X in Fedora Core 4 to Fedora Core 5

Felix Bellaby felix at
Sun Jul 16 06:17:03 PDT 2006

On Sun, 2006-07-16 at 09:53 +0900, Carsten Haitzler wrote:
> On Sat, 15 Jul 2006 15:42:33 -0700 Allen Akin <akin at> babbled:
> > On Sat, Jul 15, 2006 at 04:42:03PM +0900, Carsten Haitzler wrote:
> > | On Fri, 14 Jul 2006 23:32:41 -0700 Allen Akin <akin at> babbled:
> > | > The static allocation problem is fixable.  Overcommitting video ram is a
> > | > bigger problem, and compositing systems are naturally more likely to
> > | > suffer from it because they require unbounded amounts of memory.
> > | 
> > | indeed they will be much more video ram hungry. it'd actually be nice to be
> > | able to query the location of a pixmap (or video vs agp vs system ram
> > | resources) and so possibly do smart things - like just free up excess
> > | pixmaps when resources get low.
> > 
> > Agreed, although to me, the most interesting question is what part of
> > the software stack has responsibility for doing the smart stuff.  As we
> > build things today, neither drivers, server, toolkits, nor apps have
> > enough information about the state of the whole system to know what to
> > do.  I think we will eventually want a component on the server side with
> > a global view (for lack of a better term, a scene graph; though it
> > wouldn't look exactly like scene graphs used by games or simulators).
> we also want it on the client side. the clients ASK for the resources - they
> don't know when to stop or slow down asking. they don't have a "you can ask from
> this point on - but you will pay the price if you do" point. as far as they are
> concerned they have unlimited resources and that is a patent fallacy. they may
> choose to ignore the info and hints and work as they do now - just keep asking
> for more resources until things literally fall apart (allocs fail), BUT they
> should have an idea of where to stop, slow down, free not-so-essential resources
> (as only the clients really know WHY those resources exist).
> > | > That's true in today's Open Source desktop environments.  It wasn't true
> > | > in the old Iris GL days, for example, so it might not always be true in
> > | > the future.  Or in Vista and recent versions of OS X.
> > | 
> > | sure - but how many "3d" apps will you have at once - windows with 3d views?
> > | not TOO many. i know where you are coming from - but i don't see things
> > | going to heavily 3d in the X world... any time soon... if ever.
> > 
> > Part of coming from the OpenGL world is that I don't believe there's
> > much real difference between 2D and 3D (and video, for that matter).  We
> > have a lot of historical reasons for having separate APIs, different
> > concepts of drawing surfaces, different models for color, etc., but
> > pretty much all of graphics boils down to transformations and
> > sampling[1], so in the end life will be better if we acknowledge that
> > and design more unified systems.
> sure - i agree. though i guess i'm coming from the POV of the conceptual "3d
> apps generally have an entirely scalable viewport where you describe a world in
> terms of polygons" and 2d apps think in pixels and demand pixel-perfect
> alignment etc. having your scrollbar have small gaps, or some scrollbars be 5
> pixels wide and some 4 because of rounding differences etc. is unacceptable.
> doing a ui in terms of fully 3d primitives (widgets, buttons etc.) invariably
> leads to a disastrous output result as rounding and certain scaling makes
> fonts look bad as they are "off-by-1" and other such things. sure - the HUDs
> on modern games are ok for example - BUT these i consider as "2d" - they think
> in the 2d world. so defining my widget hierarchy as a set of 3d primitives and
> hope that things get scaled, aligned and drawn right - in my experience, is
> not going to happen. you can argue with growing resolution the off-by-1's are
> going to matter less and less.

I am really not convinced that this is true. Integer-specified drawing
operations in GL are just as accurate as those in Xlib, as Xgl
demonstrates. Inaccuracies do arise when you start scaling or distorting
things, but that is equally true of the 2d-in-3d textured compositing
currently being used.

The real inefficiency in the current Xgl compositing scheme is that most
of the time the GL commands required to draw pixel-exact images are
being passed to the server and used to draw into a texture that is then
mapped onto the screen as a completely undistorted image. It would make
much more sense for those commands to be used by the server to draw onto
the screen directly.

> anyway - but to the point - i do agree - that as the nuts and bolts under it
> all - a texture and a pixmap should not be considered differently. a pbuffer, a
> gl buffer, window, pixmap etc. should all be 100% interchangeable pixel
> sources/destinations. the operations to draw could be done by the 3d pipeline
> FOR the 2d operations just fine - that's a matter of the drivers simply changing
> what parts of the gpu they use for 2d :)

This is missing the point: drawing onto a texture and mapping it as an
undistorted image on the screen is basically a huge waste of time. I
have deliberately done exactly this in the trivial GL based compositor
that I posted to this list. Running without compositing, my 6150
GPU can run glxgears at 1,500 fps, while Dragoran apparently gets 15,000
fps out of his 7800 GTX. However, BOTH of these cards are limited to
around 900 fps when run under my compositor!

> > | a scene graph for the whole display - imho is the job of something like
> > | croquet and is out of the scope of what we were discussing :)
> > 
> > Maybe.  The fundamental questions (like resource management) come up
> > time after time, and I think one of the reasons is that the puzzle is
> > missing a piece.  We can work around the hole for a long time, but
> > eventually we'll want to go looking for that piece. :-)
> i think resource-management-wise, it is good to provide as much information to
> as high a level as possible to ALLOW for something to be done. right now there
> just is nothing to go by. :(

If servers and compositors are not going to get scene graphs from their
clients then let us at least give the GL compositors access to the
indirect GLX streams generated by the apps or Xgl server. These could be
composited much more efficiently than the pre-rendered Pixmaps passed by
the Composite extension.

