Performance change from X in Fedora Core 4 to Fedora Core 5

Felix Bellaby felix at bellaby.plus.com
Thu Jul 13 02:59:30 PDT 2006


On Thu, 2006-07-13 at 08:01 +0200, Clemens Eisserer wrote:
> > It is give and take. You incur some extra expense for a temporary pixmap
> > allocation and de-allocation in return for not holding on to that video RAM
> > permanently and thus eventually running out of video RAM and needing to swap.
> It seems as if we don't get ahead anymore.
> 
> Keeping in mind that the extra pixmap will most likely be used
> extensively only in short timeframes (the user is actively using the
> GUI, expose events because another window is moved in front, ...) and
> that in these short timeframes the goal is to achieve the highest
> performance, I think there must be better ways than allocating and
> deallocating on every expose.
> 
> I don't see a problem with memory usage; almost all graphics cards
> have more than 16 MB, which is more than enough.
> Furthermore, almost all drivers I know about implement quite clever
> swapping algorithms, and who cares about a pixmap in RAM which isn't
> used frequently anyway...
> 
> I think a lot of VRAM can be saved if you take a bit of care about
> how long you hold the pixmap:
> 
> Solution 1: only hold it when the window is visible or in a state
> where an expose could happen.
> Solution 2: Solution 1 + some algorithm which frees the pixmap after
> it has not been used for xy seconds.
> Solution 3: Your idea ;-)
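
To put the quoted Solution 2 into code, a minimal sketch assuming plain
Xlib follows; the names, the 10 second timeout and the idea of expiring
the pixmap from an idle/timer callback are only illustrative, not how
any existing toolkit actually does it:

  /* Hypothetical client-side cache for a backing pixmap (Solution 2).
   * All names and the 10 second timeout are illustrative. */
  #include <X11/Xlib.h>
  #include <time.h>

  #define PIXMAP_IDLE_TIMEOUT 10 /* seconds */

  typedef struct {
      Pixmap pixmap;              /* None when not allocated */
      unsigned int width, height;
      time_t last_used;
  } BackingPixmap;

  /* Create the pixmap lazily on first use (or after a resize)
   * and refresh the last-used timestamp. */
  Pixmap backing_pixmap_get(Display *dpy, Window win, BackingPixmap *bp,
                            unsigned int w, unsigned int h)
  {
      if (bp->pixmap == None || bp->width != w || bp->height != h) {
          if (bp->pixmap != None)
              XFreePixmap(dpy, bp->pixmap);
          bp->pixmap = XCreatePixmap(dpy, win, w, h,
                                     DefaultDepth(dpy, DefaultScreen(dpy)));
          bp->width = w;
          bp->height = h;
      }
      bp->last_used = time(NULL);
      return bp->pixmap;
  }

  /* Called from an idle/timer handler: drop the pixmap once it has
   * been unused for longer than the timeout, returning the VRAM. */
  void backing_pixmap_expire(Display *dpy, BackingPixmap *bp)
  {
      if (bp->pixmap != None &&
          time(NULL) - bp->last_used > PIXMAP_IDLE_TIMEOUT) {
          XFreePixmap(dpy, bp->pixmap);
          bp->pixmap = None;
      }
  }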

Since nVidia are managing to pipeline to these pixmaps, it is likely
that the alloc/dealloc process is an illusion. Presumably, they are
firing the drawing operations into a permanently allocated area of
video memory and just making it look like new memory to the client
side. The GPU has to know where the final rasterisation is to go at
some point before the end of the pipeline, and probably needs that
information right at the start.
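
For what it is worth, the round trip being discussed looks roughly like
this from the client side (a sketch assuming plain Xlib, with dpy, win,
gc and the dimensions assumed to be in scope; it is not code from any
particular toolkit or driver):

  /* Per-expose pattern as seen by the client. The protocol says the
   * pixmap is created and destroyed each time, but nothing forces the
   * driver to back it with freshly allocated video memory; it can keep
   * feeding a permanently allocated area and pipeline the whole lot. */
  Pixmap tmp = XCreatePixmap(dpy, win, width, height,
                             DefaultDepth(dpy, DefaultScreen(dpy)));
  /* ... render the exposed region into tmp ... */
  XCopyArea(dpy, tmp, win, gc, 0, 0, width, height, 0, 0);
  XFreePixmap(dpy, tmp);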

Felix
