[Xorg-driver-geode] Update the Rotate operation issue
leio at gentoo.org
Wed Jun 30 22:53:48 PDT 2010
On Thu, 2010-07-01 at 10:49 +0800, Cui, Hunk wrote:
> Hi, all,
> As mentioned in the patches:
> · [Xorg-driver-geode] [PATCH 1/2] Gamma Correction for fading
> · [Xorg-driver-geode] [PATCH 2/2] Modify Ctrc rotate operation
> and Graphic memory allocate
> the rotate issue centers on graphic memory allocation:
> 1. Redistribute the graphic memory (included in memorySize).
> 2. Pre-allocate the shadow buffer, so the rotate operation works
> normally.
> 3. Classic EXA is built on the assumption that all memory usable for
> GPU acceleration is a linear range between memoryBase and memorySize.
> 4. Before this change, the rotate memory placed after memorySize
> could belong to another device. This is the key point of the error.
> 5. The rotated data has to be allocated between memoryBase and
> memorySize.
> 6. Recommended for normal operation: set the graphic memory in the
> BIOS to >= 14 MB.
> I have modified the code and updated the two patches for testing; the
> updated methods have now been tested on the Ubuntu and Fedora
> desktops. Everyone is welcome to test the code.
Hunk tells me that FreeSize (without his patch) is used by the shadow
buffer and the video overlay buffer on demand.
The problem with pre-allocating a shadow buffer (which this patch makes
the driver do) is that we then always have less memory free for the
video overlay. With limited video memory we might therefore not have
enough free memory left to create the video overlay. If we instead
allocated the rotation shadow buffer only while necessary, there would
be enough memory for the video overlay whenever the user is in normal,
non-rotated mode.
I believe such a feature was added to the code with commit cf0655edbcbd.
I think we should step back and re-investigate this whole video memory
business completely instead:
It seems lx_memory.c and GeodeLX's own video memory handling code there
(such as the GeodeAllocOffscreen() function) date back to when RandR
1.2 support was added in commit d681a844e44.
Before RandR 1.2 support, the code happily used EXA facilities to
request offscreen memory for the shadow buffer, but for some reason
Jordan decided that was bad in the RandR 1.2 based rotation code...
Unfortunately that commit is also completely void of any explanation of
why things were done this way (adding our very own memory management,
among other things).
I think we should investigate using exaOffscreenAlloc once again, and
give all the free video memory to EXA for handling, instead of
reserving it for the video overlay and the shadow buffer. This way EXA
has more memory available for offscreen pixmaps while the user is not
in rotated mode, or isn't using the video overlay (video players using
the XVideo extension).
Then when the shadow buffer or video overlay is necessary, we should be
able to simply request it on demand with exaOffscreenAlloc. It will
evict offscreen pixmaps as necessary to make room for our consecutive
video memory needs and give us the video memory we require.
But I don't know why Jordan didn't continue to use exaOffscreenAlloc
and instead wrote our own offscreen memory management; maybe he had a
good reason for RandR 1.2...
So maybe as a first step, we could investigate having the shadow buffer
and video overlays (perhaps in separate commits?) allocated on demand
via exaOffscreenAlloc (with the offscreen memory released back for
offscreen pixmap use via exaOffscreenFree when no longer needed), so we
don't need to reserve free memory for them separately at all. This way
we don't need to worry about reserving enough free memory, and EXA can
use that memory to keep more pixmaps at once, or to fit bigger pixmaps
into video memory, and thus work more effectively when we aren't in
rotated mode or aren't playing any videos over XVideo.
Some gotchas I know of:
* exaOffscreenAlloc will evict us on VT switch and suspend even if
locked; we probably need to handle that, see the exaOffscreenAlloc
doxygen documentation in the xserver's exa/exa_offscreen.c.
* exaOffscreenAlloc might return NULL if it can't give us that much
consecutive memory (see the exaOffscreenAlloc() code). That probably
means not enough video memory was given to EXA in the first place
(likely because we didn't have any more video memory to give), so our
current method would fail too, but we should handle NULL returns
gracefully regardless.
Apparently exaOffscreenAlloc was already suggested before on xorg-devel
mailing list and I wasn't following those threads :(
Some excerpts from discussion with Dave Airlie on #xorg-devel IRC about
the topic follow.
First excerpt is when talking about offscreen memory needs in the driver
itself, primarily in the context of temporary buffers necessary for
hardware acceleration code, but this actually applies even more to
shadow buffer and video overlays:
[when talking about the difference of EXA classic, mixed and driver:]
<airlied> leio: classic is the original implementation where EXA manages
memory itself, driver is where the driver allocates pixmap memory and
manages it, mixed lets the driver manage pixmap memory, but allows for
system memory pixmaps like classic and has migration
<leio> and in classic mode, can the driver ask for some chunk of
dynamically handled offscreen memory?
<leio> I guess so, we just suck in using it
<airlied> the driver just gives EXA a chunk of memory to manage
<airlied> it can't do anything more after that
<leio> like if we need a chunk of offscreen video memory to implement a
hardware acceleration in the driver that requires two passes
<leio> we'd need to use mixed or driver?
<airlied> yeah you can ask EXA for it back
<leio> in classic?
<airlied> yeah lemme find it, its been a while ;-)
<leio> or lets put it this way - what mode would be ideal for geode :)
<leio> I'll believe it and get us converted to that :P
<airlied> hmm has a geode got real VRAM?
<leio> it has dedicated video memory, that is gotten from system RAM
<leio> but blit results and the like need to end up there
<leio> So UMA, but not really
<airlied> can the gpu only access things in that static memory range?
<remi|work> leio, there's no gpu<->cpu mapping?
<leio> I think it can read from system memory, but not write to it.
<airlied> so yeah it sounds like exa classic is probably best for it
<leio> and how'd we handle our temporary buffer needs? Just not give all
the memory to EXA and leave an arbitrary static amount to us?
<airlied> yeah or ask exa for memory back using offscreenalloc
<leio> (I think that's what's done now, just we could make use of a
bigger chunk if available)
<leio> airlied: and the more I give to EXA, the less it needs to
fallback just because active pixmaps don't fit GPU offscreen memory?
(ignoring all the pixmap ping-pong happening right now anyway due to
many unaccelerated cases)
<airlied> leio: pretty much, the more you have the less evicting needs
And now talk from today about shadow buffer and video overlay:
<leio> would exaOffscreenAlloc be guaranteed to give large consecutive
chunk of memory (for on-demand allocation of shadow buffer or video
overlay memory), defragmenting and evicting pixmaps as necessary?
<airlied> leio: pretty much
<leio> airlied: hence it is not optimal to pre-reserve memory in the
driver by not giving it all under EXA, as by giving it to EXA and asking
it back on-demand, we'd have more GPU memory for pixmaps during the time
the user isn't in rotation and/or doesn't use video overlay? :P