Xvideo performance on Radeon 7500 vs Intel 915
Ken Mandelberg
km at mathcs.emory.edu
Tue Dec 19 10:32:53 PST 2006
> From: Roland Scheidegger <sroland at tungstengraphics.com>
> Michel Dänzer wrote:
>> On Thu, 2006-12-14 at 11:52 -0500, Ken Mandelberg wrote:
>>> After the failure, the driver tries
>>>
>>> xf86QueryLargestOffscreenLinear(pScreen, &max_size, 16,
>>> PRIORITY_EXTREME);
>>>
>>> and max_size comes back 1,479,808 which is too small.
>>>
>>> So I guess myth is using enough of the "OffscreenLinear" memory in the GUI
>>> to leave too little for the actual video.
>>>
>>> I've turned off myth's preview mpg imaging, and it's not apparent to me
>>> where the qt myth interface is using the resource. It doesn't show up in
>>> the alloc points I'm catching in radeon_video.c.
>> The only offscreen memory usage that I know of that can't be overridden
>> by the driver Xv code is for GLX renderbuffers and textures. Try
>> reducing the amount reserved for textures with Option "FBTexPercent".
> That'll only work with exa though, with xaa the amount reserved for
> textures can only be increased.
> I guess that means myth has a 3d frontend which will cause the driver to
> reserve backbuffer, zbuffer and memory for textures. You could of course
> use the usual workarounds for not enough memory, decrease resolution,
> decrease color depth to 16bit, decrease z-buffer to 16bit, or even don't
> allocate a back buffer... None of that is a really good solution though.
>
> Roland
Actually the "FBTexPercent" option worked! I lowered it from 50% to 20%
which left enough memory for xf86AllocateOffscreenLinear to succeed.
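
For the record, the whole change was one option in the Device section of
xorg.conf (the Identifier below is just from my own config):

    Section "Device"
        Identifier "Radeon 7500"
        Driver     "radeon"
        Option     "FBTexPercent" "20"    # lowered from 50, as described above
    EndSection
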
The allocation code (and my diagnostics) is inside a "#ifdef USE_XAA" block,
so I'm pretty sure I'm using xaa.
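
For anyone following along in the source, the path that was failing is
roughly this pattern (a simplified sketch of the XAA offscreen-memory calls,
not the literal radeon_video.c code; the function name is mine):

    #include "xf86.h"
    #include "xf86fbman.h"

    static FBLinearPtr
    AllocateLinearForXv(ScreenPtr pScreen, int size)
    {
        FBLinearPtr linear;
        int max_size;

        /* Granularity 16, no move/remove callbacks, no private data. */
        linear = xf86AllocateOffscreenLinear(pScreen, size, 16, NULL, NULL, NULL);
        if (linear)
            return linear;

        /* After the failure, see how much linear memory could be freed up at
         * all; in my case this came back at 1,479,808 bytes, well short of
         * what the video buffer needs. */
        xf86QueryLargestOffscreenLinear(pScreen, &max_size, 16, PRIORITY_EXTREME);
        ErrorF("Xv: wanted %d bytes, largest obtainable linear area is %d\n",
               size, max_size);
        return NULL;
    }
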
At any rate that problem seems solved.
The next problem is that even though I just about get enough performance to
play HD MPEGs locally, there is not enough margin left to handle the network
overhead of streaming video (say, from mythbackend).
So back to performance. I presume that the reason the Intel 915 uses
almost no CPU time in Xorg is that the client is writing directly to
system RAM shared with the graphics chip, while in the Radeon case Xorg
has to copy the data into video RAM.
Do I have that right, and if so, is there an inherent advantage for HD
video in using shared video RAM that the client can get at directly?
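
To make the question concrete, this is roughly what the client side does
with shared-memory Xv (a minimal sketch; the port, the YV12 format and the
drawable are assumptions, and real code would get them from
XvQueryAdaptors()/XvListImageFormats() and handle errors):

    #include <X11/Xlib.h>
    #include <X11/extensions/XShm.h>
    #include <X11/extensions/Xvlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    #define FOURCC_YV12 0x32315659  /* assuming the port supports planar YV12 */

    static void
    put_frame(Display *dpy, XvPortID port, Window win, GC gc, int width, int height)
    {
        XShmSegmentInfo shminfo;
        XvImage *image;

        /* Allocate the XvImage descriptor and a shared-memory segment for it. */
        image = XvShmCreateImage(dpy, port, FOURCC_YV12, NULL, width, height, &shminfo);
        shminfo.shmid = shmget(IPC_PRIVATE, image->data_size, IPC_CREAT | 0600);
        shminfo.shmaddr = image->data = shmat(shminfo.shmid, NULL, 0);
        shminfo.readOnly = False;
        XShmAttach(dpy, &shminfo);

        /* ... decoder writes the YV12 frame into image->data here ... */

        /* The question above: whether the 915 can read the frame straight out
         * of this shared system-memory segment, while on the Radeon the server
         * has to copy it into video RAM before the overlay can display it. */
        XvShmPutImage(dpy, port, win, gc, image,
                      0, 0, width, height,   /* source rectangle      */
                      0, 0, width, height,   /* destination rectangle */
                      False);
        XSync(dpy, False);
    }
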