Tearing problem at bigger overlay sizes

Christiaan van Dijk dvbmail at xs4all.nl
Fri Jan 9 13:01:12 PST 2009


Alex Deucher wrote:
> On Thu, Jan 8, 2009 at 3:08 PM, Christiaan van Dijk <dvbmail at xs4all.nl> wrote:
>   
>> Hello,
>>
>> I have an Albatron KI-690 mainboard with an RS690 integrated graphics
>> adapter running OpenSuSE 11.1. I have been trying different options for
>> the display driver. Most solutions show severe tearing during video
>> playback (diagonal line from bottom left to top right). Right now I'm
>> using the radeon driver with the following options:
>>
>> Option "AccelMethod" "EXA"
>> Option "EXAVSync" "on"
>>
>> This setup seemed to work perfectly but some problems popped up. I'm
>> using a full-HD 1920x1080p/50Hz TV for output. When playing SD 4:3
>> (scaled to 1440x1080) with MythTV the picture is perfect, when playing
>> SD 16:9 (scaled to 1920x1080) the tearing is back. The same can be
>> reproduced with Mplayer, when increasing the image size there's a point
>> at which the tearing is back. At smaller sizes there is also a black
>> bar on the right side of the image when moving the image to the right
>> side of the screen. The black bar varies with the window size (bigger
>> image, wider bar), the rest of the picture is correct.
>>
>> I suspect the waiting for the vsync is skipped when the image size is
>> increased, I've been looking in the EXA code but I'm not sure where to
>> look. Any suggestions or places to look in the code would be very welcome.
>>     
>
> The engine is still stalling for the vline.  The problem is you are
> hitting the hw guardband limits on r3xx/r4xx.  The diagonal tearing is
> due to the fact that the hw renders a quad as two triangles.  To avoid
> this we render to a single clipped triangle; this means the triangle
> is twice the width and height of the video.  Due to the limits
> mentioned above, the max triangle dimension is 2880.  Since we use a
> double sized clipped triangle, you can only use it for rendered videos
> up to 1440.  Beyond that the driver reverts back to using a quad.  You
> might be able to render the video using point sprites which would
> avoid the 1440 limit.
>
> Alex
>
>   
Hi,

thanks for the explanation, this really helps in understanding the 
problem. I did some more reading in the code to understand how the 
process works. If I understand correctly, the image is sent to the 
display in "radeon_textured_videofuncs.c"; before displaying the image, 
the system waits in "RADEONWaitForVLine" for the vertical lines 
overlapping the window to pass. In my case the xorg modeline contains 
1125 lines of which 1080 are visible, so the display should be updated 
within the remaining 45 blanking lines. If I calculate correctly, that 
is around 800us at 50Hz. So the graphics processor cannot update the 
entire screen within this time?

If this is the case, I think there are other approaches that could 
solve the problem. For example, split the screen into a top and a 
bottom half: wait with "RADEONWaitForVLine" for the scanout to finish 
the top half, then update that part with the new image; in a second 
step, wait for the bottom half to finish and update the bottom part. 
This should give a perfect picture, since only non-active display areas 
are ever updated.
This will not work for interlaced displays, although there could be 
solutions for that case as well.
When writing the display in two parts, the clipped triangle trick could 
also be used, since a 1920x960 box fits within a 2880x2880 triangle. 
The test in the code that skips the triangle trick could be changed to 
dstw + dsth > 2880 to make better use of this method.

I will study the code some more and see if I can figure out how to do 
something like this. Right now it's not completely clear to me where 
the data is coming from; I would expect it to sit in a region of memory 
that is copied by the graphics processor, right? Anyway, I will first 
look at the code in more detail.

Christiaan.



More information about the xorg-driver-ati mailing list