[PATCH] dix/dispatch: DoGetImage: Use a single buffer
keithp at keithp.com
Sat Mar 29 10:38:01 PDT 2014
Daniel Kurtz <djkurtz at chromium.org> writes:
> DoGetImage chops up an image into IMAGE_BUFSIZE strips.
> This predates commit  (a pull from XFree86 to X.Org), at which time
> IMAGE_BUFSIZE was 8k.
It's worked like this since before R1, which was before the OS layer
resized output buffers.
> Since we are now allocating from the heap, it should no longer be
> necessary, or desirable, to chop up the DoGetImage() buffer.
Can you show us performance comparisons so that we can see whether it
helps or hurts? There are additional disasters in this code path which
make the answer less than obvious from the driver level...
What I always wanted to do was get the OS layer to allocate space for
the whole image and get the DDX to dump the data right into that, rather
than allocating a temporary buffer and doing two copies. Something like
ReserveOutputSpace(ClientPtr client, int size);
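As a rough sketch of what that could look like (a hypothetical model, not
the real ConnectionOutput structures in os/io.c): ReserveOutputSpace()
would grow the client's output buffer and hand back a pointer the DDX
writes image data into directly, skipping the temporary buffer and one of
the two copies.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the per-client output buffer; the real
 * structure in the X server's os layer differs. */
typedef struct {
    char *buf;
    size_t used;
    size_t cap;
} ClientOut;

/* Sketch of ReserveOutputSpace(): ensure `size` bytes are available
 * past the current end of the output buffer, and return a pointer
 * where the caller (e.g. the DDX GetImage path) can write directly. */
static char *ReserveOutputSpace(ClientOut *c, size_t size)
{
    if (c->used + size > c->cap) {
        size_t ncap = c->cap ? c->cap : 4096;
        while (c->used + size > ncap)
            ncap *= 2;
        char *nbuf = realloc(c->buf, ncap);
        if (!nbuf)
            return NULL;
        c->buf = nbuf;
        c->cap = ncap;
    }
    char *p = c->buf + c->used;
    c->used += size;    /* space is committed; caller fills it in */
    return p;
}
```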
However, if you look at the FlushClient function, it doesn't track the
start of the output data to send to the client, so we're actually
memmove'ing the data after each write.
So, we should fix that as well: stick an offset into the output
buffer, and only memmove the remaining data when the amount left in
the buffer is small.
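The offset idea above could look something like this (again a toy model
with made-up names, not the actual FlushClient code): after each partial
write, advance a start offset instead of shifting the whole buffer, and
compact only once the unsent remainder drops below a small threshold.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical flush bookkeeping; the real FlushClient logic in the
 * os layer is more involved. */
typedef struct {
    char buf[64];
    size_t start;   /* offset of first unsent byte */
    size_t used;    /* one past the last valid byte */
} OutBuf;

enum { COMPACT_THRESHOLD = 16 };

/* After `sent` bytes have been written to the client, bump the start
 * offset; memmove the leftover data to the front only when little of
 * it remains, instead of after every write. */
static void AdvanceAfterWrite(OutBuf *o, size_t sent)
{
    o->start += sent;
    if (o->start == o->used) {
        o->start = o->used = 0;                   /* buffer drained */
    } else if (o->used - o->start <= COMPACT_THRESHOLD) {
        memmove(o->buf, o->buf + o->start, o->used - o->start);
        o->used -= o->start;
        o->start = 0;
    }
}
```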
I guess my main question is whether you have a need to improve GetImage
performance, or whether this is strictly a desire to clean up some
code that is presumably sub-optimal.
I did review the code, and it looks to me like it should work as
advertised; if you're interested in further improvements to performance,
please go look at the os layer to see how that part works too.
Reviewed-by: Keith Packard <keithp at keithp.com>
keith.packard at intel.com