[fwd] Debian Bug report logs - #414045 - BREAKS XLIB

Matthieu Herrb matthieu.herrb at laas.fr
Sat Aug 15 01:58:26 PDT 2009


Daniel Stone wrote:
> On Mon, Oct 22, 2007 at 10:01:02AM +0200, (not Julien Cristau) wrote:
>> Sorry if this is not the right way of contacting you - I'm not sure
>> where to go with this one.
>>
>> The "fix" made to XCreateImage breaks the protocol between client and
>> Xlib because the extended test makes a wrong assumption.
>>
>> XCreateImage now compares the server's bits-per-pixel with the
>> bits_per_pixel in the supplied image (for ZPixmap).
>>
>> In our case we have code using 24 bits_per_pixel for depth 24 images (3
>> bytes per pixel). Many X servers use 32 bits_per_pixel for depth 24
>> pixmaps.
>>
>> The "fix" now require that image data MUST use the same bits_per_pixel
>> as the X-server!
>>
>> We have had a lot of problems as the "fix" is being distributed as part
>> of Sun Solaris security updates!
> 
> Hi,
> So you'd like to create images with a different bpp, but equal depth
> (e.g. packed 24)?  Good catch on the memory leak, though.
> 
> Can you elaborate more on why the 'bits_per_line < min_bits_per_line'
> test is wrong?
> 

Hi,

I recently got more reports that this fix is causing problems in
real-world applications (Cadence Allegro on Solaris).
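
To make the failure mode concrete, here is a minimal sketch of the pattern
those applications seem to rely on (the values are illustrative; it assumes
a depth-24 visual on a server whose ZPixmap format for depth 24 is 32 bpp):

#include <stdio.h>
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int scr = DefaultScreen(dpy);
    Visual *vis = DefaultVisual(dpy, scr);
    unsigned int width = 64, height = 64;

    /* Client-side data packed as 3 bytes per pixel for depth 24. */
    char *data = malloc((size_t)width * height * 3);

    XImage *img = XCreateImage(dpy, vis, 24, ZPixmap, 0, data,
                               width, height,
                               8,          /* bitmap_pad */
                               width * 3); /* bytes_per_line, packed 24 */

    /* As reported above, with the patched Xlib this can fail on servers
     * whose depth-24 ZPixmap format is 32 bpp: the supplied bytes_per_line
     * (width * 3) is smaller than the minimum derived from the server's
     * bits_per_pixel (width * 4). */
    if (!img) {
        fprintf(stderr, "XCreateImage rejected the packed-24 image\n");
        free(data);
    } else {
        XDestroyImage(img); /* also frees data */
    }

    XCloseDisplay(dpy);
    return 0;
}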

I have a few questions here:

- Daniel, is creating images with a different bpp, but equal depth,
something bad (ie something which can have security issues) or just something
weird that most applications shouldn't do unless they know what they are doing?

- is the bits_per_line < min_bits_per_line test wrong only because of
the case above (since min_bits_per_line is computed from the server bpp,
it can be larger than the value actually passed), or are there other cases?
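
For concreteness, my reading of that comparison is roughly the following
(the numbers are illustrative and the computation is paraphrased, not
copied from the Xlib source):

#include <stdio.h>

int main(void)
{
    /* Illustrative setup: a 640-pixel-wide depth-24 image, client data
     * packed at 24 bpp, server ZPixmap format at 32 bpp, pad of 32. */
    int width = 640;
    int server_bpp = 32;   /* what the server advertises for depth 24 */
    int client_bpp = 24;   /* how the application packed its data */
    int bitmap_pad = 32;

    /* The minimum is derived from the server's bpp, rounded up to the
     * pad ... */
    int min_bits_per_line =
        ((server_bpp * width + bitmap_pad - 1) / bitmap_pad) * bitmap_pad;

    /* ... but the caller's value reflects its own packing. */
    int bits_per_line = client_bpp * width;

    printf("min_bits_per_line = %d, supplied = %d\n",
           min_bits_per_line, bits_per_line);

    /* 15360 < 20480, so the test rejects a legitimately packed image
     * even though nothing overflows. */
    if (bits_per_line < min_bits_per_line)
        printf("rejected\n");

    return 0;
}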

-- 
Matthieu Herrb

