Was |int| always thought to be 32bit ?

Mike A. Harris mharris at www.linux.org.uk
Mon Apr 25 23:40:02 PDT 2005


Roland Mainz wrote:
> Egbert Eich wrote:
> 
>> > While looking at the X11 headers I see that some structure members and
>> > function arguments and return values use |int| - was that datatype
>> > thought to be always 32bit or is there any platform which used 16bit for
>> > |int| to work with X11 code (client side+Xserver), too (parts of the
>> > code use |short| to explicitly say "... it's 16bit..." but I cannot find
>> > a clear statement what |int| should be...) ?
>>
>>Apparently at one time there used to be X implementations for DOS
>>where int was 16bit unless you used a DOS extender and ran in protected
>>mode. I'm not sure if any Xserver implementation was real mode, though.
>>The use of short as 'it's 16bit' is rather poor. Fixed sizes should
>>be taken from system headers where available or defined in an X header.
> 
> 
> AFAIK all C implementations take |short| as 16bit. Originally (before
> 64bit machines were "invented") |long| was 32bit and |int| was a
> datatype which picked the "natural" representation of an integer for
> this machine type (or better: the representation which was the fastest
> or "best" for register/memory access) ...

"all" is pushing it a bit.  "All C implementations that we support
and care about" might be more realistic.  I don't think we should
assume that the standard integer types have a specific bitwidth
however unless using the stdint.h typedefs, or using our own
wrappers.
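
Something along these lines is what I have in mind -- just a throwaway
sketch, not code from the tree, and the names are made up:

    /* Hypothetical sketch: pin the widths down explicitly instead of
     * assuming what 'short' or 'int' happen to be on a given platform.
     * Uses C99 <stdint.h>; the compile-time checks work in C89 too. */
    #include <stdint.h>

    typedef uint16_t xCARD16;   /* always 16 bits, whatever 'short' is */
    typedef uint32_t xCARD32;   /* always 32 bits, whatever 'int'/'long' are */

    /* Classic compile-time assertion: the array size goes negative if
     * the condition is false, so the build fails instead of silently
     * miscompiling on an odd platform. */
    typedef char assert_card16_is_2_bytes[(sizeof(xCARD16) == 2) ? 1 : -1];
    typedef char assert_card32_is_4_bytes[(sizeof(xCARD32) == 4) ? 1 : -1];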

Someone is likely to come along complaining they can't build a
working X11 on a Cray or some other obscure platform.
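
As a quick sanity check, something like this (just a throwaway test
program) makes the per-platform differences obvious:

    #include <stdio.h>

    int main(void)
    {
        /* These can legitimately differ between platforms: the C standard
         * only guarantees minimum ranges (short and int at least 16 bits,
         * long at least 32 bits). */
        printf("short: %u bytes\n", (unsigned) sizeof(short));
        printf("int:   %u bytes\n", (unsigned) sizeof(int));
        printf("long:  %u bytes\n", (unsigned) sizeof(long));
        return 0;
    }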

