Size of int, short, long, long long?
On my computer long is 64 bits in Linux.
Windows is the only major platform that uses 32-bit longs in 64-bit mode, precisely because false assumptions about the size of long are widespread in existing code. That made it difficult to change the type's width on Windows, so on 64-bit x86 processors long is still 32 bits there, to keep all sorts of existing code and definitions compatible.
The standard is by definition correct, and the way you interpret it is correct. The sizes of some types may vary. The standard only states the minimum width of these types. Usually (but not necessarily) the type int has the same width as the natural word size of the target processor.

This goes back to the old days when performance was a very important aspect. So whenever you used an int, the compiler could choose the fastest type that still holds at least 16 bits.

Of course, this approach is not very good today. It's just something we have to live with, and yes, it can break code. So if you want to write fully portable code, use the types defined in stdint.h, like int32_t and such, instead. Or at the very least, never use int if you expect the variable to hold a number outside the range [−32,767; 32,767].
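For instance, here is a minimal sketch of using the fixed-width types from stdint.h instead of relying on the width of int (the variable names are just illustrative):

#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t population = 2000000;   /* exactly 32 bits, wherever it compiles */
    int_least32_t total = 0;        /* at least 32 bits, if an exact width isn't needed */

    total += population;

    /* PRId32 expands to the correct printf conversion for int32_t */
    printf("population = %" PRId32 "\n", population);
    return 0;
}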
I have always heard that long is always 32 bits and that it is strictly equivalent to int32_t, which looks wrong.
I'm curious where you heard that. It's absolutely wrong.
There are plenty of systems (mostly either 16-bit or 32-bit systems or 64-bit Windows, I think) where long is 32 bits, but there are also plenty of systems where long is 64 bits.

(And even if long is 32 bits, it may not be the same type as int32_t. For example, if int and long are both 32 bits, they're still distinct types, and int32_t is probably defined as one or the other.)
$ cat c.c
#include <stdio.h>
#include <limits.h>
int main(void) {
printf("long is %zu bits\n", sizeof (long) * CHAR_BIT);
}
$ gcc -m32 c.c -o c && ./c
long is 32 bits
$ gcc -m64 c.c -o c && ./c
long is 64 bits
$
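To illustrate the point above about distinct types, here is a small C11 sketch (it relies on _Generic, so it needs a C11 compiler) that reports whether int32_t was defined in terms of int or long on a given implementation; the answer is implementation-specific:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* _Generic dispatches on the type itself, not its width, so it can
       tell int and long apart even when both happen to be 32 bits. */
    const char *underlying = _Generic((int32_t)0,
                                      int:     "int",
                                      long:    "long",
                                      default: "some other type");
    printf("int32_t is defined as %s here\n", underlying);
    return 0;
}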
The requirements for the sizes of the integer types are almost as you stated in your question (you had the wrong size for short). The standard actually states its requirements in terms of ranges, not sizes, but that, along with the requirement for a binary representation, implies minimum sizes in bits. The requirements are:
char, unsigned char, signed char: 8 bits
short, unsigned short: 16 bits
int, unsigned int: 16 bits
long, unsigned long: 32 bits
long long, unsigned long long: 64 bits
Each signed type has a range that includes the range of the previous type in the list. There are no upper bounds.
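If you want to see those guarantees as code, here is a rough C11 sketch using limits.h. On any conforming implementation these assertions always pass, since they merely restate the required minimum ranges:

#include <limits.h>

/* Minimum ranges the standard guarantees, expressed as compile-time checks.
   A conforming compiler can never fail these; they just restate the list above. */
_Static_assert(CHAR_BIT >= 8,                      "char has at least 8 bits");
_Static_assert(SHRT_MAX >= 32767,                  "short reaches at least 32767");
_Static_assert(INT_MAX  >= 32767,                  "int reaches at least 32767");
_Static_assert(LONG_MAX >= 2147483647L,            "long reaches at least 2147483647");
_Static_assert(LLONG_MAX >= 9223372036854775807LL, "long long reaches at least 2^63 - 1");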
It's common for int and long to be 32 and 64 bits, respectively, particularly on non-Windows 64-bit systems. (POSIX requires int to be at least 32 bits.) long long is exactly 64 bits on every system I've seen, though it can be wider.
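If you want to check your own platform, the earlier program generalizes to all of the types; a minimal sketch:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* Print the width of each standard integer type on the system this runs on. */
    printf("short     : %zu bits\n", sizeof (short)     * CHAR_BIT);
    printf("int       : %zu bits\n", sizeof (int)       * CHAR_BIT);
    printf("long      : %zu bits\n", sizeof (long)      * CHAR_BIT);
    printf("long long : %zu bits\n", sizeof (long long) * CHAR_BIT);
    return 0;
}

On a typical x86-64 Linux system (the LP64 model) this prints 16, 32, 64, and 64 bits; 64-bit Windows (LLP64) reports 32 bits for long instead.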