unsigned int vs. size_t
In short, size_t is never negative, and it maximizes performance because it's typedef'd to be the unsigned integer type that's big enough -- but not too big -- to represent the size of the largest possible object on the target platform.
Sizes should never be negative, and indeed size_t is an unsigned type. Also, because size_t is unsigned, you can store numbers that are roughly twice as big as in the corresponding signed type: the bit that would otherwise serve as the sign bit carries magnitude, just like all the other bits in the unsigned integer. Gaining that one extra bit multiplies the range of numbers you can represent by a factor of about two.
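As a quick illustration (a minimal sketch; the exact values depend on how wide your platform's int is), the limits from <limits.h> make the "one extra bit" point concrete:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a typical implementation with 32-bit int, INT_MAX is
           2147483647 while UINT_MAX is 4294967295 -- roughly twice as
           large, because the would-be sign bit also carries magnitude. */
        printf("INT_MAX  = %d\n", INT_MAX);
        printf("UINT_MAX = %u\n", UINT_MAX);
        return 0;
    }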
So, you ask, why not just use an unsigned int? It may not be able to hold big enough numbers. In an implementation where unsigned int is 16 bits, the biggest number it can represent is 65535. Some processors, such as the IP16L32, can copy objects larger than 65535 bytes.
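To see why the width matters, here is a sketch of a copy routine (copy_bytes_bad and copy_bytes_ok are hypothetical names, not standard functions): on a platform where unsigned int is 16 bits, asking the first one to copy a 70,000-byte object would silently reduce the count modulo 65536, while size_t is wide enough by definition.

    #include <stddef.h>

    /* Hypothetical prototypes, for illustration only. */
    void copy_bytes_bad(void *dst, const void *src, unsigned int n); /* a 16-bit n can't hold 70000 */
    void copy_bytes_ok(void *dst, const void *src, size_t n);        /* size_t fits any object size */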
So, you ask, why not use an unsigned long int? It exacts a performance toll on some platforms. Standard C requires that a long occupy at least 32 bits. An IP16L32 platform implements each 32-bit long as a pair of 16-bit words. Almost all 32-bit operations on these platforms require two instructions, if not more, because they work with the 32 bits in two 16-bit chunks. For example, moving a 32-bit long usually requires two machine instructions -- one to move each 16-bit chunk.
Using size_t avoids this performance toll. According to this fantastic article, "Type size_t is a typedef that's an alias for some unsigned integer type, typically unsigned int or unsigned long, but possibly even unsigned long long. Each Standard C implementation is supposed to choose the unsigned integer that's big enough--but no bigger than needed--to represent the size of the largest possible object on the target platform."
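For a sense of what that looks like, here is an illustrative sketch of the kind of typedef an implementation might pick (the real definition lives in the implementation's own headers such as <stddef.h>, and the EXAMPLE_* macros below are made up purely for the illustration):

    /* Illustrative only: exactly one such typedef is chosen per platform. */
    #if defined(EXAMPLE_ILP32)
    typedef unsigned int size_t;        /* e.g. a 32-bit platform whose unsigned int is wide enough */
    #elif defined(EXAMPLE_LP64)
    typedef unsigned long size_t;       /* e.g. a typical 64-bit LP64 platform */
    #else
    typedef unsigned long long size_t;  /* possible, though less common */
    #endif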
Classic C (the early dialect of C described by Brian Kernighan and Dennis Ritchie in The C Programming Language, Prentice-Hall, 1978) didn't provide size_t. The C standards committee introduced size_t to eliminate a portability problem, explained in detail at embedded.com (with a very good example).
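The canonical example of that portability problem is memcpy: its third parameter is a byte count, and the Standard declares it with size_t so the same prototype can describe the largest object on any platform. A minimal use looks like this (the declaration shown in the comment is the C99 one from <string.h>):

    #include <string.h>
    /* C99 declares: void *memcpy(void * restrict s1,
                                   const void * restrict s2, size_t n); */

    int main(void)
    {
        char src[] = "size_t example";
        char dst[sizeof src];

        /* sizeof src has type size_t, so it matches memcpy's third
           parameter exactly, with no truncation on any conforming platform. */
        memcpy(dst, src, sizeof src);
        return 0;
    }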
The size_t type is the unsigned integer type that is the result of the sizeof operator (and the offsetof operator), so it is guaranteed to be big enough to contain the size of the biggest object your system can handle (e.g., a static array of 8 GB).
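A minimal sketch of that in practice: anything that holds a sizeof result, or indexes through an object, is naturally a size_t (the samples array below is just for the example):

    #include <stdio.h>

    int main(void)
    {
        double samples[100];

        /* sizeof yields a size_t, so size_t is the right type for both
           the byte count and the loop index. */
        size_t total_bytes = sizeof samples;
        size_t count = sizeof samples / sizeof samples[0];

        for (size_t i = 0; i < count; ++i)
            samples[i] = 0.0;

        printf("%zu elements, %zu bytes\n", count, total_bytes);
        return 0;
    }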
The size_t type may be bigger than, equal to, or smaller than an unsigned int, and your compiler might make assumptions about it for optimization.
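One practical consequence (assuming a C99-or-later implementation, which provides SIZE_MAX in <stdint.h> and the %zu printf conversion) is that you should not assume size_t and unsigned int share a range:

    #include <limits.h>   /* UINT_MAX */
    #include <stdint.h>   /* SIZE_MAX (C99) */
    #include <stdio.h>

    int main(void)
    {
        /* On many 64-bit platforms SIZE_MAX is far larger than UINT_MAX;
           on some small targets it can be the other way around, so don't
           assume the two types interchange. */
        printf("UINT_MAX = %u\n", UINT_MAX);
        printf("SIZE_MAX = %zu\n", (size_t)SIZE_MAX);
        return 0;
    }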
You may find more precise information in the C99 standard, section 7.17, a draft of which is available on the Internet in PDF format, or in the C11 standard, section 7.19, also available as a PDF draft.