What's the difference between "int" and "int_fast16_t"?
int is a "most efficient type" in speed/size, but that is not specified by the C spec. It must be 16 or more bits.
int_fast16_t is the most efficient type in speed with at least the range of a 16-bit int.
Example: A given platform may have decided that int should be 32-bit for many reasons, not only speed. The same system may find a different type is fastest for 16-bit integers.
Example: On a 64-bit machine, where one would expect int to be 64-bit, a compiler may use a mode with 32-bit int compilation for compatibility. In this mode, int_fast16_t could be 64-bit, as that is natively the fastest width: it avoids alignment issues, etc.
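A quick way to see what a given implementation actually chose is to print the sizes and limits. This is only a sketch whose output varies by compiler and ABI; on x86-64 with glibc, for instance, int_fast16_t is commonly 8 bytes while int is 4:

#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* What this platform chose; results differ across compilers/ABIs. */
    printf("CHAR_BIT             = %d\n", CHAR_BIT);
    printf("sizeof(int)          = %zu bytes\n", sizeof(int));
    printf("sizeof(int_fast16_t) = %zu bytes\n", sizeof(int_fast16_t));
    printf("INT_MAX              = %d\n", INT_MAX);
    printf("INT_FAST16_MAX       = %jd\n", (intmax_t)INT_FAST16_MAX);
    return 0;
}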
int_fast16_t is guaranteed to be the fastest signed integer type with a width of at least 16 bits. int has no guarantee of its size except that:
sizeof(char) == 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long),
and that it can hold the range of -32767 to +32767.
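Those guarantees can be spelled out as compile-time checks. A minimal sketch using C11's _Static_assert; every conforming implementation should accept it (the sizeof ordering, strictly speaking, follows from the required nested ranges and holds on every real-world ABI):

#include <limits.h>

/* The only size/range guarantees stated above. */
_Static_assert(sizeof(char) == 1, "sizeof(char) is always 1");
_Static_assert(sizeof(char) <= sizeof(short), "char <= short");
_Static_assert(sizeof(short) <= sizeof(int), "short <= int");
_Static_assert(sizeof(int) <= sizeof(long), "int <= long");
_Static_assert(INT_MIN <= -32767 && INT_MAX >= 32767,
               "int must cover at least -32767..+32767");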
(7.20.1.3p2) "The typedef name int_fastN_t designates the fastest signed integer type with a width of at least N. The typedef name uint_fastN_t designates the fastest unsigned integer type with a width of at least N."
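Because the underlying type varies between implementations, code that prints these types should use the matching format macros from <inttypes.h> rather than assuming %d or %hd will fit. A small sketch:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int_fast16_t x = 12345;    /* at least 16 bits, possibly wider */
    uint_fast16_t y = 54321u;

    /* PRIdFAST16 / PRIuFAST16 expand to the right length modifier
       for whatever type the implementation picked. */
    printf("x = %" PRIdFAST16 "\n", x);
    printf("y = %" PRIuFAST16 "\n", y);
    return 0;
}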
As I understand it, the C specification says that type int is supposed to be the most efficient type on the target platform that contains at least 16 bits.
Here's what the standard actually says about int (N1570 draft, section 6.2.5, paragraph 5):
A "plain" int object has the natural size suggested by the architecture of the execution environment (large enough to contain any value in the range INT_MIN to INT_MAX as defined in the header <limits.h>).
The reference to INT_MIN and INT_MAX is perhaps slightly misleading; those values are chosen based on the characteristics of type int, not the other way around.
And the phrase "the natural size" is also slightly misleading. Depending on the target architecture, there may not be just one "natural" size for an integer type.
Elsewhere, the standard says that INT_MIN must be at most -32767, and INT_MAX must be at least +32767, which implies that int is at least 16 bits.
Here's what the standard says about int_fast16_t (7.20.1.3):
Each of the following types designates an integer type that is usually fastest to operate with among all integer types that have at least the specified width.
with a footnote:
The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
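That footnote matters in practice: whether the "fast" type actually wins depends on the workload. A rough way to compare is sketched below; the loop count, the timing via clock(), and the volatile sink are all illustrative assumptions, not a rigorous benchmark, and a good optimizer may still transform the loops (compile with -O2 and interpret with care):

#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define N 100000000

/* Sum 0..N-1 in a given unsigned type (unsigned, so wraparound is
   well defined); the volatile sink keeps the result observable. */
#define TIME_SUM(T, label)                                   \
    do {                                                     \
        clock_t t0 = clock();                                \
        T acc = 0;                                           \
        for (long i = 0; i < N; i++)                         \
            acc = (T)(acc + (T)i);                           \
        volatile T sink = acc;                               \
        (void)sink;                                          \
        printf("%-14s %.3f s\n", label,                      \
               (double)(clock() - t0) / CLOCKS_PER_SEC);     \
    } while (0)

int main(void)
{
    TIME_SUM(uint16_t, "uint16_t");
    TIME_SUM(uint_fast16_t, "uint_fast16_t");
    TIME_SUM(unsigned, "unsigned");
    return 0;
}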
The requirements for int and int_fast16_t are similar but not identical -- and they're similarly vague.
In practice, the size of int is often chosen based on criteria other than "the natural size" -- or that phrase is interpreted for convenience. Often the size of int for a new architecture is chosen to match the size for an existing architecture, to minimize the difficulty of porting code. And there's a fairly strong motivation to make int no wider than 32 bits, so that the types char, short, and int can cover sizes of 8, 16, and 32 bits. On 64-bit systems, particularly x86-64, the "natural" size is probably 64 bits, but most C compilers make int 32 bits rather than 64 (and some compilers even make long just 32 bits).
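These conventions are the familiar data models (ILP32 on 32-bit systems, LP64 on most 64-bit Unix-like systems, LLP64 on 64-bit Windows), and they're easy to check with a sketch like this, whose output depends entirely on the compiler and ABI:

#include <stdio.h>

int main(void)
{
    /* Typical results: ILP32 -> 1/2/4/4/8, LP64 -> 1/2/4/8/8,
       LLP64 -> 1/2/4/4/8. */
    printf("char:      %zu\n", sizeof(char));
    printf("short:     %zu\n", sizeof(short));
    printf("int:       %zu\n", sizeof(int));
    printf("long:      %zu\n", sizeof(long));
    printf("long long: %zu\n", sizeof(long long));
    return 0;
}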
The choice of the underlying type for int_fast16_t is, I suspect, less dependent on such considerations, since any code that uses it is explicitly asking for a fast 16-bit signed integer type. A lot of existing code makes assumptions about the characteristics of int that go beyond what the standard guarantees, and compiler developers have to cater to such code if they want their compilers to be used.