int_least64_t vs int_fast64_t vs int64_t
On your platform, they're all names for the same underlying data type. On other platforms, they aren't.
int64_t is required to be EXACTLY 64 bits. On architectures with (for example) a 9-bit byte, it won't be available at all.
int_least64_t is the smallest data type with at least 64 bits. If int64_t is available, it will be used. But on (for example) a machine with a 9-bit byte, this could be 72 bits.
int_fast64_t is the data type with at least 64 bits and the best arithmetic performance. It's there mainly for consistency with int_fast8_t and int_fast16_t, which on many machines will be 32 bits, not 8 or 16. In a few more years, there might be an architecture where 128-bit math is faster than 64-bit, but I don't think any exists today.
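To see which widths your own implementation picked, you can just print them. This is a minimal sketch using only standard <stdint.h> types; the exact numbers it prints are platform-specific, and sizeof counts any padding bits too (common platforms have none in these types):

```c
#include <limits.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Width in bits = bytes * bits-per-byte. */
    printf("int_fast8_t:   %zu bits\n", sizeof(int_fast8_t)   * CHAR_BIT);
    printf("int_fast16_t:  %zu bits\n", sizeof(int_fast16_t)  * CHAR_BIT);
    printf("int_fast32_t:  %zu bits\n", sizeof(int_fast32_t)  * CHAR_BIT);
    printf("int_fast64_t:  %zu bits\n", sizeof(int_fast64_t)  * CHAR_BIT);
    printf("int_least64_t: %zu bits\n", sizeof(int_least64_t) * CHAR_BIT);
    return 0;
}
```

On one common 64-bit Linux/glibc setup, for instance, int_fast8_t comes out as 8 bits while int_fast16_t and int_fast32_t both come out as 64; other implementations make different choices.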
If you're porting an algorithm, you probably want to be using int_fast32_t, since it will hold any value your old 32-bit code can handle but will be 64-bit if that's faster. If you're converting pointers to integers (why?), use intptr_t.
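For the pointer case, the guarantee intptr_t gives you is the round trip pointer → intptr_t → pointer. A minimal sketch (note that intptr_t is itself an optional type in the standard, though it's available nearly everywhere):

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int value = 42;
    int *p = &value;

    /* intptr_t is wide enough to hold a void* and convert it back
       to an equal pointer; that round trip is what's guaranteed. */
    intptr_t as_int = (intptr_t)(void *)p;
    int *back = (int *)(void *)as_int;

    printf("round trip ok: %d\n", *back == 42);
    return 0;
}
```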
int64_t has exactly 64 bits. It might not be defined for all platforms.
int_least64_t is the smallest type with at least 64 bits.
int_fast64_t is the type that's fastest to process, with at least 64 bits.
On a 32- or 64-bit processor, they will all be defined, and will all have 64 bits. On a hypothetical 73-bit processor, int64_t won't be defined (since there is no type with exactly 64 bits), and the others will have 73 bits.
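If you need to know at compile time whether the exact-width type exists, test its limit macro: <stdint.h> defines INT64_MAX only when int64_t is actually provided. A minimal sketch, with wide_int as a hypothetical name chosen just for illustration:

```c
#include <stdint.h>
#include <stdio.h>

#ifdef INT64_MAX
/* int64_t exists: exactly 64 bits, two's complement, no padding. */
typedef int64_t wide_int;
#else
/* Fall back to the least-width type, which is always provided. */
typedef int_least64_t wide_int;
#endif

int main(void) {
    wide_int x = 1;
    /* Shifting by 62 stays in range for any type of at least 64 bits. */
    x <<= 62;
    printf("%lld\n", (long long)x);
    return 0;
}
```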