Is there any way to compute the width of an integer type at compile-time?
There is a function-like macro that can determine the number of value bits of an integer type, but only if you already know that type's maximum value. Whether or not you'll get a compile-time constant depends on your compiler, but I would guess that in most cases the answer is yes.
Credit to Hallvard B. Furuseth for his IMAX_BITS() function-like macro, which he posted in reply to a question on comp.lang.c:
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
+ (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
IMAX_BITS(INT_MAX) computes the number of bits in an int, and IMAX_BITS((unsigned_type)-1) computes the number of bits in an unsigned_type. Until someone implements 4-gigabyte integers, anyway. :-)
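For instance, here is a minimal, self-contained sketch of how the macro might be used (the commented results assume a typical platform with a 32-bit int; the casts to int are only there to match the %d format specifier):

#include <limits.h>
#include <stdio.h>

#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
 + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

int main(void)
{
    /* Each IMAX_BITS(...) below is an integer constant expression */
    printf("value bits in int:          %d\n", (int)IMAX_BITS(INT_MAX));  /* 31 */
    printf("bits in unsigned int:       %d\n", (int)IMAX_BITS(UINT_MAX)); /* 32 */
    printf("bits in unsigned long long: %d\n",
           (int)IMAX_BITS((unsigned long long)-1));            /* typically 64 */
    return 0;
}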
And a second variant:
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
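Because both variants expand to integer constant expressions built purely from arithmetic on their argument, they can even be used in preprocessor conditionals. A sketch (the branches are just illustrative); note that casts are not valid in #if expressions, so there you must use the *_MAX macros rather than the (unsigned type)-1 trick:

#include <limits.h>

#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

#if IMAX_BITS(ULONG_MAX) >= 64
/* unsigned long is at least 64 bits wide on this platform */
#else
/* e.g. fall back to a 32-bit code path */
#endif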
**Remember that although the width of an unsigned integer type is equal to the number of value bits, the width of a signed integer type is one greater (§6.2.6.2/6).** This is of special importance because in my original comment to your question I incorrectly stated that the IMAX_BITS() macro calculates the width, when it actually calculates the number of value bits. Sorry about that!
So, for example, IMAX_BITS(INT64_MAX) yields a compile-time constant of 63. However, int64_t is a signed type, so you must add 1 to account for the sign bit if you want the actual width of an int64_t, which is of course 64.
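On C11 and later you can verify this at compile time; a small sketch, assuming one of the IMAX_BITS() definitions above is in scope:

#include <stdint.h>

_Static_assert(IMAX_BITS(INT64_MAX) == 63, "int64_t has 63 value bits");
_Static_assert(IMAX_BITS(INT64_MAX) + 1 == 64, "plus 1 sign bit for a width of 64");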
In a separate comp.lang.c discussion, a user named blargg gives a breakdown of how the macro works:
Re: using pre-processor to count bits in integer types...
Note that the macro only works with 2^n-1 values (i.e. all 1s in binary), as would be expected with any MAX value. Also note that while it is easy to get a compile-time constant for the maximum value of an unsigned integer type (IMAX_BITS((unsigned type)-1)), at the time of this writing I don't know of any way to do the same for a signed integer type without invoking implementation-defined behavior. If I ever find out, I'll answer my own related SO question, here:
C question: off_t (and other signed integer types) minimum and maximum values - Stack Overflow
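As an aside, the (unsigned type)-1 idiom is also handy for unsigned types that have no *_MAX macro at hand, for example size_t (whose SIZE_MAX only arrived with C99). A sketch, again assuming IMAX_BITS() is defined as above:

#include <stddef.h>

#define SIZE_T_BITS IMAX_BITS((size_t)-1) /* width of size_t */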
Compare the macros from <limits.h> against known max values for specific integer widths:
#include <limits.h>
#if UINT_MAX == 0xFFFF
#define INT_WIDTH 16
#elif UINT_MAX == 0xFFFFFF
#define INT_WIDTH 24
#elif UINT_MAX == 0xFFFFFFFF
#define INT_WIDTH 32
/* ... and so on for other widths ... */
#else
#error "unsupported integer width"
#endif
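If you want to guard against a misconfigured chain, one possible sanity check is to compare the result against CHAR_BIT * sizeof(int); this sketch assumes int has no padding bits (true on virtually all modern platforms) and a C11 compiler for _Static_assert:

_Static_assert(INT_WIDTH == CHAR_BIT * sizeof(int),
               "int has padding bits, or the #if chain above is wrong");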