Is there any way to compute the width of an integer type at compile-time?

There is a function-like macro that can determine the number of value bits of an integer type, but only if you already know that type's maximum value. Whether or not you get a compile-time constant depends on what you pass in: the macro expands to nothing but integer arithmetic on its argument, so passing an integer constant expression (such as INT_MAX) yields an integer constant expression in return.

Credit to Hallvard B. Furuseth for his IMAX_BITS() function-like macro, which he posted in reply to a question on comp.lang.c:

```c
/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))
```

IMAX_BITS(INT_MAX) computes the number of bits in an int, and IMAX_BITS((unsigned_type)-1) computes the number of bits in an unsigned_type. Until someone implements 4-gigabyte integers, anyway. :-)
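Because the result is an integer constant expression, it can go anywhere the language demands a compile-time constant. Here is a minimal sketch of that, assuming a C11 compiler for _Static_assert; the UCHAR_WIDTH/UINT_WIDTH names are my own illustration, not part of the original posting:

```c
#include <limits.h>

/* Number of bits in inttype_MAX, or in any (1<<b)-1 where 0 <= b < 3E+10 */
#define IMAX_BITS(m) ((m) /((m)%0x3fffffffL+1) /0x3fffffffL %0x3fffffffL *30 \
                  + (m)%0x3fffffffL /((m)%31+1)/31%31*5 + 4-12/((m)%31+3))

/* Converting -1 to an unsigned type yields that type's maximum value,
   so these give the value bits (== width) of the unsigned types.      */
#define UCHAR_WIDTH IMAX_BITS((unsigned char)-1)
#define UINT_WIDTH  IMAX_BITS((unsigned int)-1)

/* Both expand to integer constant expressions, so they work in
   contexts that require compile-time constants.                */
_Static_assert(UCHAR_WIDTH == CHAR_BIT, "unsigned char is CHAR_BIT bits wide");

static unsigned char one_bit_per_flag[UINT_WIDTH]; /* constant array size */
```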

And credit to Eric Sosman for this [alternate version](http://groups.google.com/group/comp.lang.c/msg/e998153ef07ff04b?dmode=source) that should work for widths of less than 2040 bits:
**(EDIT 1/3/2011 11:30PM EST: It turns out this version was also written by Hallvard B. Furuseth)**

```c
/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
```

**Remember that although the width of an unsigned integer type is equal to the number of value bits, the width of a signed integer type is one greater (§6.2.6.2/6).** This is of special importance because in my original comment to your question I had incorrectly stated that the IMAX_BITS() macro calculates the width when it actually calculates the number of value bits. Sorry about that!

So, for example, IMAX_BITS(INT64_MAX) yields a compile-time constant of 63. However, int64_t is a signed type, so you must add 1 for the sign bit to get the actual width of an int64_t, which is of course 64.
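As a concrete sketch of that adjustment (my own illustration, again assuming C11 _Static_assert, using the 2040-bit version of the macro):

```c
#include <stdint.h>

/* Number of bits in inttype_MAX, or in any (1<<k)-1 where 0 <= k < 2040 */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

/* IMAX_BITS() counts value bits only... */
_Static_assert(IMAX_BITS(INT64_MAX) == 63, "int64_t has 63 value bits");

/* ...so add 1 for the sign bit to get the width of the signed type. */
_Static_assert(IMAX_BITS(INT64_MAX) + 1 == 64, "int64_t is 64 bits wide");
```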

In a separate comp.lang.c discussion, a user named blargg gives a breakdown of how the macro works:
Re: using pre-processor to count bits in integer types…

Note that the macro only works with 2^n-1 values (i.e. all 1s in binary), as would be expected with any MAX value; the sketch below illustrates this. Also note that while it is easy to get a compile-time constant for the maximum value of an unsigned integer type (IMAX_BITS((unsigned type)-1)), at the time of this writing I don't know of any way to do the same thing for a signed integer type without invoking implementation-defined behavior. If I ever find out, I'll answer my own related SO question, here:
C question: off_t (and other signed integer types) minimum and maximum values – Stack Overflow
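To illustrate the all-ones restriction mentioned above, here is a small compile-time check (my own sketch, assuming C11 _Static_assert) using the 2040-bit version of the macro:

```c
#include <stdint.h>

#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))

/* Correct results for all-ones arguments of the form (1<<k)-1 ...      */
_Static_assert(IMAX_BITS(1) == 1,           "2^1  - 1 -> 1 bit");
_Static_assert(IMAX_BITS(0xFF) == 8,        "2^8  - 1 -> 8 bits");
_Static_assert(IMAX_BITS(0xFFFF) == 16,     "2^16 - 1 -> 16 bits");
_Static_assert(IMAX_BITS(UINT64_MAX) == 64, "2^64 - 1 -> 64 bits");

/* ...but an argument that is not all ones, such as 1000, compiles fine
   and silently produces a meaningless value (7 here) instead of an error. */
_Static_assert(IMAX_BITS(1000) == 7, "not the bit count of 1000");
```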
