Why is the result of sizeof implementation defined? [closed]

The result of sizeof is implementation-defined because the sizes of the various basic types are implementation-defined. The only guarantees we have on the sizes of the types in C++ are that

sizeof(char) == 1 and sizeof(char) <= sizeof(short) <= sizeof(int) <=
sizeof(long) <= sizeof(long long)
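
As a quick illustration, these relations can be checked at compile time (a minimal sketch; the assertions hold on every conforming implementation, while the printed values are whatever your compiler chooses):

#include <cstdio>

int main() {
    // Guaranteed on every conforming implementation:
    static_assert(sizeof(char) == 1, "sizeof(char) is always 1");
    static_assert(sizeof(char) <= sizeof(short), "short is at least as big as char");
    static_assert(sizeof(short) <= sizeof(int), "int is at least as big as short");
    static_assert(sizeof(int) <= sizeof(long), "long is at least as big as int");
    static_assert(sizeof(long) <= sizeof(long long), "long long is at least as big as long");

    // Not guaranteed: these concrete values differ between implementations.
    std::printf("short: %zu, int: %zu, long: %zu, long long: %zu\n",
                sizeof(short), sizeof(int), sizeof(long), sizeof(long long));
}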

And each type has minimum magnitudes it must support, per C11 [Annex E (informative) Implementation limits]/1:

[…]The minimum magnitudes shown shall be replaced by implementation-defined magnitudes with the same sign.[…]

#define CHAR_BIT    8
#define CHAR_MAX    UCHAR_MAX or SCHAR_MAX
#define CHAR_MIN    0 or SCHAR_MIN
#define INT_MAX     +32767
#define INT_MIN     -32767
#define LONG_MAX    +2147483647
#define LONG_MIN    -2147483647
#define LLONG_MAX   +9223372036854775807
#define LLONG_MIN   -9223372036854775807
#define MB_LEN_MAX  1
#define SCHAR_MAX   +127
#define SCHAR_MIN   -127
#define SHRT_MAX    +32767
#define SHRT_MIN    -32767
#define UCHAR_MAX   255
#define USHRT_MAX   65535
#define UINT_MAX    65535
#define ULONG_MAX   4294967295
#define ULLONG_MAX  18446744073709551615

So per the standard, an int has to be able to store any number that fits in 16 bits, but it can be bigger, and on most of today’s systems it is 32 bits.
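
For example, a short program (a sketch using only the standard <climits> macros) shows what your implementation actually provides compared to the minimum magnitudes above:

#include <climits>
#include <cstdio>

int main() {
    // The standard only requires INT_MAX >= 32767 (a 16-bit range);
    // on most of today's systems this prints 2147483647 (a 32-bit range).
    std::printf("INT_MAX  = %d\n", INT_MAX);
    std::printf("INT_MIN  = %d\n", INT_MIN);
    std::printf("CHAR_BIT = %d\n", CHAR_BIT);
}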

What I’m getting at here is that n * CHAR_BIT is a fixed formula. The formula itself can’t change between implementations. Yes, an int may be 4 bytes or 8 bytes. I get that. But across all implementations, the value must be n * CHAR_BIT.

You are correct, but n is defined per C99 §6.2.6.1 as

where n is the *size* of an object of that type

(emphasis mine)

So the formula may be fixed, but n is not fixed, and different implementations on the same system can use a different value of n.
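
To make that concrete, here is a minimal sketch: the formula n * CHAR_BIT is the same everywhere, but the n each implementation plugs into it is not:

#include <climits>
#include <cstddef>
#include <cstdio>

int main() {
    // n = sizeof(int) is implementation-defined: commonly 4 today,
    // historically 2, and nothing stops an implementation from using 8.
    std::size_t n = sizeof(int);
    std::printf("int occupies %zu * %d = %zu bits\n", n, CHAR_BIT, n * CHAR_BIT);
}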
