Why isn’t there int128_t?

I’ll refer to the C standard; I think the C++ standard inherits the rules for <stdint.h> / <cstdint> from C.

I know that gcc implements 128-bit signed and unsigned integer types on some platforms, under the names __int128 and unsigned __int128 (__int128 is an implementation-defined keyword).
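
For instance, here is a minimal sketch of how the extension can be used. The __SIZEOF_INT128__ macro, which gcc and clang predefine when the type is available, is how I'd test for it; printf has no conversion specifier for 128-bit types, so the value is printed as two 64-bit halves:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
    #ifdef __SIZEOF_INT128__
        /* 64x64 -> 128-bit multiplication without overflow */
        unsigned __int128 x = (unsigned __int128)UINT64_MAX * UINT64_MAX;
        printf("high: 0x%016" PRIx64 "  low: 0x%016" PRIx64 "\n",
               (uint64_t)(x >> 64), (uint64_t)x);
    #else
        puts("__int128 is not supported on this compiler/target");
    #endif
        return 0;
    }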

Even on an implementation that provides a 128-bit integer type, the standard does not require int128_t or uint128_t to be defined. Quoting section 7.20.1.1 of the N1570 draft of the C standard:

    These types are optional. However, if an implementation provides
    integer types with widths of 8, 16, 32, or 64 bits, no padding bits,
    and (for the signed types) that have a two's complement
    representation, it shall define the corresponding typedef names.
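
A consequence of this wording: section 7.20p4 requires <stdint.h> to define the associated limit macros exactly when it declares the corresponding typedef names, so the presence of int128_t can be tested in the preprocessor. A sketch (no mainstream implementation defines it today, so expect the second branch):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef INT128_MAX
        /* 7.20p4: the limit macro exists iff the typedef does */
        printf("int128_t is available\n");
    #else
        printf("this implementation does not provide int128_t\n");
    #endif
        return 0;
    }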

C permits implementations to define extended integer types, whose names are implementation-defined keywords. gcc's __int128 and unsigned __int128 are very similar to extended integer types as the standard defines them, but gcc doesn't treat them that way; instead, it treats them as a language extension.

In particular, if __int128 and unsigned __int128 were extended integer types, then gcc would be required to define intmax_t and uintmax_t as those types (or as some types at least 128 bits wide), because intmax_t must be capable of representing any value of any signed integer type (7.20.1.5). It does not do so: intmax_t and uintmax_t are only 64 bits.
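
A compile-time check makes the inconsistency visible. This sketch assumes a typical x86-64 gcc target, where intmax_t is 64 bits even though __int128 exists:

    #include <stdint.h>
    #include <assert.h>   /* static_assert (C11) */

    /* On x86-64 gcc, both assertions pass: the widest standard
       integer type is narrower than __int128. */
    static_assert(sizeof(intmax_t) == 8, "intmax_t is 64 bits here");
    #ifdef __SIZEOF_INT128__
    static_assert(sizeof(__int128) == 16, "__int128 is 128 bits");
    #endif

    int main(void) { return 0; }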

This is, in my opinion, unfortunate, but I don’t believe it makes gcc non-conforming. No portable program can depend on the existence of __int128, or on any integer type wider than 64 bits. And changing intmax_t and uintmax_t would cause serious ABI compatibility problems.
