Understanding GCC’s floating point constants in assembly listing output

But how, exactly? …

Yes, this is an integer representation of the IEEE 754 binary64 (aka double) bit pattern. GCC always prints FP constants this way because they are sometimes the result of constant propagation, not FP literals that appear in the source. (It also avoids any dependence on FP rounding in the assembler.)
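
You can check this yourself by type-punning a double into a 64-bit integer. A minimal sketch in C (assuming a target where double is IEEE 754 binary64 and long long is 64 bits, as on x86):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double d = -5.3;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);   // well-defined type-pun, unlike a pointer cast

        printf("hex bit-pattern: 0x%016llX\n", (unsigned long long)bits);  // 0xC015333333333333
        printf("signed decimal:  %lld\n", (long long)bits);               // -4605718748921121997
        return 0;
    }

The signed-decimal line is the same number gcc prints in its asm output.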

gcc always uses decimal for integer constants in its asm output, which is pretty inconvenient for humans. (On the Godbolt compiler explorer, use the mouseover tooltip to get hex for any number.)

Clang’s asm output is nicer, and includes a comment with the decimal value of the number:

    .quad   -4605718748921121997    # double -5.2999999999999998
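
If you want to reproduce both outputs, compile a tiny file that uses the constant and look at the generated .s file (the filename here is just for illustration):

    /* fpconst.c - compile with "gcc -O2 -S fpconst.c" or "clang -O2 -S fpconst.c"
     * and look at the constant pool in fpconst.s.  gcc typically prints the two
     * 32-bit halves as decimal .long values; clang prints a single .quad with a
     * comment giving the double's value. */
    double get_value(void) {
        return -5.3;
    }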

In what order?

x86’s float endianness matches its integer endianness: both are little-endian. (It’s possible for this not to be the case, but all the modern mainstream architectures use the same endianness for integer and float, either big or little. See also the questions “Floating point Endianness?” and “Endianness for floating point”.)

So when loaded as a 64-bit IEEE-754 double, the low 32 bits in memory are the low 32 bits of the double.

As @MichaelPetch explains in comments, the first/low dword is 0x33333333, and the second/high dword is 0xC0153333. Thus the entire double has a bit-pattern of 0xC015333333333333.
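
A quick way to see that layout in memory is to copy the double into two 32-bit halves and print them. A sketch, assuming a little-endian host such as x86:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double d = -5.3;
        uint32_t dw[2];
        memcpy(dw, &d, sizeof dw);        // first 4 bytes, then the next 4 bytes

        printf("first/low   dword: 0x%08X\n", dw[0]);   // 0x33333333 on little-endian
        printf("second/high dword: 0x%08X\n", dw[1]);   // 0xC0153333 on little-endian
        return 0;
    }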

For single-precision float, there’s https://www.h-schmidt.net/FloatConverter/IEEE754.html. (It’s pretty nice: it breaks the bits down into binary with checkboxes, and also shows the hex bit-pattern and decimal fraction. Great for learning how the FP exponent / significand works.)

For double-precision, there’s a similar converter at https://babbage.cs.qc.cuny.edu/IEEE-754.old/64bit.html. You can enter a value and see its hex bit-pattern.
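
If you’d rather decode the fields programmatically than use a web converter, you can mask out the sign / exponent / mantissa from the bit-pattern. A sketch; the expected values in the comments are for -5.3:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void) {
        double d = -5.3;
        uint64_t bits;
        memcpy(&bits, &d, sizeof bits);

        uint64_t sign     = bits >> 63;                  // 1  -> negative
        uint64_t biasexp  = (bits >> 52) & 0x7FF;        // 0x401 -> unbiased exponent 2
        uint64_t mantissa = bits & 0xFFFFFFFFFFFFFull;   // 0x5333333333333, implicit leading 1

        printf("sign=%llu  exponent=0x%llX (unbiased %lld)  mantissa=0x%013llX\n",
               (unsigned long long)sign,
               (unsigned long long)biasexp,
               (long long)biasexp - 1023,
               (unsigned long long)mantissa);
        return 0;
    }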
