Denormalized Numbers – IEEE 754 Floating Point

Essentially, a denormalized float has the ability to represent the SMALLEST (in magnitude) number that can be represented with any floating-point value.

That is correct.
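A quick sketch of what that means in practice, using Python doubles (which are IEEE-754 binary64): the smallest positive *normal* double is 2^-1022, but denormals reach all the way down to 2^-1074.

```python
import math
import sys

# Smallest positive *normal* double: 2**-1022
smallest_normal = sys.float_info.min
print(smallest_normal)        # 2.2250738585072014e-308

# Smallest positive *denormal* (subnormal) double: 2**-1074.
# math.ldexp(1.0, -1074) computes 1.0 * 2**-1074 exactly.
smallest_denormal = math.ldexp(1.0, -1074)
print(smallest_denormal)      # 5e-324

# Halving it again underflows to zero: nothing smaller is representable.
print(smallest_denormal / 2)  # 0.0
```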

using denormalized numbers comes with a performance cost on many platforms

The penalty is different on different processors, but it can be up to 2 orders of magnitude. The reason? The same as for this advice:

one should “avoid overlap between normalized and denormalized numbers”

Here’s the key: denormals are a fixed-point “micro-format” within the IEEE-754 floating-point format. In normal numbers, the exponent indicates the position of the binary point. Denormal numbers contain the last 52 bits in fixed-point notation with a fixed scale of 2^-1074 for doubles.
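You can see this micro-format directly in the bit patterns. In the sketch below (using Python’s `struct` module to reinterpret a double as raw bits), a normal number has a nonzero exponent field, while a denormal’s exponent field is all zeros and its 52 fraction bits act as a plain fixed-point integer scaled by 2^-1074:

```python
import struct

def bits(x: float) -> str:
    """Raw IEEE-754 double bits: sign | 11-bit exponent | 52-bit fraction."""
    (n,) = struct.unpack(">Q", struct.pack(">d", x))
    s = f"{n:064b}"
    return f"{s[0]} {s[1:12]} {s[12:]}"

# Normal number: the nonzero exponent field positions the binary point.
print(bits(1.0))       # 0 01111111111 0000...0000

# Denormals: exponent field is all zeros; the fraction is a fixed-point
# value counting multiples of 2**-1074.
print(bits(5e-324))    # 0 00000000000 0000...0001
print(bits(1e-310))    # 0 00000000000 <some nonzero fraction>
```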

So, denormals are slow because they require special handling. In practice, they occur very rarely, and chip makers don’t like to spend too many valuable resources on rare cases.
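If you want to probe this on your own machine, a rough timing sketch follows. Note the caveats: the penalty (if any) depends on the CPU and on flush-to-zero/denormals-are-zero settings, and in an interpreted language the dispatch overhead can mask much of it, so treat the numbers as illustrative only.

```python
import timeit

normal = 1e-300      # a tiny but normal double
denormal = 1e-320    # subnormal: below 2**-1022 ~ 2.2e-308

# Multiply each value by a factor close to 1 so the result stays in the
# same regime (normal stays normal, denormal stays denormal).
t_norm = timeit.timeit("x * 1.0000001", globals={"x": normal}, number=1_000_000)
t_den = timeit.timeit("x * 1.0000001", globals={"x": denormal}, number=1_000_000)

print(f"normal:   {t_norm:.3f}s")
print(f"denormal: {t_den:.3f}s")
```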

Mixing denormals with normals is slow because then you’re mixing formats and you have the additional step of converting between the two.

I guess I just keep getting the impression that using denormalized numbers turns out not to be a good thing in most cases?

Denormals were created for one primary purpose: gradual underflow. It’s a way to keep the relative difference between tiny numbers small. If you go straight from the smallest normal number to zero (abrupt underflow), the relative change is infinite. If you go to denormals on underflow, the relative change is still not fully accurate, but at least more reasonable. And that difference shows up in calculations.

To put it a different way: floating-point numbers are not distributed uniformly. There is always the same number of representable values between successive powers of two: 2^52 for double precision. So without denormals, you always end up with a gap between 0 and the smallest floating-point number that is 2^52 times the size of the difference between the smallest two numbers. Denormals fill this gap uniformly.
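This uniform fill is easy to verify with `math.nextafter` (Python 3.9+): the spacing just above the smallest normal is the same as the spacing among the denormals below it, namely 2^-1074, and the smallest normal is exactly 2^52 steps above zero.

```python
import math
import sys

tiny = sys.float_info.min  # smallest positive normal, 2**-1022

# Spacing just above the smallest normal ...
step_above = math.nextafter(tiny, math.inf) - tiny
# ... equals the uniform spacing of the denormals just below it:
step_below = tiny - math.nextafter(tiny, 0.0)
print(step_above == step_below == math.ldexp(1.0, -1074))  # True

# Without denormals, the first step up from 0 would be 2**52 times larger
# than the step between the two smallest normal numbers.
print(tiny / step_above == 2.0 ** 52)  # True
```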

As an example of the effects of abrupt vs. gradual underflow, look at the mathematically equivalent tests x == y and x - y == 0. If x and y are tiny but different and you use abrupt underflow, then if their difference is less than the minimum cutoff value, their difference will be zero, and so the equivalence is violated.

With gradual underflow, the difference between two tiny but different normal numbers gets to be a denormal, which is still not zero. The equivalence is preserved.
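A small demonstration, picking two distinct tiny normals whose difference lands in the denormal range:

```python
import sys

x = 1.5 * sys.float_info.min  # a tiny normal number
y = sys.float_info.min        # a different tiny normal number

diff = x - y                  # 0.5 * sys.float_info.min: a denormal

print(x == y)      # False
print(diff == 0)   # False -- equivalence preserved
print(diff < y)    # True: the difference is below the smallest normal
```

With abrupt underflow (for instance, with the FTZ/DAZ flags enabled on x86 SSE hardware), `diff` would flush to 0.0 even though `x != y`, which is exactly the violation described above.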

So, using denormals on purpose is not advised, because they were designed only as a backup mechanism in exceptional cases.
