Use float or decimal for accounting application dollar amount?

Should Float or Decimal data type be used for dollar amounts?

The answer is easy. Never floats. NEVER!

Floats were, according to IEEE 754, always binary; only the newer standard IEEE 754R (now IEEE 754-2008) added decimal formats. Many decimal fractions can never be represented exactly in binary.
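You can see this directly in any language with IEEE 754 doubles; a minimal sketch in Python:

    # 0.1, 0.2 and 0.3 have no exact binary representation,
    # so the stored doubles differ slightly from the decimals.
    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False
    print(f"{0.3:.20f}")     # 0.29999999999999998890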

Any binary fraction can be written as m/2^n (m, n non-negative integers), and any decimal fraction as m/(2^n * 5^n), i.e. m/10^n.
Since binary denominators lack the prime factor 5, every binary fraction can be represented exactly as a decimal, but not vice versa.
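In Python, the fractions module can recover the exact m/2^n value a float actually stores; a small check, assuming CPython's IEEE 754 doubles:

    from fractions import Fraction

    # 0.5 = 1/2 is exactly representable; 0.3 is not.
    print(Fraction(0.5))  # 1/2
    print(Fraction(0.3))  # 5404319552844595/18014398509481984 (denominator = 2**54)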

0.3 = 3/(2^1 * 5^1) is exact in decimal, but its binary expansion never terminates. Adding one binary digit at a time only brackets 0.3 between a lower and an upper bound:

    1/4:   [0.25,    0.5)
    1/8:   [0.25,    0.375)
    1/16:  [0.25,    0.3125)
    1/32:  [0.28125, 0.3125)

The interval shrinks but never collapses onto 0.3 exactly: in binary, 0.3 = 0.0100110011… with the digits 0011 repeating forever.
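This bracketing can be reproduced mechanically; a sketch using exact rational arithmetic (the first printed line also covers the 1/2 step omitted above):

    from fractions import Fraction

    # Add one binary digit at a time; [lo, hi) always contains 0.3.
    target = Fraction(3, 10)
    lo, hi = Fraction(0), Fraction(1)
    for k in range(1, 6):
        mid = lo + Fraction(1, 2 ** k)  # try setting the 1/2**k digit
        if mid <= target:
            lo = mid                    # digit is 1
        else:
            hi = mid                    # digit is 0
        print(f"1/{2 ** k}: [{float(lo)}, {float(hi)})")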

So you end up with a number either higher or lower than the given decimal number. Always.

Why does that matter? Rounding.

Normal rounding means 0..4 down, 5..9 up. So it matters whether the result is
0.049999999999… or 0.0500000000…. You may know that both mean 5 cents, but the computer does not know that: it rounds 0.0499… down (wrong) and 0.0500… up (right).
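Python makes this concrete; 2.675 is just an illustrative half-cent value here:

    # 2.675 is stored as 2.67499999999999982..., so rounding to cents
    # gives 2.67 instead of the expected 2.68.
    print(round(2.675, 2))  # 2.67
    print(f"{2.675:.20f}")  # 2.67499999999999982236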

Given that the results of floating point computations always contain small error terms, which way such a value rounds is pure luck. And it gets hopeless if you want decimal round-to-even handling with binary numbers.
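With a true decimal type, the rounding mode becomes an explicit, deterministic choice rather than luck; a sketch using Python's decimal module:

    from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

    cents = Decimal("0.01")
    d = Decimal("2.675")  # built from a string, so exactly 2.675
    print(d.quantize(cents, rounding=ROUND_HALF_UP))    # 2.68
    print(d.quantize(cents, rounding=ROUND_HALF_EVEN))  # 2.68 (7 is odd, round up to even 8)
    print(Decimal("2.665").quantize(cents, rounding=ROUND_HALF_EVEN))  # 2.66 (6 is already even)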

Unconvinced? You insist that in your accounting system everything is perfectly OK?
Assets and liabilities equal? Fine: then take the formatted number of each entry, parse it, and sum the values with an independent decimal system!

Compare that with the formatted sum. Oops, something is wrong, isn't it?
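A minimal sketch of that independent check in Python, with a hypothetical ledger of 1000 entries of $0.10 each:

    from decimal import Decimal

    entries = ["0.10"] * 1000  # hypothetical formatted ledger entries

    float_sum = sum(float(e) for e in entries)
    decimal_sum = sum(Decimal(e) for e in entries)

    print(f"{float_sum:.2f}")    # 100.00 once formatted...
    print(float_sum == 100.0)    # ...but the raw float sum is off: False
    print(decimal_sum)           # 100.00, exact
    print(decimal_sum == Decimal("100.00"))  # True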

    "For that calculation, extreme accuracy and fidelity was required (we
    used Oracle's FLOAT) so we could record the 'billionths of a penny'
    being accrued."

That doesn't help against this error, because everyone automatically assumes that the computer sums correctly, and practically no one checks independently.
