Can someone explain this: 0.2 + 0.1 = 0.30000000000000004? [duplicate]

That’s because .1 cannot be represented exactly in a binary floating point representation. If you try

>>> .1

Python will respond with 0.1, because it prints only a short, rounded decimal representation of the value it actually stores, but there is already a small round-off error in that stored value. The same happens with .3, but when you issue

>>> .2 + .1
0.30000000000000004

then the round-off errors in .2 and .1 accumulate, and the sum lands on a float that is not the one closest to .3. Also note:

>>> .2 + .1 == .3
False
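
As a quick illustration (a sketch, not the only way to do this): `decimal.Decimal` can show the exact binary value that the literal `0.1` actually stores, and `math.isclose` is the idiomatic way to compare floats with a tolerance instead of `==`.

```python
from decimal import Decimal
import math

# Converting the float 0.1 to Decimal reveals the exact stored value,
# which is slightly more than one tenth:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625

# Exact equality fails because of the accumulated round-off error...
print(0.2 + 0.1 == 0.3)   # False

# ...so compare with a tolerance instead:
print(math.isclose(0.2 + 0.1, 0.3))   # True
```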
