Rounding floats so that they sum to precisely 1

If you mean to find two values that add up to 1.0

I understand that you want to pick two floating-point numbers between 0.0 and 1.0 such that they add to 1.0.

Do this:

  • pick the larger of the two values, L. It must lie between 0.5 and 1.0.
  • define the smaller value S as 1.0 – L.

Then, in floating-point, S + L is exactly 1.0: because L lies between 0.5 and 1.0, the subtraction 1.0 – L is computed exactly (Sterbenz's lemma), so S is the real number 1.0 – L and the sum S + L rounds to exactly 1.0.
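
A minimal check of this recipe in Python (whose floats are IEEE 754 double precision); the starting value 0.7 below is arbitrary:

    import random

    # Pick the larger value L anywhere in [0.5, 1.0].
    L = 0.7

    # Because 0.5 <= L <= 1.0, the subtraction 1.0 - L is exact,
    # so S is the real number 1.0 - L and S + L rounds to exactly 1.0.
    S = 1.0 - L
    print(S)              # 0.30000000000000004
    assert S + L == 1.0

    # The same holds for any L in [0.5, 1.0]:
    for _ in range(100000):
        L = random.uniform(0.5, 1.0)
        assert (1.0 - L) + L == 1.0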


If for some reason your algorithm produces the smaller number S first, compute L = 1.0 – S and then S0 = 1.0 – L. Then L and S0 add up to exactly 1.0. Consider S0 the “rounded” version of S.
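
The same idea as a short Python sketch (the starting value 0.3 is arbitrary):

    S = 0.3               # the value the algorithm produced first

    L = 1.0 - S           # the larger value, here 0.7
    S0 = 1.0 - L          # the "rounded" version of S

    print(S0)             # 0.30000000000000004
    assert S0 + L == 1.0  # exact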

If you mean several values X1, X2, …, XN

Here is a solution for the more general case where you are combining N numbers X1, X2, …, XN, each between 0.0 and 1.0, and you expect expressions such as X1 + X2 + … and 1.0 – X1 – … to behave as they do in exact arithmetic.

Each time you obtain a new number Xi, do: Xi ← 1.0 – (1.0 – Xi). Only use this new value of Xi from that point onwards. This assignment will slightly round Xi so that it behaves well in all sums whose intermediate results are between 0.0 and 1.0.
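
A Python illustration of this re-rounding step (the helper name round_for_sums is made up for this sketch):

    def round_for_sums(x):
        """Nudge x, assumed in [0.0, 1.0], so that 1.0 - x is exact."""
        return 1.0 - (1.0 - x)

    x = round_for_sums(0.3)          # use only this value from now on
    print(x)                         # 0.30000000000000004
    assert (1.0 - x) + x == 1.0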

EDIT: after doing the above for values X1, …, XN-1, compute XN as 1 – X1 – … – XN-1. This computation is exact despite being carried out in floating-point, so you will have X1 + … + XN = 1 exactly.
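
Putting both steps together, a possible Python sketch (the function name complete_to_one and the sample values are only illustrative):

    def complete_to_one(values):
        """Round X1..XN-1 as described, then compute XN = 1 - X1 - ... - XN-1.
        The returned list sums to exactly 1.0."""
        rounded = [1.0 - (1.0 - x) for x in values]
        last = 1.0
        for x in rounded:
            last -= x                # 1 - X1 - ... - XN-1, left to right
        return rounded + [last]

    xs = complete_to_one([0.3, 0.4])
    assert sum(xs) == 1.0            # exactly 1.0, no tolerance needed

As noted above, this relies on the intermediate results staying between 0.0 and 1.0.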
