A picture is worth a thousand words. Try plotting the equation f(k):

and you get an XY graph like this (both the X and Y axes are on a logarithmic scale).
If a computer could represent 32-bit floats without rounding error, then for every k
we would get zero. Instead, the error grows with larger values of k because floating-point rounding errors accumulate.
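The original f(k) isn't reproduced here, but the accumulation effect can be demonstrated with any repeated float32 summation. A minimal Python sketch (summing 0.1, which has no exact binary representation, is my illustrative choice, not necessarily the f(k) above):

```python
import struct

def f32(x: float) -> float:
    """Round a Python float (a 64-bit double) to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

for k in (10, 1_000, 100_000):
    acc = 0.0
    for _ in range(k):
        acc = f32(acc + f32(0.1))  # every addition rounds to float32 precision
    # Ideally acc would equal k/10 exactly; the difference is the accumulated error.
    error = abs(acc - k / 10)
    print(f"k={k:>6}  error={error:.3g}")
```

Running this shows the error growing with k, which is exactly the upward trend in the graph.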
hth!