In which order should floats be added to get the most precise result?

I tried out the extreme example in the answer supplied by Steve Jessop.

#include <iostream>
#include <iomanip>
#include <cmath>

int main()
{
    long billion = 1000000000;
    double big = 1.0;
    double small = 1e-9;
    double expected = 2.0;

    double sum = big;
    for (long i = 0; i < billion; ++i)
        sum += small;
    std::cout << std::scientific << std::setprecision(1) << big << " + " << billion << " * " << small << " = " <<
        std::fixed << std::setprecision(15) << sum <<
        "    (difference = " << std::fabs(expected - sum) << ")" << std::endl;

    sum = 0;
    for (long i = 0; i < billion; ++i)
        sum += small;
    sum += big;
    std::cout  << std::scientific << std::setprecision(1) << billion << " * " << small << " + " << big << " = " <<
        std::fixed << std::setprecision(15) << sum <<
        "    (difference = " << std::fabs(expected - sum) << ")" << std::endl;

    return 0;
}

I got the following result:

1.0e+00 + 1000000000 * 1.0e-09 = 2.000000082740371    (difference = 0.000000082740371)
1000000000 * 1.0e-09 + 1.0e+00 = 1.999999992539933    (difference = 0.000000007460067)

The error in the first line is more than ten times bigger than in the second.

If I change the doubles to floats in the code above, I get:

1.0e+00 + 1000000000 * 1.0e-09 = 1.000000000000000    (difference = 1.000000000000000)
1000000000 * 1.0e-09 + 1.0e+00 = 1.031250000000000    (difference = 0.968750000000000)

Neither answer is even close to 2.0 (but the second is slightly closer).

Using the Kahan summation (with doubles) as described by Daniel Pryden:

#include <iostream>
#include <iomanip>
#include <cmath>

int main()
{
    long billion = 1000000000;
    double big = 1.0;
    double small = 1e-9;
    double expected = 2.0;

    double sum = big;
    double c = 0.0;
    for (long i = 0; i < billion; ++i) {
        double y = small - c;
        double t = sum + y;
        c = (t - sum) - y;
        sum = t;
    }

    std::cout << "Kahan sum  = " << std::fixed << std::setprecision(15) << sum <<
        "    (difference = " << std::fabs(expected - sum) << ")" << std::endl;

    return 0;
}

I get exactly 2.0:

Kahan sum  = 2.000000000000000    (difference = 0.000000000000000)

And even if I change the doubles to floats in the code above, I get:

Kahan sum  = 2.000000000000000    (difference = 0.000000000000000)

It would seem that Kahan is the way to go!


Your instinct is basically right: sorting in ascending order (of magnitude) usually improves things somewhat. Consider the case where we're adding single-precision (32-bit) floats, and there are 1 billion values equal to 1 / (1 billion), and one value equal to 1. If the 1 comes first, then the sum will come to 1, since 1 + (1 / 1 billion) is 1 due to loss of precision. Each addition has no effect at all on the total.

If the small values come first, they will at least sum to something, although even then I have 2^30 of them, and after 2^25 or so I'm back in the situation where each one individually isn't affecting the total any more. So I'm still going to need more tricks.

That's an extreme case, but in general adding two values of similar magnitude is more accurate than adding two values of very different magnitudes, since you "discard" fewer bits of precision in the smaller value that way. By sorting the numbers, you group values of similar magnitude together, and by adding them in ascending order you give the small values a "chance" of cumulatively reaching the magnitude of the bigger numbers.
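As a sketch of the sort-then-add idea (the helper name sum_ascending is mine, not from the answer):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Sort by magnitude, then accumulate smallest-first so the small
// values get a chance to build up before meeting the large ones.
float sum_ascending(std::vector<float> values)
{
    std::sort(values.begin(), values.end(),
              [](float a, float b) { return std::fabs(a) < std::fabs(b); });
    float sum = 0.0f;
    for (float v : values)
        sum += v;
    return sum;
}
```

With the extreme case above (a 1.0 followed by a million copies of 1e-9 in float), summing in the given order leaves the total stuck at 1.0, while the ascending sum lets the small values accumulate first.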

Still, if negative numbers are involved it's easy to "outwit" this approach. Consider three values to sum, {1, -1, 1 billionth}. The arithmetically correct sum is 1 billionth, but if my first addition involves the tiny value then my final sum will be 0. Of the 6 possible orders, only 2 are "correct" - {1, -1, 1 billionth} and {-1, 1, 1 billionth}. All 6 orders give results that are accurate at the scale of the largest-magnitude value in the input (0.0000001% out), but for 4 of them the result is inaccurate at the scale of the true solution (100% out). The particular problem you're solving will tell you whether the former is good enough or not.

In fact, you can play a lot more tricks than just adding them in sorted order. If you have lots of very small values, a middling number of middling values, and a small number of large values, then it might be most accurate to first add up all the small ones, then separately total the middling ones, add those two totals together, then add the large ones. It's not at all trivial to find the most accurate combination of floating-point additions, but to cope with really bad cases you can keep a whole array of running totals at different magnitudes, add each new value to the total that best matches its magnitude, and when a running total starts to get too big for its magnitude, add it into the next total up and start a new one. Taken to its logical extreme, this process is equivalent to performing the sum in an arbitrary-precision type (so you'd do that). But given the simplistic choice of adding in ascending or descending order of magnitude, ascending is the better bet.
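A minimal sketch of that "array of running totals" idea: this simplified version keeps one total per binary exponent and omits the "carry into the next total up" refinement described above (the name bucketed_sum is mine):

```cpp
#include <cmath>
#include <map>
#include <vector>

// Keep one running total per binary exponent, so each value is
// only ever added to a total of comparable magnitude.
float bucketed_sum(const std::vector<float>& values)
{
    std::map<int, float> buckets;   // exponent -> running total
    for (float v : values) {
        int e;
        std::frexp(v, &e);          // v == mantissa * 2^e, mantissa in [0.5, 1)
        buckets[e] += v;
    }
    float total = 0.0f;
    for (const auto& kv : buckets)  // std::map iterates exponents in ascending order,
        total += kv.second;         // so the totals are combined smallest-magnitude first
    return total;
}
```

On the pathological input above (one 1.0 among a million 1e-9 values in float), this recovers the small values' contribution regardless of the order they arrive in.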

It does have some relation to real-world programming, since there are some cases where your calculation can go very badly wrong if you accidentally chop off a "heavy" tail consisting of a large number of values each of which is too small to individually affect the sum, or if you throw away too much precision from a lot of small values that individually only affect the last few bits of the sum. In cases where the tail is negligible anyway you probably don't care: for example, if you're only adding together a small number of values in the first place, or if you're only using a few significant figures of the sum.


There is also an algorithm designed for this kind of accumulation operation, called Kahan summation, that you should probably be aware of.

According to Wikipedia,

The Kahan summation algorithm (also known as compensated summation) significantly reduces the numerical error in the total obtained by adding a sequence of finite precision floating point numbers, compared to the obvious approach. This is done by keeping a separate running compensation (a variable to accumulate small errors).

In pseudocode, the algorithm is:

function kahanSum(input)
 var sum = input[1]
 var c = 0.0          //A running compensation for lost low-order bits.
 for i = 2 to input.length
  y = input[i] - c    //So far, so good: c is zero.
  t = sum + y         //Alas, sum is big, y small, so low-order digits of y are lost.
  c = (t - sum) - y   //(t - sum) recovers the high-order part of y; subtracting y recovers -(low part of y)
  sum = t             //Algebraically, c should always be zero. Beware eagerly optimising compilers!
 next i               //Next time around, the lost low part will be added to y in a fresh attempt.
 return sum
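For reference, a direct C++ translation of that pseudocode (heeding the "eagerly optimising compilers" warning: building with something like -ffast-math licenses the compiler to simplify (t - sum) - y to zero algebraically, which destroys the compensation):

```cpp
#include <cstddef>
#include <vector>

double kahan_sum(const std::vector<double>& input)
{
    if (input.empty()) return 0.0;
    double sum = input[0];
    double c = 0.0;                   // running compensation for lost low-order bits
    for (std::size_t i = 1; i < input.size(); ++i) {
        double y = input[i] - c;      // apply the compensation carried over
        double t = sum + y;           // low-order digits of y are lost here...
        c = (t - sum) - y;            // ...and recovered here (negated)
        sum = t;
    }
    return sum;
}
```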