Fast Hypotenuse Algorithm for Embedded Processor?

One possibility looks like this:

#include <math.h>

/* Iterations   Accuracy
 *  2          6.5 digits
 *  3           20 digits
 *  4           62 digits
 * assuming a numeric type able to maintain that degree of accuracy in
 * the individual operations.
 */
#define ITER 3

double dist(double P, double Q) {
/* A reasonably robust method of calculating `sqrt(P*P + Q*Q)'
 *
 * Transliterated from _More Programming Pearls, Confessions of a Coder_
 * by Jon Bentley, pg. 156.
 */

    double R;
    int i;

    P = fabs(P);
    Q = fabs(Q);

    if (P<Q) {
        R = P;
        P = Q;
        Q = R;
    }

/* The book has this as:
 *  if P = 0.0 return Q; # in AWK
 * However, this makes no sense to me - we've just ensured that P>=Q, so
 * P==0 only if Q==0;  OTOH, if Q==0, then distance == P...
 */
    if ( Q == 0.0 )
        return P;

    for (i = 0; i < ITER; i++) {
        /* Each pass preserves P*P + Q*Q while driving Q toward zero,
         * so P converges rapidly to the hypotenuse. */
        R = Q / P;
        R = R * R;
        R = R / (4.0 + R);
        P = P + 2.0 * R * P;
        Q = Q * R;
    }
    return P;
}

This still does two divides and four multiplies per iteration, but you rarely need more than three iterations (and two is often adequate) per input. On most processors I've seen, that will generally be faster than the sqrt would be on its own.

For the moment it's written for doubles, but assuming you've implemented the basic operations, converting it to work with fixed point shouldn't be terribly difficult.
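
As a sketch of what that conversion might look like, here is a hypothetical Q16.16 fixed-point version of the same iteration (the type name and format are illustrative, not from the book). It assumes a 64-bit intermediate type is available, and reuses ITER from above; note that with only about 4.8 decimal digits in the fraction, two iterations already saturate the format:

#include <stdint.h>

typedef int64_t q16_16;            /* 16 integer bits, 16 fraction bits */
#define Q_ONE ((q16_16)1 << 16)

q16_16 dist_fixed(q16_16 p, q16_16 q) {
    int i;

    if (p < 0) p = -p;
    if (q < 0) q = -q;
    if (p < q) { q16_16 t = p; p = q; q = t; }
    if (q == 0) return p;

    for (i = 0; i < ITER; i++) {
        q16_16 r = (q << 16) / p;          /* r = q/p       */
        r = (r * r) >> 16;                 /* r = r*r       */
        r = (r << 16) / (4 * Q_ONE + r);   /* r = r/(4+r)   */
        p = p + ((2 * r * p) >> 16);       /* p += 2*r*p    */
        q = (q * r) >> 16;                 /* q *= r        */
    }
    return p;
}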

Some doubts have been raised by the comment about "reasonably robust". At least as originally written, this was basically a rather backhanded way of saying that "it may not be perfect, but it's still at least quite a bit better than a direct implementation of the Pythagorean theorem."

In particular, when you square each input, you need roughly twice as many bits to represent the squared result as you did to represent the input value. After you add (which needs only one extra bit), you take the square root, which gets you back to needing roughly the same number of bits as the inputs. Unless you have a type with substantially greater precision than the inputs, it's easy for this to produce really poor results.

This algorithm doesn't square either input directly. It is still possible for an intermediate result to underflow, but it's designed so that when it does, the result still comes out as well as the format in use supports. Basically, the situation in which it happens is an extremely acute triangle (e.g., angles of 90 degrees, 0.000001 degrees, and 89.999999 degrees). If it's close enough to 90, 0, 90, we may not be able to represent the difference between the two longer sides, so the hypotenuse is computed as being the same length as the other long side.

By contrast, when the Pythagorean theorem fails, the result will often be a NaN (i.e., tells us nothing) or, depending on the floating point format in use, quite possibly something that looks like a reasonable answer, but is actually wildly incorrect.
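
A minimal demonstration of both failure modes, assuming the dist() function above is in scope (with IEEE doubles, squaring 1e200 overflows to infinity and squaring 1e-200 underflows to zero):

#include <stdio.h>
#include <math.h>

int main(void) {
    double big = 1e200, tiny = 1e-200;

    /* Naive Pythagorean theorem: the squares overflow/underflow. */
    printf("naive big:  %g\n", sqrt(big * big + big * big));      /* inf */
    printf("naive tiny: %g\n", sqrt(tiny * tiny + tiny * tiny));  /* 0 */

    /* dist() from above: both come out near sqrt(2) times the input. */
    printf("dist  big:  %g\n", dist(big, big));    /* ~1.41421e+200 */
    printf("dist  tiny: %g\n", dist(tiny, tiny));  /* ~1.41421e-200 */
    return 0;
}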


Consider using CORDIC methods. Dr. Dobb's has an article and associated library source here. Square-root, multiply and divide are dealt with at the end of the article.
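
For a flavor of the technique, here is a hypothetical integer sketch (not the Dr. Dobb's code): in vectoring mode, CORDIC rotates the vector onto the x axis with shifts and adds, leaving x holding K * sqrt(x^2 + y^2), where K ≈ 1.64676 is the accumulated CORDIC gain, so one final multiply removes the gain. This assumes 16-bit inputs and an arithmetic right shift for negative values:

#include <stdint.h>

uint32_t cordic_hypot(int32_t x, int32_t y) {
    int i;

    if (x < 0) x = -x;
    if (y < 0) y = -y;

    /* Vectoring mode: each step drives y toward zero, while
     * x accumulates K * sqrt(x^2 + y^2). */
    for (i = 0; i < 16; i++) {
        int32_t xt = x;
        if (y > 0) {
            x += y >> i;
            y -= xt >> i;
        } else {
            x -= y >> i;
            y += xt >> i;
        }
    }

    /* Remove the CORDIC gain K ~= 1.64676: multiply by
     * 1/K ~= 0.607253, expressed as 19898/32768 (Q15). */
    return ((uint32_t)x * 19898u) >> 15;
}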


If the result doesn't have to be particularly accurate, you can get a crude approximation quite simply:

Take absolute values of a and b, and swap if necessary so that you have a <= b. Then:

h = ((sqrt(2) - 1) * a) + b

To see intuitively how this works, consider the way that a shallow angled line is plotted on a pixel display (e.g. using Bresenham's algorithm). It looks something like this:

+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| | | | | | | | | | | | | | | | |*|*|*|    ^
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+    |
| | | | | | | | | | | | |*|*|*|*| | | |    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+    |
| | | | | | | | |*|*|*|*| | | | | | | | a pixels
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+    |
| | | | |*|*|*|*| | | | | | | | | | | |    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+    |
|*|*|*|*| | | | | | | | | | | | | | | |    v
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 <-------------- b pixels ----------->

For each step in the b direction, the next pixel to be plotted is either immediately to the right, or one pixel up and to the right.

The ideal line from one end to the other can be approximated by the path which joins the centre of each pixel to the centre of the adjacent one. This path consists of a diagonal segments of length sqrt(2) and b-a horizontal segments of length 1 (taking a pixel to be the unit of measurement), for a total length of sqrt(2)*a + (b - a) = (sqrt(2) - 1)*a + b. Hence the above formula.

This clearly gives an accurate answer for a == 0 and a == b; but gives an over-estimate for values in between.

The error depends on the ratio b/a; the maximum error occurs when b = (1 + sqrt(2)) * a and turns out to be 2/sqrt(2+sqrt(2)), or about 8.24% over the true value. That's not great, but if it's good enough for your application, this method has the advantage of being simple and fast. (The multiplication by a constant can be written as a sequence of shifts and adds.)
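
As a sketch of that last point, here is one plausible shift-and-add version; the fraction 27/64 ≈ 0.4219 stands in for sqrt(2) - 1 ≈ 0.4142, adding under 2% to this method's own ~8% worst case:

#include <stdint.h>

uint32_t approx_hypot(uint32_t a, uint32_t b) {
    /* Swap if necessary so that a <= b. */
    if (a > b) {
        uint32_t t = a;
        a = b;
        b = t;
    }

    /* h ~= b + (sqrt(2)-1)*a, with the constant approximated as
     * 27/64: 27*a = 16*a + 8*a + 2*a + a, then shift right by 6. */
    return b + (((a << 4) + (a << 3) + (a << 1) + a) >> 6);
}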


For the record, here are a few more approximations, listed in roughly increasing order of complexity and accuracy. All these assume 0 ≤ a ≤ b.

  • h = b + 0.337 * a // max error ≈ 5.5 %
  • h = max(b, 0.918 * (b + (a>>1))) // max error ≈ 2.6 %
  • h = b + 0.428 * a * a / b // max error ≈ 1.04 %
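
In C these might look like the following (hypothetical helper names; the caller is assumed to guarantee 0 ≤ a ≤ b):

#include <math.h>

double hypot_lin(double a, double b)  { return b + 0.337 * a; }                  /* ~5.5 %  */
double hypot_pw(double a, double b)   { return fmax(b, 0.918 * (b + 0.5 * a)); } /* ~2.6 %  */
double hypot_quad(double a, double b) { return b + 0.428 * a * a / b; }          /* ~1.04 %, needs b > 0 */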

Edit: to answer Ecir Hana's question, here is how I derived these approximations.

First step. Approximating a function of two variables can be a complex problem. Thus I first transformed this into the problem of approximating a function of one variable. This can be done by choosing the longest side as a “scale” factor, as follows:

h = √(b² + a²)
  = b √(1 + (a/b)²)
  = b f(a/b)    where f(x) = √(1 + x²)

Adding the constraint 0 ≤ a ≤ b means we are only concerned with approximating f(x) in the interval [0, 1].

Below is the plot of f(x) in the relevant interval, together with the approximation given by Matthew Slattery (namely (√2−1)x + 1).

[Plot: the function f(x) to approximate, with the linear approximation (√2−1)x + 1]

Second step. The next step is to stare at this plot while asking yourself: “how can I approximate this function cheaply?” Since the curve looks roughly parabolic, my first idea was to use a quadratic function (the third approximation). But since that is still relatively expensive, I also looked at linear and piecewise-linear approximations. Here are my three solutions:

[Plot: the three approximations against f(x)]

The numerical constants (0.337, 0.918 and 0.428) were initially free parameters. The particular values were chosen to minimize the maximum absolute error of the approximations. The minimization could certainly be done by some algorithm, but I just did it “by hand”: plotting the absolute error and tuning the constant until it was minimized. In practice this works quite fast; writing the code to automate it would have taken longer.
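
For illustration, an automated version of that search might look like the sketch below (for the first, linear approximation; the grid resolutions are arbitrary):

#include <math.h>
#include <stdio.h>

int main(void) {
    double best_c = 0.0, best_err = 1.0;
    double c, x;

    /* Scan candidate constants c, keeping the one that minimizes the
     * worst-case relative error of 1 + c*x against sqrt(1 + x*x). */
    for (c = 0.30; c <= 0.38; c += 0.0001) {
        double worst = 0.0;
        for (x = 0.0; x <= 1.0; x += 0.001) {
            double err = fabs((1.0 + c * x) / sqrt(1.0 + x * x) - 1.0);
            if (err > worst)
                worst = err;
        }
        if (worst < best_err) {
            best_err = worst;
            best_c = c;
        }
    }
    printf("c = %.4f, max error = %.2f%%\n", best_c, 100.0 * best_err);
    return 0;
}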

Third step is to come back to the initial problem of approximating a function of two variables:

  • h ≈ b (1 + 0.337 (a/b)) = b + 0.337 a
  • h ≈ b max(1, 0.918 (1 + (a/b)/2)) = max(b, 0.918 (b + a/2))
  • h ≈ b (1 + 0.428 (a/b)²) = b + 0.428 a²/b

Tags: c, embedded, avr