How can I write a power function myself?
Typically the implementation of the pow(double, double) function in math libraries is based on the identity:
pow(x, y) = pow(a, y * log_a(x))
Using this identity, you only need to know how to raise a single number a to an arbitrary exponent, and how to take a logarithm base a. You have effectively turned a complicated multi-variable function into two functions of a single variable and a multiplication, which is pretty easy to implement. The most commonly chosen values of a are e or 2 -- e because e^x and log_e(1+x) have some very nice mathematical properties, and 2 because it has some nice properties for implementation in floating-point arithmetic.
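For a concrete instance with a = 2: x^y = 2^(y * log2(x)), so 8^(1/3) = 2^((1/3) * log2(8)) = 2^((1/3) * 3) = 2^1 = 2.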
The catch of doing it this way is that (if you want to get full accuracy) you need to compute the log_a(x) term (and its product with y) to higher accuracy than the floating-point representation of x and y. For example, if x and y are doubles, and you want to get a high-accuracy result, you'll need to come up with some way to store intermediate results (and do arithmetic) in a higher-precision format. The Intel x87 format is a common choice, as are 64-bit integers (though if you really want a top-quality implementation, you'll need to do a couple of 96-bit integer computations, which are a little bit painful in some languages). It's much easier to deal with this if you implement powf(float, float), because then you can just use double for intermediate computations. I would recommend starting with that if you want to use this approach.
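In C, a minimal sketch of that powf approach might look like this (my_powf is a made-up name; it assumes x > 0 and ignores all the special cases a real libm must handle, like NaN, infinities, zero, and overflow):

#include <math.h>

float my_powf(float x, float y)
{
    /* double carries enough extra precision that the critical
       product y * log2(x) is accurate well beyond float. */
    double t = (double)y * log2((double)x);
    return (float)exp2(t);
}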
The algorithm that I outlined is not the only possible way to compute pow. It is merely the most suitable for delivering a high-speed result that satisfies a fixed a priori accuracy bound. It is less suitable in some other contexts, and is certainly much harder to implement than the repeated-square[root]-ing algorithm that some others have suggested.
If you want to try the repeated square[root] algorithm, begin by writing an unsigned integer power function that uses repeated squaring only. Once you have a good grasp on the algorithm for that reduced case, you will find it fairly straightforward to extend it to handle fractional exponents.
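For instance, that reduced case might look like this in C (uipow is a made-up name):

/* Exponentiation by repeated squaring: O(log n) multiplications. */
double uipow(double base, unsigned int n)
{
    double result = 1.0;
    while (n != 0) {
        if (n & 1u)        /* low bit set: fold this power of base in */
            result *= base;
        base *= base;      /* base, base^2, base^4, base^8, ... */
        n >>= 1;
    }
    return result;
}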
Negative powers are not a problem: x^(-y) is just the reciprocal of the positive power, 1/(x^y).
Floating-point powers are just a little bit more complicated; as you know, a fractional power is equivalent to a root (e.g. x^(1/2) == sqrt(x)), and you also know that multiplying powers with the same base is equivalent to adding their exponents.
With all the above, you can (a full sketch follows the worked example below):
- Decompose the exponent into an integer part and a fractional part.
- Calculate the integer power with a loop (you can optimise it by decomposing the exponent into factors and reusing partial calculations).
- Calculate the root with any algorithm you like (any iterative approximation like bisection or Newton's method would work).
- Multiply the two results together.
- If the exponent was negative, take the reciprocal.
Example:
2^(-3.5) = (2^3 * 2^(1/2))^(-1) = 1 / (2*2*2 * sqrt(2))
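Putting the recipe together in C might look something like this. It is only a sketch (my_pow is a made-up name, x is assumed positive, and NaN/infinity handling is omitted); the fractional part uses the repeated-square-root trick, where the k-th binary digit of the fraction contributes a factor of x^(1/2^k).

#include <math.h>

double my_pow(double x, double y)
{
    int negative = (y < 0.0);
    if (negative)
        y = -y;

    unsigned long long n = (unsigned long long)y;  /* integer part */
    double frac = y - (double)n;                   /* fractional part */

    /* Integer part by repeated squaring. */
    double result = 1.0, base = x;
    while (n != 0) {
        if (n & 1ull)
            result *= base;
        base *= base;
        n >>= 1;
    }

    /* Fractional part: peel off binary digits; each square root
       gives x^(1/2), x^(1/4), x^(1/8), ... */
    double root = x;
    for (int bit = 0; bit < 52 && frac > 0.0; ++bit) {
        root = sqrt(root);
        frac *= 2.0;
        if (frac >= 1.0) {      /* this digit of the fraction is set */
            result *= root;
            frac -= 1.0;
        }
    }

    return negative ? 1.0 / result : result;
}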
A^B = Log^-1(Log(A) * B)
Edit: yes, this definition really does provide something useful. For example, on an x86, it translates almost directly to FYL2X (Y * Log2(X)) and F2XM1 (2^x - 1):
fyl2x               ; st(0) = y * log2(x)
fld st(0)           ; duplicate it
frndint             ; st(0) = integer part, st(1) = full value
fsubr st(1),st      ; st(1) = int - full = -(fractional part)
fxch st(1)          ; st(0) = -frac, st(1) = int
fchs                ; st(0) = frac, now in F2XM1's range
f2xm1               ; st(0) = 2^frac - 1
fld1
faddp st(1),st      ; st(0) = 2^frac
fscale              ; st(0) = 2^frac * 2^int = 2^(y*log2(x))
fstp st(1)          ; leave just the result on the stack
The code ends up a little longer than you might expect, primarily because F2XM1 only works with numbers in the range -1.0..1.0. The fld st(0)/frndint/fsubr st(1),st piece subtracts off the integer part, so we're left with only the fraction. We apply F2XM1 to that, add the 1 back on, then use FSCALE to handle the integer part of the exponentiation.
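For comparison, the same decomposition can be sketched in C with <math.h> calls standing in for the x87 instructions (pow_via_log2 is a made-up name; x is assumed positive and the integer part is assumed to fit in an int):

#include <math.h>

double pow_via_log2(double x, double y)
{
    double z = y * log2(x);         /* FYL2X */
    double i = nearbyint(z);        /* FRNDINT */
    double f = z - i;               /* fraction, in F2XM1's range */
    return ldexp(exp2(f), (int)i);  /* 2^f, then FSCALE by 2^i */
}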