What is the time complexity of computing sin(x) to t bits of precision?

$\pi$ can be computed with the hexadecimal BBP series, which can extract individual hex digits without computing the preceding ones, though apparently there are faster known ways to compute all of the bits up to a given precision.
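For concreteness, here is a minimal Python sketch of BBP digit extraction (the function names are mine, and the plain-float arithmetic limits it to modest positions; a careful implementation would bound the accumulated rounding error explicitly):

```python
# Sketch of BBP hex-digit extraction for pi, using
#   pi = sum_{k>=0} 16^(-k) * (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
# Positions are 0-based after the hexadecimal point.

def bbp_series(n, j):
    """Fractional part of sum_{k>=0} 16^(n-k) / (8k + j)."""
    s = 0.0
    for k in range(n + 1):                  # k <= n: exact via modular pow
        s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
    for k in range(n + 1, n + 25):          # k > n: terms shrink like 16^(n-k)
        s = (s + 16.0 ** (n - k) / (8 * k + j)) % 1.0
    return s

def pi_hex_digits(start, count=8):
    """Return `count` hex digits of pi beginning at position `start`."""
    out = []
    for i in range(count):
        n = start + i
        x = (4 * bbp_series(n, 1) - 2 * bbp_series(n, 4)
             - bbp_series(n, 5) - bbp_series(n, 6)) % 1.0
        out.append("0123456789abcdef"[int(16 * x)])
    return "".join(out)

print(pi_hex_digits(0))   # 243f6a88, the first hex digits after the point
```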

Knuth attributes to Brent (JACM 23, 242, 1976) the result that $\log$, $\exp$, and $\arctan$ can be computed to $n$ significant bits in $O(M(n)\log n)$ time, where $M(n)$ is the complexity of multiplying two $n$-bit numbers. For more recent information, the papers citing that one are probably a good bet.
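To give a feel for where the $O(M(n)\log n)$ bound comes from, here is a hedged sketch of the AGM-based logarithm (the idea behind Brent's result, not his exact algorithm; the names, cutoffs, and guard-bit counts are my own choices, and I assume $x \ge 1$):

```python
# Sketch of an AGM-based logarithm: for large s, ln(s) is approximately
# pi / (2 * AGM(1, 4/s)), with error O((ln s)/s^2), so we scale x up by a
# power of two and subtract the scaling afterwards. mpmath is used only
# for arbitrary-precision arithmetic.

from mpmath import mp, mpf, sqrt, log

def agm(a, b):
    # Each iteration roughly doubles the number of agreeing digits,
    # which is where the log(n) factor in O(M(n) log n) comes from.
    while abs(a - b) > mpf(2) ** (-mp.prec):
        a, b = (a + b) / 2, sqrt(a * b)
    return a

def agm_log(x, bits=128):
    mp.prec = bits + 32                  # guard bits
    m = bits // 2 + 10                   # make s = x * 2^m exceed 2^(bits/2)
    s = mpf(x) * mpf(2) ** m
    # ln(2) would itself be a precomputed constant (e.g. by this same scheme)
    return mp.pi / (2 * agm(mpf(1), 4 / s)) - m * log(2)

print(agm_log(2))   # 0.693147180559945..., compare with mpmath.log(2)
```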


For state-of-the-art arithmetic algorithms, I'd recommend Modern Computer Arithmetic, a book (a work in progress) by Brent and Zimmermann, available online: http://www.loria.fr/~zimmerma/mca/pub226.html See chapter 4.

As Steve points out, log, exp, and the trig functions are $O(M(n) \log n)$ (in fact they are all ultimately computed from log), where $M(n)$ is the cost of multiplying two $n$-bit numbers (theoretically $O(n\log n\, 2^{O(\log^* n)})$ by Fürer's algorithm) and $n$ is the number of bits of accuracy. $\pi$ also falls into this complexity class. This is just the theoretical picture, however; for fewer than a billion or so digits, other algorithms with worse asymptotics are faster.
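As an illustration of "calculated from log", here is a sketch of computing exp from log by Newton's method, which keeps exp within the same $O(M(n)\log n)$ bound (the function name and iteration count are my choices; a serious implementation would double the working precision step by step rather than run everything at full precision):

```python
# Compute exp by solving log(y) = x with Newton's method:
#   y <- y * (1 + x - log(y)).
# The error roughly squares each step, so starting from a 53-bit float
# guess, a handful of iterations reach any fixed precision. Assumes x is
# moderate so that math.exp does not overflow.

import math
from mpmath import mp, mpf, log

def newton_exp(x, bits=128):
    mp.prec = bits + 16                         # guard bits
    y = mpf(math.exp(float(x)))                 # ~53-bit initial guess
    for _ in range(max(3, bits.bit_length())):  # generous; error squares each step
        y = y * (1 + mpf(x) - log(y))
    return y

print(newton_exp(1))   # 2.718281828459045..., compare with mpmath.e
```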

Erf is apparently harder; the book gives some algorithms based on power series and continued fractions, but it avoids stating an explicit computational cost because the convergence speed differs from region to region.
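For instance, here is a sketch of the power-series approach for erf (my own toy code, not the book's algorithm); it converges fast near 0 but needs more terms and more guard bits as $|x|$ grows, which is exactly why a single cost bound is awkward:

```python
# Toy power-series erf:
#   erf(x) = 2/sqrt(pi) * sum_{k>=0} (-1)^k x^(2k+1) / (k! (2k+1)).
# The alternating terms first grow like x^(2k+1)/k! before they shrink,
# so larger |x| costs both more terms and more working precision.

from mpmath import mp, mpf, sqrt

def erf_series(x, bits=64):
    mp.prec = bits + 32                     # guard bits against cancellation
    x = mpf(x)
    term, total, k = x, x, 0                # k = 0 term of x^(2k+1)/k!
    while abs(term) > mpf(2) ** (-mp.prec):
        k += 1
        term *= -x * x / k                  # ratio of consecutive terms
        total += term / (2 * k + 1)
    return 2 / sqrt(mp.pi) * total

print(erf_series(1))   # 0.842700792949715..., compare with mpmath.erf(1)
```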


I'll let others reply with the theoretical results.

In practice, however, production systems (like Maple, Mathematica, GMP, etc.) all use polyalgorithms (sometimes called hybrid algorithms), which are collections of methods together with a strategy for choosing among them. Taking erf as an example: for real input, asymptotic series are used for inputs of large absolute value, and a Chebyshev-Padé approximant for small values. For complex input, the plane is similarly divided into regions, and a variety of methods can be used -- Laurent series, Puiseux series, Chebyshev-Padé approximants, continued fractions, etc. Range-reduction and reflection techniques are also quite common.

So the answer to "What is the time complexity of XXX?" is really "it depends". In other words, production systems can frequently compute answers that seem to defy the theoretical results, because the theoretical results are based on uniform algorithms while the practical implementations are based on adaptive algorithms.
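To make the region-splitting idea concrete, here is a toy real-valued erf dispatcher (purely illustrative -- the cutoff, guard bits, and names are mine, and real systems use many more regions with tuned boundaries):

```python
# Toy polyalgorithm for real erf. Large |x| uses the divergent asymptotic
# series for erfc, truncated at its smallest term:
#   erfc(x) ~ exp(-x^2)/(x sqrt(pi)) * sum_k (-1)^k (2k-1)!! / (2x^2)^k.

from mpmath import mp, mpf, exp, sqrt, erf as mp_erf

def erfc_asymptotic(x):
    term, total, k = mpf(1), mpf(1), 0
    while True:
        k += 1
        nxt = term * -(2 * k - 1) / (2 * x * x)
        if abs(nxt) >= abs(term):           # smallest term reached: stop
            break
        term = nxt
        total += term
    return exp(-x * x) / (x * sqrt(mp.pi)) * total

def erf_poly(x, bits=64):
    mp.prec = bits + 32                     # guard bits
    x = mpf(x)
    if abs(x) < 4:                          # small region: a convergent series
        return mp_erf(x)                    # (mpmath's erf as a stand-in here)
    sign = 1 if x > 0 else -1
    return sign * (1 - erfc_asymptotic(abs(x)))

print(erf_poly(0.5), erf_poly(6))           # compare with mpmath.erf
```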

But it looks like some very recent work on numerical computation for D-finite functions may well turn all of that on its head. See "NumGfun: a Package for Numerical and Analytic Computation with D-finite Functions" by Marc Mezzarobba (the full code is available as part of the gfun library).
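The engine behind fast high-precision evaluation of D-finite series is binary splitting of the defining recurrence; here is a minimal sketch for the D-finite series $e = \sum_{k \ge 0} 1/k!$ (my own illustration of the technique, not NumGfun's code):

```python
# Binary splitting for e = sum_{k>=0} 1/k!. The sum over [a, b) is
# assembled from two halves in exact integer arithmetic, so the big
# multiplications happen near the top of the recursion.

import math
from fractions import Fraction

def bsplit(a, b):
    """Return (p, q) with p/q = sum_{k=a}^{b-1} a!/k! and q = (a+1)*...*(b-1)."""
    if b - a == 1:
        return 1, 1                         # the single term a!/a! = 1
    m = (a + b) // 2
    p1, q1 = bsplit(a, m)
    p2, q2 = bsplit(m, b)
    # sum over [a,b) = sum over [a,m) + sum over [m,b) / ((a+1)*...*m)
    return p1 * m * q2 + p2, q1 * m * q2

def e_approx(bits):
    n, lg = 1, 0.0
    while lg <= bits + 10:                  # stop once 1/n! < 2^-(bits+10)
        n += 1
        lg += math.log2(n)
    p, q = bsplit(0, n)
    return Fraction(p, q)                   # exact rational truncation of e

print(float(e_approx(64)))                  # 2.718281828459045
```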