Infinite series expansion of $\sin (x)$

There's the way Euler did it. First recall that $$ \sin(\theta_1+\theta_2+\cdots+\theta_n) = \sum_{\text{odd }k \ge 1} (-1)^{(k-1)/2} \sum_{|A| = k}\ \prod_{i\in A} \sin\theta_i\prod_{i\not\in A} \cos\theta_i, $$ where $A$ runs over the subsets of $\{1,\dots,n\}$. Then let $n$ be an infinitely large integer (that's how Euler phrased it, if I'm not mistaken), write $$ x= \underbrace{\frac{x}{n} + \cdots + \frac{x}{n}}_{n\text{ terms}}, $$ and apply the formula with every $\theta_i = x/n$ to find $\sin x$. Finally, recall that (as Euler would put it), since $x/n$ is infinitely small, $\sin(x/n) = x/n$ and $\cos(x/n) = 1$. Then do a bit of algebra and the series drops out.

The algebra will include things like saying that $$ \frac{n(n-1)(n-2)\cdots(n-k+1)}{n^k} = 1 $$ if $n$ is an infinite integer and $k$ is a finite integer.
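Spelled out, with the same cavalier treatment of the infinite $n$: replacing $\sin(x/n)$ by $x/n$ and $\cos(x/n)$ by $1$, each $k$-element subset $A$ contributes $(x/n)^k$, and there are $\binom{n}{k}$ of them, so $$ \sin x = \sum_{\text{odd }k\ge 1} (-1)^{(k-1)/2}\binom{n}{k}\left(\frac{x}{n}\right)^{k} = \sum_{\text{odd }k\ge 1} \frac{(-1)^{(k-1)/2}}{k!}\,\frac{n(n-1)(n-2)\cdots(n-k+1)}{n^{k}}\,x^{k} = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots. $$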


This is from an exercise in Simmons' Calculus.

For $x \geq 0$:
$$ \cos x \leq 1$$
$$ \int_0^x\!\cos t \,\mathrm{d}t\leq \int_0^x\! \,\mathrm{d}t$$
$$ \sin x \leq x$$
$$ \int_0^x\!\sin t \,\mathrm{d}t\leq \int_0^x\! t \,\mathrm{d}t$$
$$ \left.-\cos t\right|_0^x\leq \frac{ x^2}{2}$$
$$ 1-\cos x\leq \frac{ x^2}{2}$$
$$ \cos x\geq 1-\frac{ x^2}{2}$$

Continuing, you see that $\sin x$ is less than its expansion when truncated after progressively higher odd numbers of terms and, in alternation, that $\cos x$ is greater than its expansion truncated after progressively higher even numbers of terms.
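For instance, the next two rounds of integration (still for $x \geq 0$) give $$ \int_0^x\!\cos t \,\mathrm{d}t\geq \int_0^x\!\left(1-\frac{t^2}{2}\right)\mathrm{d}t \quad\Longrightarrow\quad \sin x\geq x-\frac{x^3}{3!}, $$ $$ \int_0^x\!\sin t \,\mathrm{d}t\geq \int_0^x\!\left(t-\frac{t^3}{3!}\right)\mathrm{d}t \quad\Longrightarrow\quad \cos x\leq 1-\frac{x^2}{2!}+\frac{x^4}{4!}, $$ and one more integration gives $\sin x \leq x-\frac{x^3}{3!}+\frac{x^5}{5!}$.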

I don't have the book in front of me. I think this was intended more to suggest the expansion than to rigorously prove it, but my theoretical understanding isn't quite up to identifying what's lacking or to correcting anything. Still, I thought it was interesting when I saw it and I hope it's relevant.


Here is a mosquito-nuking solution: one can use Lagrange inversion:

$$f^{(-1)}(x)=\sum_{k=0}^\infty \frac{x^{k+1}}{(k+1)!} \left(\left.\frac{\mathrm d^k}{\mathrm dt^k}\left(\frac{t}{f(t)}\right)^{k+1}\right|_{t=0}\right)$$

and let $f(t)=\arcsin\,t$; probably the only catch here is that the expressions for the derivatives get progressively unwieldy. However, if one takes limits as $t\to 0$ for these derivatives, one recovers the familiar sequence $1,0,-1,0,1,\dots$.
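
If you want to see this happen, here is a quick Mathematica sketch of that computation (the name d and the range $0$ to $4$ are just for illustration):

(* k-th derivative of (t/ArcSin[t])^(k + 1); the removable singularity at t = 0 is handled by Limit *)
d[k_] := Limit[D[(t/ArcSin[t])^(k + 1), {t, k}], t -> 0];
Table[d[k], {k, 0, 4}]

which should return {1, 0, -1, 0, 1}.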


There is a version of Lagrange inversion that uses the coefficients of the original power series instead of the function itself. Mathematica natively supports this operation through the InverseSeries[] construction, but here is an implementation of one of the simpler algorithms for series reversion, due to Henry Thacher:

(* coefficients a[[i]] of the series to be reverted, here arcsin (note a[[1]] == 1) *)
a = Rest[CoefficientList[Series[ArcSin[x], {x, 0, 20}], x]];
n = Length[a];
(* c[i, j]: i-th coefficient of the j-th power of the reverted series,
   so the c[i, 1] are the coefficients we are after *)
Do[
    Do[
      (* powers of the partial result, built by convolution *)
      c[i, j + 1] = Sum[c[k, 1]c[i - k, j], {k, 1, i - j}];
      , {j, i - 1, 1, -1}];
    (* equate coefficients in Sum[a[[j]] g^j] == x, using a[[1]] == 1 *)
    c[i, 1] = Boole[i == 1] - Sum[a[[j]] c[i, j], {j, 2, i}]
    , {i, n}];
Table[c[i, 1], {i, n}]

and then compare with the output of Rest[CoefficientList[Series[Sin[x], {x, 0, 20}], x]].
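
For reference, the built-in route mentioned above would be something along the lines of

Rest[CoefficientList[InverseSeries[Series[ArcSin[x], {x, 0, 20}]], x]]

which should give the same list.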

Other methods, including a modification of Newton's method for series, have been presented, but I won't get into them here.