Why use Fourier series instead of Taylor?

  1. The complex exponentials are eigenfunctions of the derivative and integral operators. So if you're analyzing linear differential equations with Fourier series, you can consider each term on its own; with a Taylor series, differentiation mixes the terms, so you have to track the interactions between them. (This is also why we often write Fourier series in terms of complex exponentials rather than sines and cosines.)

  2. Extrapolation. If I have a function $f(x)$ and I approximate it on a region $[x_1, x_2]$ with a finite-length Taylor series $F_T(x)$, then outside of $[x_1, x_2]$ the Taylor series will generally blow up, since any nonconstant polynomial is unbounded. If I approximate it with a finite-length Fourier series, the series remains bounded as $x\to\infty$ (see the numerical sketch after this list).
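
Here's a quick numerical check of point 2. This is a minimal sketch in Python/NumPy; the degree-7 truncations, the evaluation point $x = 10$, and the sawtooth target function are all arbitrary choices for illustration:

```python
import math
import numpy as np

# Degree-7 Taylor polynomial of sin(x) about 0, evaluated well outside
# the region where it is accurate.
x = 10.0
taylor = sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1) for n in range(4))
print(taylor)  # about -1307, while sin(10) is about -0.54: the polynomial blows up

# A 7-term Fourier truncation of the sawtooth f(x) = x on [-pi, pi],
# whose Fourier series is 2 * sum_{k>=1} (-1)**(k+1) * sin(k*x) / k.
fourier = 2 * sum((-1)**(k + 1) * np.sin(k * x) / k for k in range(1, 8))
print(fourier)  # stays O(1) for every x; outside [-pi, pi] it repeats periodically
```

Of course, the Fourier truncation doesn't extrapolate $f(x) = x$ correctly either; it extends it periodically. But it never diverges.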


Great question. One reason that complex exponential expansions (which turn into sines and cosines for real-valued problems) are more natural than Taylor series expansions is that they don't require picking a special point to expand around. In many situations the differential equation is translationally invariant, so there's no natural point to Taylor expand around and you'd have to pick one arbitrarily. A general pattern in physics is that if your problem setup has some symmetry, you definitely want to take advantage of that symmetry in solving it.

Another issue is that, as The Photon mentioned, polynomials inevitably get unboundedly large at large $x$, which doesn't match up with the periodic nature of the solutions. For any finite-order Taylor expansion, you would have to truncate the approximation by hand outside a single fundamental period and repeat it, which is a little awkward.

But probably the most important reason is that you are dealing with differential equations, and sines and cosines have the very special property of remaining unchanged (up to a scaling factor) after two derivatives (and the complex exponential version is unchanged up to a scaling factor after even a single derivative). As you gain experience with Fourier transforms, you'll see that this fact allows you to convert many linear differential equations into algebraic ones that are much easier to deal with. By contrast, differentiating a polynomial takes you down the ladder to a lower-order polynomial, so you never get back to where you started, no matter how many derivatives you take, which prevents you from taking advantage of that technique.
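
To see the "differential equation becomes algebra" point concretely, here is a minimal sketch; the specific equation $y'' - y = f$, the grid size, and the forcing are arbitrary choices, and periodic boundary conditions are assumed so that the FFT applies:

```python
import numpy as np

# Solve y'' - y = f on [0, 2*pi) with periodic boundary conditions.
# In Fourier space each mode decouples: (-k**2 - 1) * y_hat = f_hat,
# so the differential equation becomes one algebraic division per mode.
N = 256
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = np.cos(3 * x) + 0.5 * np.sin(7 * x)             # arbitrary periodic forcing

k = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)  # integer wavenumbers 0, 1, ..., -1
f_hat = np.fft.fft(f)
y_hat = f_hat / (-(k**2) - 1.0)                     # the "algebraic equation" step
y = np.fft.ifft(y_hat).real

# Verify: compute y'' spectrally and check the residual y'' - y - f.
ypp = np.fft.ifft(-(k**2) * y_hat).real
print(np.max(np.abs(ypp - y - f)))                  # ~1e-13: machine precision
```

Each Fourier mode is solved by a single division; no linear system, no time stepping.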


If you're asking about practical advantage, then you need to forget about everything you learned in analysis class. A practical calculation doesn't care whether something is pointwise or uniformly convergent. In fact, it is often very useful to use asymptotic series, which aren't even pointwise convergent.
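
A classic example, sketched below in Python, is the asymptotic series of the exponential integral, $E_1(x) \sim \frac{e^{-x}}{x}\sum_{n\ge 0} \frac{(-1)^n\, n!}{x^n}$, which diverges for every $x$. The argument $x = 10$ and the truncation order are arbitrary choices, and `scipy.special.exp1` is used only as a reference value:

```python
import math
from scipy.special import exp1   # reference value for E1(x)

# Divergent asymptotic series: E1(x) ~ (exp(-x)/x) * sum_n (-1)**n * n! / x**n.
# The sum converges for no x at all, yet a handful of terms is very
# accurate once x is reasonably large.
x = 10.0
partial = sum((-1)**n * math.factorial(n) / x**n for n in range(8))
approx = math.exp(-x) / x * partial

print(approx)   # ~4.1560e-06
print(exp1(x))  # ~4.1570e-06: three to four significant figures from a divergent series
```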

Broadly speaking, what makes a series useful is how numerically accurate a result you can get with it, while using only as many terms as is practical. "Rigorous" notions of convergence are not useful here because they talk about the limit of infinitely many terms, which obviously is never attained in practice. (For example, it is in principle true that $\cos(x)$ is described everywhere by its Taylor series, but try calculating $\cos(10^8)$ using that series and see how many terms you need to get a reasonable answer. The Taylor series is completely useless for this task.)
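
You can watch this failure happen at a much smaller argument. In this sketch, $x = 40$ is used instead of $10^8$ so the loop terminates quickly, but the failure mechanism, catastrophic cancellation among huge alternating terms, is the same:

```python
import math

# Sum the Taylor series of cos(x) in double precision at a moderately
# large argument. The terms grow to ~1e16 before the alternating sum
# cancels down to O(1), so float64 round-off destroys the answer even
# though the series converges exactly in infinite precision.
x = 40.0
total, term, n = 0.0, 1.0, 0
while abs(term) > 1e-20:
    total += term
    n += 1
    term *= -x * x / ((2 * n - 1) * (2 * n))  # next term of sum (-1)^n x^(2n)/(2n)!

print(total)        # badly wrong: round-off of order ~10 swamps the O(1) answer
print(math.cos(x))  # -0.6669...
```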

Fourier series are useful in this sense because many phenomena in nature exhibit spatial or temporal translational invariance. In the simplest cases, this renders problems diagonal in Fourier space, allowing you to write down the exact solution in one step. In more complicated cases, you can render the problem almost-diagonal in Fourier space, and treat it perturbatively.
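
To make "diagonal in Fourier space" concrete with a standard example: for the heat equation $u_t = u_{xx}$ on a periodic domain, expanding $u(x,t) = \sum_k \hat u_k(t)\, e^{ikx}$ decouples the PDE into one independent ODE per mode,

$$\frac{d\hat u_k}{dt} = -k^2 \hat u_k \quad\Longrightarrow\quad \hat u_k(t) = \hat u_k(0)\, e^{-k^2 t},$$

which is the exact solution in one step. A term that breaks translational invariance (say, a spatially varying coefficient) couples neighboring modes, and that is where the perturbative treatment comes in.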