Definition of convergence of $\sum_{i=-\infty}^\infty a_i$

The usual definition of convergence for doubly infinite (i.e. $\mathbb{Z}$-indexed) series is that

$$\sum_{i = -\infty}^{\infty} a_i\tag{1}$$

is convergent if the two series

$$\sum_{i = 0}^{\infty} a_i\quad \text{and}\quad \sum_{k = 1}^{\infty} a_{-k}$$

both converge, and the value of the doubly infinite series is the sum of the values of these two series. For doubly infinite series of functions, one then has uniform convergence if the series with nonnegative indices and the series with negative indices both converge uniformly.
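
For instance, for $a_i = 2^{-|i|}$ both halves are convergent geometric series,
$$\sum_{i=0}^{\infty} 2^{-i} = 2 \quad\text{and}\quad \sum_{k=1}^{\infty} 2^{-k} = 1,$$
so the doubly infinite series converges with value $3$.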

We can also formulate the criterion without splitting the series, using the product partial order on $\mathbb{N}^2$ to make it a directed set, and say that $(1)$ converges if

$$\lim_{(m,n) \to (\infty,\infty)} \sum_{i = -m}^n a_i\tag{2}$$

exists. For a doubly infinite series of functions, uniform convergence again means uniform convergence of the net

$$S_{m,n} := \sum_{i = -m}^n a_i.$$
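
Spelled out, the existence of the limit $(2)$ means that there is an $S$ such that for every $\varepsilon > 0$ there is an $N$ with
$$\lvert S_{m,n} - S \rvert < \varepsilon \quad\text{whenever } m \ge N \text{ and } n \ge N,$$
and one checks easily that this is equivalent to the convergence of the two one-sided series above.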

There is an exception, however. For Fourier series

$$\sum_{n = -\infty}^{\infty} c_n e^{inx},$$

when one is interested in pointwise (or uniform) convergence, one usually only considers the symmetric partial sums

$$\sum_{n = -N}^N c_n e^{inx}$$

and calls the Fourier series convergent (at $x$) if the limit of the symmetric partial sums at $x$ exists.
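
To see why the symmetric sums are natural here, consider for instance the coefficients $c_0 = 0$ and $c_n = \frac{1}{in}$ for $n \neq 0$ (a sawtooth-type example): pairing $n$ with $-n$ gives
$$\sum_{n=-N}^{N} c_n e^{inx} = \sum_{n=1}^{N} \frac{e^{inx} - e^{-inx}}{in} = \sum_{n=1}^{N} \frac{2\sin nx}{n},$$
a real trigonometric sum, which converges for every $x$ by Dirichlet's test.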

The convergence of a doubly infinite series as defined above evidently implies the convergence of the sequence of symmetric partial sums, but the symmetric partial sums can converge even when the doubly infinite series doesn't; in that case the limit of the symmetric partial sums is (often) called the principal value of the divergent doubly infinite series. This is all analogous to the situation for improper Riemann integrals.
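
A standard example: take $a_0 = 0$ and $a_i = 1/i$ for $i \neq 0$. Neither one-sided series converges (both are harmonic up to sign), so the doubly infinite series diverges, yet
$$\sum_{i=-N}^{N} a_i = 0 \quad\text{for every } N,$$
so the principal value is $0$. Compare $\operatorname{p.v.}\int_{-\infty}^{\infty} \frac{x}{1+x^2}\,dx = 0$, while the corresponding improper integral diverges.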


If all terms are in $\mathbb R$, then we can proceed as follows. Let \begin{align} I & = \{i : a_i\ge0\}, \\ J & = \{i : a_i < 0\}. \end{align} Then let \begin{align} \sum_{i\in I} a_i & = \sup\left\{ \sum_{i\in I_0} a_i : I_0 \text{ is a finite subset of }I \right\} \tag 3 \\ \sum_{i\in J} a_i & = -\sup\left\{ \sum_{i\in J_0} -a_i : J_0 \text{ is a finite subset of }J \right\} \tag 4 \end{align} and finally $$ \sum_{i=-\infty}^\infty a_i = \sum_{i\in I} a_i + \sum_{i\in J} a_i. \tag 5 $$ This defines the sum except when both $(3)$ and $(4)$ are infinite, i.e. when $(3) = +\infty$ and $(4) = -\infty$. The series converges absolutely if both are finite.
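
For a concrete check of $(3)$–$(5)$, take $a_i = (-1)^i 2^{-|i|}$. Then $I$ is the set of even indices and $J$ the set of odd indices, and summing the two geometric families gives
$$\sum_{i\in I} a_i = 1 + 2\sum_{k=1}^{\infty} 4^{-k} = \frac53, \qquad \sum_{i\in J} a_i = -2\sum_{k=0}^{\infty} 2^{-(2k+1)} = -\frac43,$$
so $(5)$ gives $\sum_{i=-\infty}^{\infty} a_i = \frac13$; both sups are finite, so the series converges absolutely.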

One can write $$ \sum_{i=-\infty}^\infty a_i = \lim_{n\to\infty} \sum_{i=-n}^n a_i \tag 6 $$ and in some cases that limit exists even when $(3)$ and $(4)$ are both infinite. But in that case "rearrangements" of the sum, such as $$ \lim_{n\to\infty} \sum_{i=-n}^{2n} a_i, $$ sometimes have values differing from that of $(6)$. However, $(6)$ agrees with $(5)$ in all cases where at least one of $(3)$ and $(4)$ is finite.
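
For instance, with $a_0 = 0$ and $a_i = 1/i$ for $i \neq 0$, both $(3)$ and $(4)$ are infinite; the symmetric limit $(6)$ equals $0$, since the terms cancel in pairs, but
$$\lim_{n\to\infty} \sum_{i=-n}^{2n} a_i = \lim_{n\to\infty} \sum_{i=n+1}^{2n} \frac1i = \ln 2.$$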

If $a_i\in\mathbb C$, we can apply all of the above to the real and imaginary parts of each term separately, obtaining the real and imaginary parts of the entire sum.
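
For example, with $a_n = \frac{e^{in}}{2^{|n|}}$ the real and imaginary parts of the terms are $\frac{\cos n}{2^{|n|}}$ and $\frac{\sin n}{2^{|n|}}$, and both real series converge absolutely; pairing $n$ with $-n$ then shows
$$\sum_{n=-\infty}^{\infty} \frac{e^{in}}{2^{|n|}} = 1 + 2\sum_{n=1}^{\infty} \frac{\cos n}{2^{n}},$$
the imaginary parts cancelling in pairs.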