Prove: The positive integers cannot be partitioned into finitely many arithmetic progressions with distinct common differences (using Complex Analysis)

Suppose you could partition the positive integers into finitely many disjoint arithmetic progressions $$a_1, a_1 + d_1, a_1 + 2d_1,\ldots$$ $$a_2, a_2 + d_2, a_2 + 2d_2,\ldots$$ $$\ldots$$ $$a_n, a_n + d_n, a_n + 2d_n,\ldots$$

Then the power series $\sum_{k\ge 1} z^k$ can be rewritten as $$ \frac{z}{1-z} = \sum_{k=1}^\infty z^k = \sum_{j=1}^n\sum_{k=0}^\infty z^{a_j+kd_j} = \sum_{j=1}^n z^{a_j}\frac{1}{1-z^{d_j}},$$ which is the form you want it in. All of these series converge absolutely and locally uniformly in the unit disk, which justifies the rearrangement. Thus you can conclude that $$\frac{z}{1-z} = \sum_{j=1}^n\frac{z^{a_j}}{1 - z^{d_j}}.$$ The only pole of the left-hand side is $z = 1$. If $J$ is the unique index such that $d_J$ is maximal, then the $j = J$ term on the right-hand side has poles at the $d_J$th roots of unity, and no other term contributes a pole at a primitive $d_J$th root of unity $\zeta$: for $j\neq J$ we have $d_j<d_J$, so $\zeta^{d_j}\neq 1$ and that term stays bounded near $\zeta$. This is a contradiction if $d_J\neq 1$, that is, if you haven't partitioned the positive integers with the trivial partition into a single progression of difference $1$.
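As a sanity check on the identity (the concrete partition below is my own illustration, not part of the original argument), take the genuine partition of the positive integers into the odd numbers ($d=2$), the numbers $\equiv 2 \pmod 4$, and the multiples of $4$ (both with $d=4$). Then
$$\frac{z}{1-z^2}+\frac{z^2}{1-z^4}+\frac{z^4}{1-z^4}=\frac{z}{1-z^2}+\frac{z^2(1+z^2)}{(1-z^2)(1+z^2)}=\frac{z+z^2}{1-z^2}=\frac{z}{1-z},$$
and the would-be poles at the primitive $4$th roots of unity $\pm i$ cancel between the two $d=4$ terms, precisely because the maximal difference is attained more than once.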


This is a pretty problem. I asked a question about it on MathOverflow a while back. Here is the solution using complex analysis, copied from the link above, with some added details. (The additions are in square brackets; ignore those paragraphs if you want to work out the details on your own.)

Assign to each progression $A_i=(a_i+kb_i\mid k\in{\mathbb N})$, $1\le i\le n$, its generating series, $f_i(x)=\sum_{k=0}^\infty x^{a_i+kb_i}$. Then $f_i(x)=x^{a_i}/(1-x^{b_i})$. Note the series converges for $|x|<1$.

[ To see this: $\sum_{k=0}^\infty x^{a_i+kb_i}=x^{a_i}\sum_{k=0}^\infty (x^{b_i})^k$, and use that if $|t|<1$, the geometric series $\sum_{k=0}^\infty t^k$ equals $1/(1-t)$. ]
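As a quick numerical sanity check (this snippet is my own illustration; the particular values $a_i=2$, $b_i=3$, $x=0.7$ are arbitrary choices, not from the answer), the truncated series agrees with the closed form inside the unit disk:

```python
# Sanity check: truncated generating series vs. closed form x^a / (1 - x^b).
# The progression a=2, b=3 (i.e. 2, 5, 8, ...) and the point x=0.7 are
# arbitrary illustrative choices.
a, b, x = 2, 3, 0.7

partial = sum(x**(a + k * b) for k in range(200))  # first 200 terms of the series
closed = x**a / (1 - x**b)                         # geometric-series closed form

print(partial, closed, abs(partial - closed))      # difference is negligible
```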

Now, since the $A_i$ partition the positive integers, we have $\sum_{i=1}^n f_i(x)=\sum_{m=1}^\infty x^m=x/(1-x)$.

[ For this, observe that if $m\in A_i$, then $x^m$ is one of the terms being added in $f_i(x)$, and $x^m$ is not a term in any of the other series $f_j$ for $j\ne i$. ]
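For instance (a concrete case added for illustration, not part of the original answer), for the partition of the positive integers into odds and evens, $A_1=(1+2k)$ and $A_2=(2+2k)$,
$$f_1(x)+f_2(x)=\frac{x}{1-x^2}+\frac{x^2}{1-x^2}=\frac{x(1+x)}{(1-x)(1+x)}=\frac{x}{1-x},$$
as claimed.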

If all the $b_i$ are different, let $b$ be the largest, and fix a primitive $b$-th root of unity $\zeta$. Now let $x\to\zeta$ to reach a contradiction.

[ In detail, $\zeta=e^{2\pi i/b}$ is a primitive $b$-th root of unity ($i$ the square root of $-1$). This means that $\zeta^b=1$; that, for an integer $k$, $\zeta^k=1$ iff $k$ is a multiple of $b$; and that if $z^b=1$, then $z=\zeta^\ell$ for some integer $\ell$. You then have that $$\frac{x^{a_1}}{1-x^{b_1}}+\dots+\frac{x^{a_n}}{1-x^{b_n}}=f_1(x)+f_2(x)+\dots+f_n(x)=\sum x^m=\frac x{1-x}. $$ Call this equation $(*)$. When $x\to\zeta$ (from inside the unit disk, say radially), the right hand side of $(*)$ tends to the fraction $\displaystyle \frac{\zeta}{1-\zeta}$. Note here that $\zeta\ne 1$ because we are assuming we have at least two arithmetic progressions, so $b>1$. On the other hand, if $b_j\ne b$, then $$ \frac{x^{a_j}}{1-x^{b_j}}\to\frac{\zeta^{a_j}}{1-\zeta^{b_j}}, $$ and $\zeta^{b_j}\ne 1$ because $0<b_j<b$. But if $b_j=b$, then $\displaystyle \frac{x^{a_j}}{1-x^{b_j}}\to\infty$ because $\zeta^{b_j}=\zeta^b=1$ while the numerator tends to $\zeta^{a_j}\ne 0$. The contradiction is that the left hand side of $(*)$ diverges to $\infty$ in modulus (one term blows up and the remaining terms stay bounded), while the right hand side converges to a finite value. ]

This shows that the largest of the common differences must appear at least twice.
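A small numerical sketch of my own (the partition into $1+2k$, $2+4k$, $4+4k$ and the radial approach to $\zeta=i$ are illustrative choices, not from the answer) shows the cancellation that a repeated largest difference makes possible: each term with the largest difference $b=4$ blows up individually as $x\to i$, but their sum stays bounded and the identity with $x/(1-x)$ persists.

```python
import cmath

# Illustrative partition of the positive integers:
#   1,3,5,... (a=1,b=2),  2,6,10,... (a=2,b=4),  4,8,12,... (a=4,b=4).
# Its largest common difference, 4, appears twice, as the theorem requires.
progressions = [(1, 2), (2, 4), (4, 4)]

def f(a, b, x):
    """Closed form x^a / (1 - x^b) of the progression's generating series, |x| < 1."""
    return x**a / (1 - x**b)

zeta = 1j  # a primitive 4th root of unity

for r in (0.9, 0.99, 0.999):
    x = r * zeta                            # approach zeta radially from inside the disk
    terms = [f(a, b, x) for a, b in progressions]
    blowups = [abs(t) for t in terms[1:]]   # the two b=4 terms grow without bound...
    error = abs(sum(terms) - x / (1 - x))   # ...yet the sum still equals x/(1-x)
    print(r, blowups, error)
```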

As Gerry Myerson pointed out on MO, this argument is due to D J Newman. It appears in his book *A Problem Seminar*, problem 90, on page 18, with the solution on page 100.


Given an arithmetic progression $a,a+d,a+2d,\dots$, you write down its generating function, $z^a+z^{a+d}+z^{a+2d}+\dots$, which you recognize as a geometric series, so you sum the geometric series and get the rational function $z^a/(1-z^d)$, whose poles are the $d$-th roots of unity.

The generating function for the full set of positive integers is $z+z^2+z^3+\dots$, which you recognize as a geometric series, etc., etc.

Partitioning the positive integers into arithmetic progressions is writing the generating function for the positive integers as the sum of generating functions for the arithmetic progressions. The desired conclusion comes from thinking about the poles on each side of the equation.
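Spelling out the pole bookkeeping that this sketch alludes to (an added gloss, not part of the original answer):
$$\frac{z^{a}}{1-z^{d}}\ \text{has simple poles exactly at the $d$th roots of unity},\qquad \frac{z}{1-z}\ \text{has its only pole at } z=1,$$
so the pole that a progression with the largest difference $d$ would contribute at a primitive $d$th root of unity must be cancelled by another progression with that same difference.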

This proof is due to D J Newman.