Proofs of the uncountability of the reals

Mathematics isn't yet ready to prove results of the form, "Every proof of Theorem T must use Argument A." Think closely about how you might try to prove something like that. You would need to set up some plausible system for mathematics in which Cantor's diagonal argument is blocked and the reals are countable. Nobody has any idea how to do that.

The best you can hope for is to look at each proof on a case-by-case basis and decide, subjectively, whether it is "essentially the diagonal argument in disguise." If you're lucky, you'll run into one that your intuition tells you is a fundamentally different proof, and that will settle the question to your satisfaction. But if that doesn't happen, then the most you'll be able to say is that every known proof seems to you to be the same. As explained above, you won't be able to conclude definitively that every possible argument must use diagonalization.

ADDENDUM (August 2020). Normann and Sanders have a very interesting paper that sheds new light on the uncountability of $\mathbb R$. In particular they study two specific formulations of the uncountability of $\mathbb R$:

$\mathsf{NIN}$: For any $Y:[0,1] \to \mathbb{N}$, there exist $x,y \in [0,1]$ such that $x\ne_{\mathbb{R}} y$ and $Y(x) =_{\mathbb{N}} Y(y)$.

$\mathsf{NBI}$: For any $Y:[0,1] \to \mathbb{N}$, either there exist $x,y \in [0,1]$ such that $x\ne_{\mathbb{R}} y$ and $Y(x) =_{\mathbb{N}} Y(y)$, or there exists $N\in\mathbb{N}$ such that $(\forall x\in [0,1])(Y(x) \ne N)$.
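
To make the content of $\mathsf{NIN}$ concrete, here is a toy Python sketch (my own illustration, not from the Normann–Sanders paper). It assumes a particular, hypothetical map $Y$ that only depends on the first two decimal digits of its input, so $Y$ takes at most 101 values on $[0,1]$ and a collision $Y(x)=Y(y)$ with $x\ne y$ is forced by the pigeonhole principle. This only illustrates what $\mathsf{NIN}$ asserts; it is not a proof, since a general $Y$ need not depend on finitely many digits or be computable at all.

```python
from fractions import Fraction

# A hypothetical map Y : [0,1] -> N that only looks at the first two
# decimal digits of its argument.  Such a Y takes at most 101 values
# on [0,1], so a collision is forced by the pigeonhole principle.
def Y(x: Fraction) -> int:
    return int(x * 100)  # floor of 100*x for x >= 0

# Sample more points than Y has possible values: Y takes values in
# {0, 1, ..., 100}, so 102 distinct points in [0,1] suffice.
points = [Fraction(k, 102) for k in range(102)]

seen = {}
for x in points:
    v = Y(x)
    if v in seen:
        print(f"collision: Y({seen[v]}) = Y({x}) = {v}")
        break
    seen[v] = x
```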

One of their results is that a system called ${\mathsf Z}_2^\omega$ does not prove $\mathsf{NIN}$. Their model of $\neg\mathsf{NIN}$ can therefore be interpreted as a situation where the reals are countable! Nevertheless we are still far from showing that Cantor's diagonal argument is needed to prove that the reals are uncountable. A further caveat is that Normann and Sanders argue that the unprovability of $\mathsf{NIN}$ in ${\mathsf Z}_2^\omega$—which might at first sight suggest that $\mathsf{NIN}$ is a strong axiom—is an artificial result, and that the proper framework for studying $\mathsf{NIN}$ and $\mathsf{NBI}$ is what they call a “non-normal scale,” in which $\mathsf{NIN}$ and $\mathsf{NBI}$ are very weak. In particular their paper gives lots of examples of statements that imply $\mathsf{NIN}$ and $\mathsf{NBI}$. I suspect, though, that you'll probably feel that the proofs of those other statements smuggle in Cantor's diagonal argument one way or another.


Mathematical logicians often joke that the diagonal method is the only proof method that we have in logic. This method is the principal idea behind a huge number of fundamental results, among them:

  • The uncountability of the reals.

  • More generally, the fact that the power set $P(X)$ of a set $X$ is strictly larger in cardinality than $X$.

  • The Russell paradox.

  • The undecidability of the halting problem.

  • The Recursion theorem.

  • More generally, huge parts of computability theory are based on diagonalization, such as uses of the priority method.

  • The fixed-point lemma and its use in proving the Incompleteness theorem.

  • The strictness of the arithmetic hierarchy, the projective hierarchy, etc.

  • Etc. etc. etc.
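
All of these results use the same basic move. As a minimal sketch of that move (my own illustration; the function name `diagonalize` and the sample data are just for the example), here is the diagonal step applied to a finite list of finite 0/1 prefixes: build an object that differs from the $k$-th listed object in the $k$-th place.

```python
# The diagonal pattern: given the first n terms of a purported enumeration
# of infinite 0/1 sequences (represented by their first n digits), build a
# sequence that differs from the k-th sequence in its k-th digit.
def diagonalize(listed):
    """listed[k] is (a finite prefix of) the k-th sequence in the enumeration."""
    return [1 - listed[k][k] for k in range(len(listed))]

# Illustrative example: four alleged entries, truncated to 4 digits each.
listed = [
    [0, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 1],
    [1, 0, 0, 1],
]
d = diagonalize(listed)
print(d)  # [1, 0, 1, 0] -- differs from listed[k] at position k, for every k
```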


What about using Lebesgue outer measure? The interval $[0,1]$ has Lebesgue outer measure 1, while countable sets have Lebesgue outer measure $0$.

For the purposes of the proof, I define the Lebesgue outer measure $\mathcal{L}(E)$ of a set $E\subset\mathbb{R}$ as the infimum of the sums $\sum_i (b_i-a_i)$, where $E\subset \bigcup_i (a_i,b_i)$ (i.e., the infimum is over all countable coverings of $E$ by open intervals).

It is a direct consequence of the definition that any countable set has Lebesgue outer measure 0. This can even be proved in the spirit of Gowers' first suggestion: suppose that $f:\mathbb{Q}\cap (0,1)\to A$ is a bijection. Then, given $\varepsilon>0$, the family $$\{ ( f(p/q)-\varepsilon/q^3, f(p/q)+\varepsilon/q^3): p/q\in \mathbb{Q}\cap(0,1),\ \text{g.c.d.}(p,q)=1\}$$ is a cover of $A$ by open intervals whose lengths sum to $O(\varepsilon)$.
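
As a quick sanity check of the $O(\varepsilon)$ bound (again just an illustration, not part of the argument): each interval in the family has length $2\varepsilon/q^3$, and for each denominator $q$ there are at most $q$ reduced fractions $p/q$ in $(0,1)$, so the total length is at most $2\varepsilon\sum_q 1/q^2 \le 2\varepsilon\,\pi^2/6$. The following Python snippet sums the lengths up to a cutoff denominator and compares with that bound.

```python
from math import gcd, pi

# Total length of the cover {(f(p/q) - eps/q**3, f(p/q) + eps/q**3)}:
# each interval has length 2*eps/q**3, and for each denominator q there
# are at most q reduced fractions p/q in (0, 1), so the total is at most
# 2*eps*sum(1/q**2) <= 2*eps*pi**2/6, i.e. O(eps).
eps = 0.01
q_max = 2000  # truncate the countable family at denominator q_max

total = sum(2 * eps / q**3
            for q in range(2, q_max + 1)
            for p in range(1, q)
            if gcd(p, q) == 1)

print(f"total length up to q = {q_max}: {total:.6f}")
print(f"upper bound 2*eps*pi^2/6:      {2 * eps * pi**2 / 6:.6f}")
```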

To prove that $\mathcal{L}([0,1])=1$, the key claim is the following: if $\{ (a_i,b_i)\}$ is a finite cover of the interval $[c,d]$ with no proper subcover, then $\sum_i (b_i-a_i) > d-c$.

The claim is proved by induction on the number of intervals in the cover. It is clearly true if the cover consists of a single interval. Now if $[c,d] \subset \bigcup_{i=1}^n (a_i,b_i)$ with $n>1$, then $[c,d]\setminus (a_1,b_1)$ is either a closed interval $I$ or the union $I\cup I'$ of two disjoint closed intervals. In the first case $\bigcup_{i=2}^n (a_i,b_i)$ covers $I$ and we apply the inductive hypothesis to it. Otherwise, $\{(a_i,b_i)\}_{i=2}^n$ can be split into two disjoint subfamilies, one covering $I$ and the other covering $I'$, and we apply the inductive hypothesis to each. (The assumption that the original cover has no proper subcover is what guarantees that the covers of $I$ and $I'$ can be taken disjoint.)
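
To see the inequality in action, here is a small Python sketch of a closely related argument (a greedy "chain" version rather than the induction above; the function name and the sample cover are just for illustration): starting at $c$, repeatedly pick an interval containing the current point and jump to its right endpoint. Consecutive intervals in the chain overlap, so a telescoping estimate shows that their lengths already sum to more than $d-c$.

```python
def chain_through(cover, c, d):
    """Greedily extract a sub-cover of [c, d] forming a chain: start at c and,
    at each step, pick the interval containing the current point that reaches
    furthest to the right.  Consecutive intervals in the chain overlap, so the
    telescoping sum of their lengths exceeds d - c."""
    chain, point = [], c
    while point <= d:
        containing = [(a, b) for (a, b) in cover if a < point < b]
        if not containing:
            raise ValueError("not a cover of [c, d]")
        a, b = max(containing, key=lambda ab: ab[1])
        chain.append((a, b))
        point = b
    return chain

# Illustrative example: a finite cover of [0, 1] by open intervals.
cover = [(-0.1, 0.3), (0.2, 0.6), (0.25, 0.5), (0.5, 0.8), (0.7, 1.1)]
chain = chain_through(cover, 0.0, 1.0)
total = sum(b - a for a, b in chain)
print(chain)        # [(-0.1, 0.3), (0.2, 0.6), (0.5, 0.8), (0.7, 1.1)]
print(total > 1.0)  # True: the chain's total length already exceeds 1
```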

Now the claim and the compactness of $[0,1]$ (i.e., Heine–Borel) yield that $\mathcal{L}([0,1])\ge 1$: any countable cover of $[0,1]$ by open intervals has a finite subcover, which can be thinned to a subcover with no proper subcover, and by the claim the lengths of that subcover already sum to more than $1$.

Hence, $[0,1]$ is uncountable and so is $\mathbb{R}$.