How much larger is the powerset of a transfinite set?

For infinite sets, $A$ having larger cardinality than $B$ already implies that $A$ is "much larger," and perhaps a good way to see this is to think of all the ways of making $B$ "larger" which don't increase its cardinality. E.g. taking the union with any set of equal or smaller cardinality, taking the Cartesian product with a set of equal or smaller cardinality, etc.
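For instance, the Cartesian product $\mathbb{N} \times \mathbb{N}$ is no larger than $\mathbb{N}$ itself, as witnessed by Cantor's pairing function. A quick computational sketch (the finite sample grid is just for illustration):

```python
def cantor_pair(a, b):
    """Cantor's pairing function, a bijection from N x N onto N."""
    return (a + b) * (a + b + 1) // 2 + b

# Check injectivity on a finite grid (the map is in fact onto N as well).
grid = [(a, b) for a in range(50) for b in range(50)]
codes = {cantor_pair(a, b) for a, b in grid}
assert len(codes) == len(grid)
print(cantor_pair(2, 3))  # 18
```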

Perhaps you want to know if there are a lot of other cardinalities in between those of $B$ and $2^B$. This question can't be answered with the usual axioms of set theory. For more information, read about the generalized continuum hypothesis.

Edit: In your extended question, you mention that the usual argument only guarantees that the power set $2^A$ has "one more" element than the original set $A$: given a map $\phi : A \to 2^A$, there is at least one element not in the image of $\phi$. Call that element $x$. Certainly there is a bijection between $A$ and $A \cup \{x\}$, so repeating your argument shows there isn't a bijection between $A \cup \{x\}$ and $2^A$; at least one element is missed. So in fact $2^A$ has "two more" elements than $A$. Repeating this, one sees there are infinitely many elements missed. Moreover, by showing that $A \times A$ cannot be mapped onto $2^A$, there are at least "$|A|$ more" elements in $2^A$, and so on.
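The construction behind this is concrete enough to run. Here is a small Python sketch on a three-element set, with an arbitrary illustrative map `phi`; the "Russell set" of elements not belonging to their own image is always missed:

```python
A = {0, 1, 2}
# An arbitrary map phi : A -> P(A); any choice whatsoever works here.
phi = {0: frozenset({0, 1}), 1: frozenset(), 2: frozenset({2})}

# Cantor's construction: collect the elements not belonging to their image.
missed = frozenset(a for a in A if a not in phi[a])

# missed differs from each phi(a) at a itself, so it is not in the image.
assert all(missed != phi[a] for a in A)
print(missed)  # frozenset({1})
```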

This is what I was trying to get at in my first paragraph. Of course, it has to be understood in the context that adding one element to an infinite set doesn't actually make it any larger in the sense of cardinality. To my mind, cardinality is a very crude measure of the "size" of an infinite set, and so any operation that enlarges it to an extent that it can be detected by cardinality must be a very dramatic enlargement indeed.


As you probably know, $\aleph_1$ is (by definition) the first uncountable cardinality, and Cantor's continuum hypothesis is that $2^{\aleph_0} = \aleph_1$. For any ordinal $\alpha$, $\aleph_\alpha$ is defined by recursion ($\aleph_{\alpha + 1}$ is the first cardinality greater than $\aleph_\alpha$, and if $\alpha$ is a limit ordinal, then $\aleph_\alpha$ is the supremum of $\aleph_\beta$ taken over all $\beta < \alpha$). The generalized continuum hypothesis (GCH) is that $2^{\aleph_\alpha} = \aleph_{\alpha + 1}$ for every ordinal $\alpha$.

Under the standard ZFC axioms, there is a lot of leeway in what $2^{\aleph_\alpha}$ might be: assuming that ZFC is consistent, it is consistent to assume GCH holds, and it is also consistent to assume it fails dramatically, as Nate has already answered. (For just how much leeway one has, see Easton's theorem.)

As to just how vastly larger $2^{\aleph_0}$ is than $\aleph_0$, one can't be too definitive within the confines of ZFC. But it may be possible to give a glimmer, even under the most conservative assumption (the continuum hypothesis), that $\aleph_1$ is in some sense far, far larger than $\aleph_0$. Nate has already given some suggestions; an alternative suggestion is to take your students on a tour through the land of countable ordinals, and get a feel for how indescribably large they can get (as ordinals). Then $\aleph_1$ is bigger than all of them: it can be defined as the supremum of all ordinals of countable cardinality, hence is greater than any of them.

For a gentle introduction to sizes of countable ordinals, you might try John Baez's This Week's Finds, Number 236. My own favorite description involves consideration of fixed points of certain order-preserving maps on $\aleph_1$, as touched upon in this article by Hilbert Levitz. Your students might enjoy trying to outdo each other in describing really whoppingly large countable ordinals, and using these to describe really super-fast growing functions on the integers, as described for example here. It's loads of fun.
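As a taste of those super-fast growing functions, here is a sketch of the first few levels of a fast-growing hierarchy, indexed here only by finite ordinals (the name `f` and the exact iteration scheme are one common convention among several):

```python
def f(k, n):
    """Level k of a fast-growing hierarchy on the integers:
    f(0, n) = n + 1, and f(k + 1, n) applies f(k, .) to n, n times."""
    if k == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = f(k - 1, result)
    return result

# f(1, n) = 2n and f(2, n) = n * 2**n; f(3, n) already outgrows
# towers of exponentials, and each level dwarfs the one before.
print(f(1, 5), f(2, 5))  # 10 160
```

Pushing the index past the finite levels, by taking diagonal limits at limit ordinals, is what produces the really whoppingly fast-growing functions on the integers.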


Edit: I had meant to mention this but then forgot. Another fun thing for your students to think about with regard to the continuum hypothesis is Freiling's argument that the continuum is larger than $\aleph_1$. It's just an intuitive argument, really, but some mathematicians (notably David Mumford) seem to take it seriously.


Edit 2: I've just seen P. Brooks' most recent edit, and maybe I can help. Explain to your students that there is an explicit bijection between subsets of $\mathbb{N}$ and binary sequences, as per Jason's answer. Then explain that there is also an explicit bijection between binary sequences and hexadecimal sequences. (You see it, right? Group the bits four at a time.) So if there existed a bijection between $\mathbb{N}$ and $P(\mathbb{N})$, there would exist a bijection between $\mathbb{N}$ and hexadecimal sequences. Then the same proof, that the existence of a bijection between $\mathbb{N}$ and decimal expansions leads to a contradiction, can be adapted with "hexadecimal" in place of "decimal". Thus, there are clearly "vastly more" hexadecimal expansions than there are natural numbers.
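On finite prefixes, the bit-to-hexadecimal correspondence can be made completely explicit; the same blockwise rule, applied position by position, gives the bijection on infinite sequences. A sketch (the subset `S` is just an example):

```python
def subset_to_bits(S, n):
    """The characteristic sequence of S as a subset of {0, ..., n-1}."""
    return [1 if i in S else 0 for i in range(n)]

def bits_to_hex(bits):
    """Group the bits four at a time; each block of four bits is one
    hexadecimal digit, giving a bijection block by block."""
    assert len(bits) % 4 == 0
    digits = '0123456789abcdef'
    return ''.join(digits[8 * bits[i] + 4 * bits[i + 1]
                          + 2 * bits[i + 2] + bits[i + 3]]
                   for i in range(0, len(bits), 4))

S = {0, 2, 5, 7}
bits = subset_to_bits(S, 8)
print(bits, bits_to_hex(bits))  # [1, 0, 1, 0, 0, 1, 0, 1] a5
```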

One thing to point out here is that the proof "based on Russell's paradox" really is a diagonalization argument in disguise! (Historically, I think what happened is that Russell adapted the idea behind Cantor's diagonalization to devise his paradox, showing the inconsistency of Frege's system.) Given a putative bijection $\phi$ from $\mathbb{N}$ to the set of binary sequences $2^{\mathbb{N}}$, one shows that the sequence $x_n = 1 - \phi(n)(n)$, obtained from the diagonal $\phi(n)(n)$ by changing all 0's to 1's and all 1's to 0's, is not in the image of $\phi$. That's the diagonalization argument. But it's morally the same as the Russell-paradox-style argument where a putative bijection $\psi: \mathbb{N} \to P(\mathbb{N})$ is considered and one forms $\{n: n \notin \psi(n)\}$; the role of the negation (changing $\in$ to $\notin$) mirrors exactly the role played by changing 0's to 1's and 1's to 0's (thinking of 0's and 1's as the truth values "false" and "true").
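One can watch the two arguments coincide on a finite toy listing. Below, `phi[n][m] = 1` means $m$ belongs to the $n$-th listed set; the entries are arbitrary illustrative values:

```python
# A toy "listing" of four binary sequences of length four.
phi = [
    [0, 1, 1, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
]
n = len(phi)

# Cantor: flip the diagonal entries.
diagonal_flip = [1 - phi[k][k] for k in range(n)]

# Russell: the characteristic sequence of {k : k is not in psi(k)}.
russell = [1 if phi[k][k] == 0 else 0 for k in range(n)]

assert diagonal_flip == russell                  # the constructions coincide
assert all(diagonal_flip != row for row in phi)  # and the result is unlisted
print(diagonal_flip)  # [1, 0, 1, 0]
```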


After being rather confused for a while at your edit, I think I finally see the issue. Your point (I hope?) is that when diagonalizing against decimal expansions of real numbers, you have the flexibility to choose each digit. That is, in ensuring that the $n$th digit of your "unlisted" decimal expansion differs from the $n$th digit of the $n$th element of the list, you can choose any one of nine digits. However, in what you refer to as the Russell paradox argument, your hands are tied: you aren't making any choices, and there's just a single set it spits back at you. This makes it feel like the argument isn't giving you as much information about the number of unlisted elements of the power set.

As indicated by the responses, this intuition is not accurate. In fact, the arguments are essentially identical, but it might take a bit of familiarity with cardinal arithmetic to see it. The simplest way I can see to imbue the general powerset argument with a similar amount of flexibility is through some basic cardinal arithmetic. For each infinite aleph $\kappa$ it's not too hard to show that the disjoint union of two copies of $\kappa$ can be put into bijection with $\kappa$. Once you do this, you can iterate to get that the disjoint union of 10 copies of $\kappa$ (which I'll denote by $10\kappa$ for now) is again equinumerous with $\kappa$. From here you can argue that $\mathcal{P}(10\kappa)$ is the same size as $\mathcal{P}(\kappa)$, and thus enumerating $\mathcal{P}(\kappa)$ by $\kappa$ is possible if and only if enumerating $\mathcal{P}(10\kappa)$ by $\kappa$ is possible.
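For $\kappa = \aleph_0$ the bijection between the disjoint union of ten copies of $\kappa$ and $\kappa$ itself is completely explicit: send element $x$ of copy $c$ to $10x + c$. A sketch:

```python
def to_n(copy, x):
    """Bijection from ten disjoint copies of N onto N:
    element x of copy c (0 <= c <= 9) goes to 10*x + c."""
    assert 0 <= copy < 10
    return 10 * x + copy

def from_n(n):
    """Inverse: read the copy off the last decimal digit."""
    return (n % 10, n // 10)

# Round-trip check on a sample.
assert from_n(to_n(7, 42)) == (7, 42)
assert all(from_n(to_n(c, x)) == (c, x)
           for c in range(10) for x in range(100))
print(to_n(7, 42))  # 427
```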

This gives you flexibility. You are now starting with a list of length $\kappa$ of subsets of $10\kappa$. For my convenience, I'll write it as a function $\phi: \kappa \to \mathcal{P}(10\kappa)$. For each $x \in \kappa$ you now have ten choices. You can put $x \in B$ iff the copy of $x$ in the first copy of $\kappa$ inside $10\kappa$ is not in $\phi(x)$. Or you might have a change of heart and put some other $y \in B$ iff the copy of $y$ in the seventh copy of $\kappa$ is not in $\phi(y)$. The upshot is that you can now build a huge family of possible sets $B$, one for each choice of a copy of $\kappa$ for each $x$, and the same Russellian argument shows that any such $B$ is unlisted. This grants you the same flexibility as the diagonal argument (in fact a little more, by the celebrated theorem "10 > 9", also known as my inability to plan ahead).
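Here is a finite toy model of this family, with an assumed listing `phi` of subsets of $10\kappa$, elements written as pairs `(copy, x)`; all the values are illustrative. Each choice function picks a copy for each $x$, and every resulting $B$ is unlisted:

```python
from itertools import product

K = range(4)        # a finite stand-in for kappa
COPIES = range(10)  # "10 kappa" = pairs (copy, x)

# A toy listing phi : K -> P(10K).
phi = {
    0: {(0, 0), (3, 1)},
    1: {(1, 1)},
    2: {(2, 0), (2, 2), (9, 3)},
    3: set(),
}

def build_B(choice):
    """For each x, look only at x's chosen copy: put (choice[x], x) in B
    iff it is absent from phi(x).  B then differs from phi(x) at the pair
    (choice[x], x), so B is never listed."""
    return frozenset((choice[x], x) for x in K
                     if (choice[x], x) not in phi[x])

family = set()
for ch in product(COPIES, repeat=len(K)):
    B = build_B(dict(zip(K, ch)))
    assert all(B != phi[x] for x in K)  # unlisted, whatever the choices
    family.add(B)
print(len(family))  # one distinct unlisted set per choice function here
```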

This argument can of course be distilled down to a couple lines by working in an appropriate sequence space, or performing more opaque cardinal arithmetic and so on, but I think it might be the most compelling to somebody at a high school level. I think it also lends itself well to drawing on the blackboard. Of course, there might be a yet simpler way of incorporating this sort of flexibility into the argument.

Of course, this should be taken in tandem with the advice of the other answers, which essentially explain why this flexibility doesn't matter, and the "vastness" between a set and its power set is not something one can easily formulate precisely (for good reason!).

I'll cross my fingers I didn't completely misunderstand your question!

Tags:

Set Theory