Cauchy reals and Dedekind reals satisfy "the same mathematical theorems"

Here is a way to do this in ZFC. Similar ideas work in a bunch of other contexts.

First, given any set $A$ in the universe of sets $V$, we can form the set-theoretic universe $V(A)$ by mimicking the cumulative hierarchy, where the elements of $A$ are treated as atoms. Start with $V_0(A) = A$; at successors, $V_{\alpha+1}(A) = V_\alpha(A) \cup \mathcal{P}(V_\alpha(A))$; at limits, $V_\delta(A) = \bigcup_{\alpha<\delta} V_\alpha(A)$. (Some care must be taken to distinguish the atoms: $A$ will eventually appear in the pure part of $V(A)$, and we don't want to confound this pure $A$ with the set of atoms $A$. Fortunately, it is well understood how to do this formally, and since these details are irrelevant here, I will not mention them further.)
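For readers who like to see the bookkeeping, here is a toy Python sketch (finite stages only; the `Atom` wrapper class and function names are mine, standing in for a genuine treatment of urelements) of the first few levels of this hierarchy over a two-element set of atoms.

```python
from itertools import combinations


class Atom:
    """An opaque urelement: all we can ask of two atoms is whether they are equal."""
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return f"Atom({self.name!r})"


def powerset(xs):
    """All subsets of the finite collection xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]


def stage(atoms, n):
    """V_n(A) for finite n: start from the atoms, then keep adjoining all subsets."""
    v = set(atoms)                      # V_0(A) = A
    for _ in range(n):
        v = v | set(powerset(v))        # V_{k+1}(A) = V_k(A) ∪ P(V_k(A))
    return v


A = [Atom("a"), Atom("b")]
print(len(stage(A, 0)))  # 2: just the two atoms
print(len(stage(A, 1)))  # 6: the atoms plus the four subsets of A
```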

If $A$ has additional structure, say it's a complete ordered field, then that structure appears quickly in the hierarchy, since we add all possible subsets at each step. Therefore $A$ has all the same ordered field structure it originally had in $V$. Even completeness carries through, since the subsets of $A$ in $V(A)$ come from subsets of the original $A$ in $V$. The difference is that $A$ has no internal structure in $V(A)$, since we can't inspect the innards of atoms: all we can say about two atoms is whether or not they are equal. The main kicker is that if $A'$ is any structure isomorphic to $A$, then the isomorphism between $A'$ and $A$ lifts uniquely to an isomorphism between $V(A')$ and $V(A)$!

A normal mathematical statement about $A$ in $V$, say the Birch and Swinnerton-Dyer conjecture (BSD) when $A$ is the field of real numbers, makes perfect sense about the structure $A$ in $V(A)$. This is because BSD makes no mention at all of the innards of the elements of $A$. Furthermore, if BSD holds of the original $A$ in $V$, then it holds of the $A$ in $V(A)$, since they have identical external structure. Because $V(A')$ is isomorphic to $V(A)$, the isomorphism ensures that BSD holds of $A'$ in $V(A')$. Then, running the first step in reverse, BSD holds of the original $A'$ in $V$.
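In summary, the transfer runs along the chain
$$ \mathrm{BSD}(A) \text{ in } V \;\Longrightarrow\; \mathrm{BSD}(A) \text{ in } V(A) \;\Longrightarrow\; \mathrm{BSD}(A') \text{ in } V(A') \;\Longrightarrow\; \mathrm{BSD}(A') \text{ in } V, $$
where the middle implication uses the lifted isomorphism $V(A) \cong V(A')$, and the two outer implications use the fact that $A$ (respectively $A'$) has the same external structure in $V$ as it does in $V(A)$ (respectively $V(A')$).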

For this transfer from $A$ to $A'$, we only needed that BSD was a normal mathematical statement in the sense that it relies only on the external structure of $A$ and $A'$ and not on the innards of these structures. Whether some proof of BSD for $A$ relies heavily on the innards of $A$ is irrelevant since the statement proven makes no mention of the internal structure of $A$ and will therefore transfer to any isomorphic structure as described above.


Here's a low-tech way to look at it, which to me seems perfectly convincing.

Let C be some implementation of the reals via Cauchy sequences and D be some implementation of the reals via Dedekind cuts. Here C is "really" something like a tuple consisting of the set of reals, a relation corresponding to addition, etc.; D is a tuple with (allegedly) equivalent things implemented differently.

Let P(X) be the proposition that X is a tuple of the right size and that, when considered as an implementation of the real numbers, X satisfies the Birch-Swinnerton-Dyer conjecture. We have a proof -- perhaps a bizarre incomprehensible implementation-dependent one -- of P(C), in ZFC.

I claim that (again, in ZFC) P(C) iff P(D). Sketch of proof:

1. Up to canonical isomorphism there is only one complete ordered field.
2. C and D are complete ordered fields.
3. Therefore there is an isomorphism between C and D; in fact we can even write it down.
4. We can use this to build an isomorphism between C's complex numbers and D's complex numbers, and then between C's L-functions and D's L-functions, and C's elliptic curves and D's elliptic curves, and so on for every object required to state the BSD conjecture.
5. If we have a specific elliptic curve over D, these isomorphisms yield its equivalent over C (and vice versa); they pair up the groups of rational points in the two cases, showing that they have the same rank; they pair up the corresponding L-functions, showing that they have the same order of zero at s=1.
6. And we're done.
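As a crude illustration of how mechanical step 4 is, here is a toy Python sketch (all names are hypothetical, with plain floats and tagged floats standing in for two genuine implementations of the reals): given an isomorphism between the two implementations, it lifts that isomorphism to pairs (each side's "complex numbers") and to functions, which is the same kind of transport one would repeat for L-functions, elliptic curves, and so on.

```python
# Toy transport-of-structure sketch: lift an isomorphism phi between two
# implementations of the reals to derived objects built from them.

def lift_to_pairs(phi):
    """Lift phi : C -> D to pairs (a, b), i.e. to each side's complex numbers."""
    return lambda z: (phi(z[0]), phi(z[1]))


def lift_to_functions(phi, phi_inv):
    """Lift phi to functions: a function on C-reals becomes a function on D-reals."""
    return lambda f: (lambda d: phi(f(phi_inv(d))))


# Pretend C-reals are floats and D-reals are floats tagged with a label,
# with phi the obvious structure-preserving bijection between them.
def phi(c):
    return ("D", c)


def phi_inv(d):
    return d[1]


print(lift_to_pairs(phi)((1.0, 2.0)))         # (('D', 1.0), ('D', 2.0))

square = lambda c: c * c                       # a function on C-reals
square_D = lift_to_functions(phi, phi_inv)(square)
print(square_D(("D", 3.0)))                    # ('D', 9.0)
```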

None of this requires that these isomorphisms be applied to the proof of P(C). That proof can be as C-specific as you like. What the isomorphisms show is that the things BSD says are equal come out the same way however you implement the real numbers.

How do we know that we can actually construct this pile of isomorphisms? By thinking about what objects we need in order to state the BSD conjecture, and how we build them, and noting that nothing in the process cares about "implementation details" of the real numbers. If you're sufficiently confident of your memory, you could do this "negatively": if, when you were learning about elliptic curves and L-functions, a lecturer had said something like "and of course this is true because the number 1 is just the same thing as the set containing just the empty set", you'd have noticed and been horrified. Otherwise, you can (tediously but straightforwardly) go through the usual textbooks and check that the constructions are all "sane".

EDITED to add:

Although I stand by everything above, I can't escape the feeling that Kevin already knows all that and I'm therefore not answering quite the question he's meaning to ask. Let me put some words into Kevin's mouth:

Yes, yes, of course. Every mathematician who thinks about this stuff at all has something like that mental picture. But what really justifies the breezy confidence that that big pile of isomorphisms is really there? I understand that it feels obvious that none of the machinery you need to state something like the BSD conjecture depends on "implementation details". But this is the sort of thing mathematicians are good at getting wrong. It wasn't until the 20th century that we noticed how many extra axioms you really need to add to Euclid's system to make the proofs in the Elements rigorous. The axiom of comprehension probably seemed obviously innocuous until Bertrand Russell asked whether the set of non-self-membered sets is a member of itself. A more isomorphism-y example: it seems transparently obvious that a set $X$ is the same size as $\{\{x\}\,:\,x\in X\}$, but this fails if you work in NF instead of ZFC. Maybe there's some implementation detail no one ever noticed we were assuming. How can we be sure?

Again, personally I'm very confident that I'd have noticed if some implementation detail were being slipped into anything in "normal" mathematics (or at least, I'm as confident of that as I am that I'd have noticed any other sort of gap in the proofs; I don't think there's anything special here), and very confident that if I missed one, then some of the many, many other mathematicians, some of them much smarter than I am, who have read the same textbooks and been to the same lectures would have noticed. But I think Kevin is asking whether there's some simple principle that makes it obvious without any need either to trust that sort of thing or to check in detail through everything in the textbooks, and I want to be clear that this answer doesn't purport to give one. My feeling is that there couldn't possibly be one, any more than there could be some simple principle that makes it obvious (with the same restrictions) that there are no other logical holes in those same textbooks, and for essentially the same reason.


From a logical viewpoint, this has nothing to do with platonism, ZFC, or the cumulative hierarchy. $ \def\nn{\mathbb{N}} \def\zz{\mathbb{Z}} \def\qq{\mathbb{Q}} \def\rr{\mathbb{R}} \def\cc{\mathbb{C}} $

Almost all reasonable mathematical statements about the reals are actually about any structure that satisfies the axiomatization of the reals. This axiomatization can be expressed in very weak foundational systems, whether or not they are compatible with ZFC. Of course, if you are only familiar with ZFC, then you may have to look at how things go in ZFC (as François G. Dorais has explained). But ZFC is really a red herring here.

The Cauchy-sequence and Dedekind-cut constructions merely serve to prove the existence of a structure satisfying the axiomatization of the reals. From then on, we can literally forget the exact objects in the construction (which is precisely what $∃$-intro does), because we are only interested in theorems concerning the axiomatized properties (the interface) of the reals. Similarly, when you construct the complex numbers as a quadratic extension of $\rr$ by some object $i$ such that $i^2 = -1$ in the field extension $\rr(i)$, it is completely irrelevant which objects are 'used' as elements of the field extension. For instance, you could use linear polynomials in $X$ with addition and multiplication modulo $X^2+1$. All that matters is that you get an algebraically closed field containing an isomorphic copy of the reals. Relatedly, we can assume that $\rr ⊆ \cc$ because we only care about the axiomatized properties of $\rr$, which are preserved under isomorphism. One could manually preserve the original $\rr$ as an actual subset of its quadratic extension, but that is unnecessary for the reason I just stated.
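Here is a hedged Python sketch of the 'linear polynomials modulo $X^2+1$' implementation just mentioned (the class name is mine, and floats stand in for whichever implementation of $\rr$ you prefer): an element $a + bX$ is stored as its pair of coefficients, and multiplication reduces $X^2$ to $-1$.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QuadExt:
    a: float  # constant coefficient
    b: float  # coefficient of X

    def __add__(self, other):
        return QuadExt(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + bX)(c + dX) = ac + (ad + bc)X + bd X^2  ==  (ac - bd) + (ad + bc)X
        return QuadExt(self.a * other.a - self.b * other.b,
                       self.a * other.b + self.b * other.a)


i = QuadExt(0.0, 1.0)
print(i * i)                                   # QuadExt(a=-1.0, b=0.0): X^2 = -1 in the quotient
print(QuadExt(1.0, 2.0) * QuadExt(3.0, 4.0))   # QuadExt(a=-5.0, b=10.0), same as (1+2i)(3+4i)
```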

Long before we reach $\rr$, even to get from $\nn$ to $\zz$ we could encode an integer either as a sign with a magnitude, or as an equivalence class of pairs of naturals. Does it matter? No, because all we care about are certain properties.
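A small Python sketch of these two encodings (all names hypothetical), together with the evident isomorphism between them; a spot check shows that a statement phrased via the interface (here, what $2 + (-5)$ equals) comes out the same either way.

```python
# Encoding 1: an integer is a sign and a magnitude, with zero normalised to ('+', 0).
def sm(sign, mag):
    return ('+', 0) if mag == 0 else (sign, mag)

def sm_add(x, y):
    (sx, mx), (sy, my) = x, y
    if sx == sy:
        return sm(sx, mx + my)
    big, small = (x, y) if mx >= my else (y, x)
    return sm(big[0], big[1] - small[1])

# Encoding 2: an integer is an equivalence class of pairs (a, b) of naturals,
# where (a, b) ~ (c, d) iff a + d = b + c; we store the canonical representative.
def pair(a, b):
    return (a - min(a, b), b - min(a, b))

def pair_add(x, y):
    return pair(x[0] + y[0], x[1] + y[1])

# The isomorphism between the two encodings.
def sm_to_pair(x):
    sign, mag = x
    return pair(mag, 0) if sign == '+' else pair(0, mag)

# Addition commutes with the isomorphism: a sample instance of "same theorems".
x, y = sm('+', 2), sm('-', 5)
assert sm_to_pair(sm_add(x, y)) == pair_add(sm_to_pair(x), sm_to_pair(y))
print(sm_add(x, y), pair_add(sm_to_pair(x), sm_to_pair(y)))  # ('-', 3) (0, 3): both say -3
```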

If someone claims to have proved something about the reals but their proof needs to look at the concrete implementation of the reals, then that someone has simply taken a silly route. It is akin to expressing an algorithm in the SOAP assembly language for the IBM 650 instead of in a high-level language that supports loops and function calls. Good software is always written to separate interface from implementation, and so are good proofs (whether in a formal system or not).
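To push the software analogy, here is a minimal Python sketch: a tiny 'theorem' written purely against the arithmetic-and-ordering interface, which therefore holds verbatim for several different implementations of that interface (the function name is mine).

```python
from fractions import Fraction
from decimal import Decimal

def arithmetic_mean_is_between(a, b):
    """A tiny 'theorem': min(a,b) <= (a+b)/2 <= max(a,b), checked via the interface only."""
    m = (a + b) / 2
    return min(a, b) <= m <= max(a, b)

print(arithmetic_mean_is_between(1.5, 4.25))                        # float implementation
print(arithmetic_mean_is_between(Fraction(3, 2), Fraction(17, 4)))  # exact rationals
print(arithmetic_mean_is_between(Decimal("1.5"), Decimal("4.25")))  # decimal floating point
```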

Consider simple examples. The IVT (intermediate value theorem) concerns continuous functions on a closed bounded interval of the reals. To state it directly, we must be able to quantify over real functions. This only needs 3rd-order arithmetic: a real can be naturally encoded as a function on the naturals, which is a 2nd-order object, so a function from reals to reals is a 3rd-order object. More generally, if you want to talk about objects of specific higher-order types where the 0th-order type is the naturals, then all you need is HOA (higher-order arithmetic). Practically any modern foundational system for mathematics can interpret HOA, meaning that there is a computable translation of proofs from HOA into that system. You can check that Z set theory, for instance, interprets HOA, and if you want some extra interesting sets you might want some form of AC (the axiom of choice).
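A hedged sketch of this type-level bookkeeping (function names are mine, and rationals, themselves codable as naturals, stand in for the official coding): a real is a function from naturals to rationals accurate to within $2^{-n}$, a 2nd-order object, and a real function acts on such codes, a 3rd-order object.

```python
import math
from fractions import Fraction


def sqrt2(n):
    """A code for sqrt(2): the rational floor(2^n * sqrt(2)) / 2^n, within 2^-n."""
    return Fraction(math.isqrt(2 * 4**n), 2**n)


def plus_one(x):
    """The real function r |-> r + 1, acting on codes: (x + 1)(n) = x(n) + 1."""
    return lambda n: x(n) + 1


print(float(sqrt2(20)))            # ≈ 1.414213..., i.e. sqrt(2)
print(float(plus_one(sqrt2)(20)))  # ≈ 2.414213..., i.e. sqrt(2) + 1
```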

Anyway, IVT is provable in HOA using only the axiomatization of the reals, and so are EVT (extreme value theorem), MVT (mean value theorem), Dini's theorem for real functions, and so on. You only need to go beyond HOA if you want to handle arbitrary types, such as general metric spaces, topological spaces and so on. Even then, every mathematical structure of interest is defined via an axiomatization, and any proof based on that axiomatization alone of course carries over to every structure satisfying it.
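For a numerical caricature of why such proofs are implementation-independent, here is the bisection idea behind IVT in Python (a toy sketch; the actual HOA proof produces an exact root via completeness rather than an approximation): nothing in it inspects how the "reals" are implemented, only ordering, arithmetic, and values of $f$.

```python
def ivt_root(f, lo, hi, steps=60):
    """Assuming f is continuous with f(lo) <= 0 <= f(hi), bisect towards a root."""
    assert f(lo) <= 0 <= f(hi)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2


print(ivt_root(lambda x: x**3 - x - 2, 1.0, 2.0))  # ≈ 1.5214, a root of x^3 - x - 2
```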

There is one possible snag: what if the proof was found by a computer rather than a human? Well, if the proof really is just one huge mess, then the easiest solution has been provided by Gareth McCaughan: we can tack on a proof that the desired theorem about the Cauchy reals is equivalent to the same theorem stated for any isomorphic copy of the reals, and hence treat the given computer-generated proof as a black box. More generally, we can write a computer program $P$ such that, given any desired statement $Q$ about a model $M$ of some second-order axiomatization $A$ that only uses $M$ via its interface $A$, $P(Q)$ outputs a proof that $Q(M)$ implies $Q(N)$ for every model $N$ of $A$. Then we do not even have to construct such tack-on proofs by hand, but can simply run that single program $P$ on any theorem the 'wag' throws at us, and not just theorems about the reals. The exact details would depend on the chosen foundational system, but Z set theory certainly suffices.