Set theories without "junk" theorems?
I apologize for posting as an answer what should really be a comment, connected to one of Jacques Carette's comments on my earlier answer; unfortunately, it is far too long for a comment. Jacques asked why we would bother with set-theoretic foundations at all. It happens that I wrote down my opinion on that question about 15 years ago (in a private e-mail) and repeated some of it on the fom (= foundations of mathematics) e-mail list. Here's a slightly edited version of that:
Mathematicians generally reason in a theory T which (up to possible minor variations between individual mathematicians) can be described as follows. It is a many-sorted first-order theory. The sorts include numbers (natural, real, complex), sets, ordered pairs and other tuples, functions, manifolds, projective spaces, Hilbert spaces, and whatnot. There are axioms asserting the basic properties of these and the relations between them. For example, there are axioms saying that the real numbers form a complete ordered field, that any formula determines the set of those reals that satisfy it (and similarly with other sorts in place of the reals), that two tuples are equal iff they have the same length and equal components in all positions, etc.
There are no axioms that attempt to reduce one sort to another. In particular, nothing says, for example, that natural numbers or real numbers are sets of any kind. (Different mathematicians may disagree as to whether, say, the real numbers are a subset of the complex ones or whether they are a separate sort with a canonical embedding into the complex numbers. Such issues will not affect the general idea that I'm trying to explain.) So mathematicians usually do not say that the reals are Dedekind cuts (or any other kind of sets), unless they're teaching a course in foundations and therefore feel compelled (by outside forces?) to say such things.
This theory T, large and unwieldy though it is, can be interpreted in far simpler-looking theories. ZFC, with its single sort and single primitive predicate, is the main example of such a simpler theory. (I've left large categories out of T in order to make this literally true, but Feferman has shown how to interpret most of category theory, including large categories, in a conservative extension of ZFC.)
The simplicity and efficiency of ZFC and the fact that T can be interpreted in it (i.e., that all the concepts of T have set-theoretic definitions which make all the axioms of T set-theoretically provable) have, as far as I can see, two main uses. One is philosophical: one doesn't need to understand the nature of all these different abstract entities; if one understands sets (philosophically) then one can explain all the rest. The other is in proofs of consistency and independence. To show that some problem, say in topology, can't be decided in current mathematics means to show it's independent of T. So you'd want to construct lots of models of T to get lots of independence results. But models of T are terribly complicated objects. So instead we construct models of ZFC, which are not so bad, and we rely on the interpretation to convert them into models of T. And usually we don't mention T at all and just identify ZFC with "current mathematics" via the interpretation.
What you are describing is the idea of "breaking" an abstraction. That there is an abstraction to be broken is pretty much intrinsic to the very notion of "model theory", where we interpret the concepts in one theory in terms of objects and operations in another one (typically set theory).
It may help to see a programming analogy of what you're doing:
uint32_t x = 0x12345678;
unsigned char *ptr = (unsigned char *) &x;
assert( ptr[0] == 0x12 || ptr[0] == 0x78 ); // Junk! (which one depends on endianness)
const char text[] = "This is a string of text.";
assert( text[0] == 84 ); // Junk!
// Using the GMP library.
mpz_t big_number;
mpz_init_set_ui(big_number, 1234);
assert(big_number[0]._mp_d[0] == 1234); // Junk!
All of these are examples of the very same thing you are complaining about in the mathematical setting: you are presented with some sort of 'type', and operations for working on that type, but it is actually implemented in terms of some other underlying notions. In the above:
- I've broken the abstraction of a uint32_t representing a number modulo $2^{32}$, by peeking into its byte representation and extracting a byte.
- I've broken the abstraction of a string being made out of characters, by using the knowledge that the character 'T' and the ASCII value 84 are the same thing.
- In the third, I've broken the abstraction that big_number is an object of type integer, and peeked into the internals of how the GMP library stores such things.
In order to avoid "junk", I think you are going to have to do one of two things:
- Abandon the notion of model entirely
- Realize that you were actually lying in your theorems: it's not that $2 \in 3$ for natural numbers $2$ and $3$, but $i(2) \in i(3)$ for a particular interpretation $i$ of Peano arithmetic. Maybe making the interpretation explicit would let you be more comfortable?
(Or, depending on exactly what you mean by the notation, the symbols $2$ and $3$ aren't expressing constants in the theory of natural numbers, but are instead expressing constants in set theory.)
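To spell the second option out: under the standard von Neumann interpretation $i$ of arithmetic into set theory one has
$$i(0) = \emptyset, \qquad i(n+1) = i(n) \cup \{i(n)\},$$
so that
$$i(2) = \{\emptyset, \{\emptyset\}\} \quad\text{and}\quad i(3) = \{\emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\}\}.$$
Then $i(2) \in i(3)$ is a genuine theorem of set theory about the interpreted objects, whereas "$2 \in 3$" is not even a well-formed statement of Peano arithmetic, whose language has no $\in$.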
Structural set theory, as described on the nlab page you linked to, is probably the best answer to your question. To avoid junk theorems, one must deviate somewhat from ordinary ZF-style set theory where everything is a set. That's because, once you decide, in the context of such a "material" set theory, that 5 and 17 are to be sets (because there's nothing else for them to be), they have to have a union, and there's no intuitively reasonable choice for that. (I said "union" rather than "intersection" because one might consider the empty set a reasonable intersection; but the union can't be empty unless both sets are.) A very elementary (undergraduate) presentation of some mathematics from this viewpoint is in the book "Sets for Mathematics" by Lawvere and Rosebrugh; a more advanced presentation is (if I remember correctly) Paul Taylor's "Practical Foundations of Mathematics".