Sets in mathematical logic

I have been asked this question several times in my logic or set theory classes. The conclusion I have arrived at is that you need to assume that we know how to deal with finite strings over a finite alphabet. This is enough to code the countably many variables we usually use in first-order logic (and the finitely or countably many constant, relation, and function symbols).
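
To make this concrete, here is a minimal sketch in Python (the encoding convention is invented here purely for illustration) of how infinitely many variables can be coded as finite strings over a two-symbol alphabet:

```python
# A minimal sketch: coding the countably many variables v_0, v_1, v_2, ...
# as finite strings over the two-symbol alphabet {"v", "|"}.
# The convention (a 'v' followed by n tally marks) is chosen only for
# illustration; any injective coding would do.

def encode_variable(n: int) -> str:
    """Represent the n-th variable as 'v' followed by n tally marks."""
    return "v" + "|" * n

def decode_variable(s: str) -> int:
    """Recover the index from an encoded variable; reject malformed strings."""
    if not s.startswith("v") or set(s[1:]) - {"|"}:
        raise ValueError(f"not a variable string: {s!r}")
    return len(s) - 1

# The coding is injective, so a finite alphabet suffices for
# infinitely many variables:
assert decode_variable(encode_variable(7)) == 7
assert encode_variable(0) != encode_variable(1)
```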

So basically you have to assume that you can write things down. You have to start somewhere, and this is, I guess, a starting point that most mathematicians would be happy with. Do you fear any contradictions showing up when manipulating finite strings over a finite alphabet?

What mathematical logic does is analyze the concept of proof using mathematical methods. So we have some intuitive understanding of how to do mathematics; then we develop mathematical logic, and return to consider what we are actually doing when we do mathematics. This is the hermeneutic circle that we have to go through, since we cannot build something from nothing.

We strongly believe that if there were any serious problems with the foundations of mathematics (more substantial than merely having assumed too strong a collection of axioms), those problems would show up in the logical analysis of mathematics described above.


The short answer is that there is no way to be absolutely certain that mathematics is free from contradiction.

To start with an extreme case, we all take for granted a certain amount of stability in our conscious experience. Take the equation $7\times 8 = 56$. I believe that I know what this means, and that if I choose to ponder it for a while, my mind will not somehow find a way to conclude firmly that $7\times 8 \ne 56$. This may sound silly, but it is not a totally trivial assumption, because I've had dreams in which I have found myself unable to count a small number of objects and come up with a consistent answer. Is there any way to rule out definitively the possibility that the world will somehow reach a consensus that $7\times 8 = 56$ and $7\times 8 \ne 56$ simultaneously? I would say no. We take some things for granted and there's no way to rule out the possibility that those assumptions are fundamentally flawed.

Suppose we grant that, and back off to a slightly less extreme case. Say we accept finitary mathematical reasoning without question. People might disagree about the precise definition of "finitary," but a commonly accepted standard is primitive recursive arithmetic (PRA). In PRA, we accept certain kinds of elementary reasoning about integers. (If you're suspicious about integers, then you can replace PRA with some kind of system for reasoning about symbols and strings, e.g., Quine's system of "protosyntax"; it comes to more or less the same thing.) Now we can rephrase your question as follows: can we prove, on the basis of PRA, that ZFC is consistent?
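
To give a taste of what finitary, PRA-style reasoning looks like, here is a small Python sketch (my own illustration, not a formal PRA derivation) in which addition and multiplication are built up from the successor function by primitive recursion, and the equation $7\times 8 = 56$ from earlier is verified by a finite computation:

```python
# A sketch of PRA-style definitions: everything is built from zero and
# the successor function by primitive recursion. Illustration only; a
# real PRA derivation would be a formal object, not a Python program.

def succ(n: int) -> int:
    return n + 1

def add(m: int, n: int) -> int:
    """Defining equations: add(m, 0) = m; add(m, succ(n)) = succ(add(m, n))."""
    result = m
    for _ in range(n):   # unfold the recursion on n
        result = succ(result)
    return result

def mul(m: int, n: int) -> int:
    """Defining equations: mul(m, 0) = 0; mul(m, succ(n)) = add(mul(m, n), m)."""
    result = 0
    for _ in range(n):
        result = add(result, m)
    return result

# Each numerical instance is checked by a finite computation:
assert mul(7, 8) == 56
```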

This, in essence, was Hilbert's program. If we could prove by finitary means that all that complicated reasoning about infinite sets would never lead to a contradiction, then we could use such infinitary reasoning "safely." Sounds like what you're asking for, doesn't it?

Unfortunately, Gödel's theorems showed that Hilbert's program cannot be carried out in its envisaged form. Even if we allow not just PRA but all of ZFC, we still cannot prove that ZFC is consistent (assuming, of course, that ZFC really is consistent; an inconsistent theory proves everything, including its own consistency). Thus it is not just that we have all been too stupid so far to figure out how to show that ZFC doesn't lead to contradictions. There is an intrinsic obstacle here that is insurmountable.
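
In symbols, Gödel's second incompleteness theorem says that for any consistent, recursively axiomatizable theory $T$ interpreting enough arithmetic (PRA suffices),
$$T \nvdash \mathrm{Con}(T),$$
where $\mathrm{Con}(T)$ is the arithmetized assertion that no $T$-proof ends in $0 = 1$. Taking $T = \mathrm{ZFC}$ gives the claim above, and the same applies to any consistent theory extending ZFC.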

So your scenario that someone may one day find a contradiction in ZFC cannot be ruled out, even if we take "ordinary mathematical reasoning" for granted. This is not as bad as it might seem, however. ZFC is not the only possible system on which mathematics can be based. There are many other systems of weaker logical strength. If a contradiction were found in ZFC, we would just scale back to some weaker system. For more discussion of this point, see this MO question.


That is to say, mathematical logic uses intuitive set theory. So, is there any paradox in mathematical logic?

Yes: in a set theory whose underlying logic is based on naive set theory, there is Berry's paradox.

Consider the expression:

"The smallest positive integer not definable in under eleven words".

Suppose it defines a positive integer $n$. Then $n$ has been defined by the ten words between the quotation marks. But by its definition, $n$ is not definable in under eleven words. This is a contradiction.
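
The paradox has a precise computational cousin, in the style of Kolmogorov complexity: replace "definable in under eleven words" by "printed by a program of fewer than $k$ characters". Here is a hedged Python sketch of the resulting argument; the oracle is_definable is hypothetical, and the point of the argument is precisely that it cannot exist:

```python
# A computational cousin of Berry's paradox (Kolmogorov-complexity style).
# Suppose, for contradiction, we had a total, correct decision procedure
# telling us whether some program shorter than k characters prints n.

def is_definable(n: int, k: int) -> bool:
    """HYPOTHETICAL oracle: True iff some program of fewer than k
    characters outputs n. Berry's argument shows no such procedure
    can exist, which is why this stub only raises."""
    raise NotImplementedError("this is exactly what cannot exist")

def berry(k: int) -> int:
    """The smallest positive integer not 'definable' in under k characters."""
    n = 1
    while is_definable(n, k):
        n += 1
    return n

# If is_definable existed, then for a sufficiently large constant K the
# value berry(K) would be output by this short, fixed program -- that is,
# berry(K) would be definable in under K characters, contradicting its
# own definition.
```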

A version of Berry's paradox is called Richard's paradox. I don't see an essential difference between the two paradoxes (except that Richard's paradox, due to Jules Richard, was taken up by Poincaré, who was concerned with impredicative definitions, and Berry's paradox by Russell, who was concerned with types). But the two Wikipedia articles offer very different explanations.

The explanation of Berry's paradox there is essentially by type theory (even though type theory is not named as such); and the more specific explanation given via recursion theory also seems to fit the framework of constructive type theory.

Richard's paradox is resolved in the framework of ZFC (and, more importantly, of first-order logic). Roughly, the resolution is that not everything that makes sense in logic based on naive set theory still makes sense in ZFC; hence in some sense the paradox is only truly resolved in "the metatheory used to formalize ZFC". I don't know of any textbook on such a meta-ZFC, which makes me uncomfortable about this resolution; but so, as I understand from their writings, were Poincaré and Russell.

For this particular purpose I think second-order ZFC would work just as well as a "meta-ZFC". But then a higher version of the same paradox would only be truly explained by third-order ZFC, and so on; and again, I don't know of any textbook on third-order ZFC. Luckily, there are quite a few textbooks on higher-order logic/type theory, and as I understand it there are no problems of this kind (Richard/Berry-type paradoxes) in a modern type theory, because in essence it serves as its own logic.

On the other hand, a type theory is said to still require a metatheory. Does that mean there must be deeper paradoxes that type theory would not resolve?

Added later. I'm a bit amused by this thread, which keeps growing with "orthodox" answers and their lively discussion in the comments, whereas the "heretical" answer you're reading now has not yet been challenged (or even downvoted).

Perhaps I should summarize my points so that it's easier to object if anybody cares.

1) The OP's question might be a bit informal or vague, but it does make sense, as the Berry and Richard paradoxes unambiguously demonstrate. One may hold different views of these paradoxes and their resolution, but one possible view (which I believe I picked up from Poincaré's writings) is that they are indeed caused by the factual mutual dependence between set theory and logic referred to in the OP. ("Set theory" and "logic" are meant in the colloquial sense here, not referring to a specific formalization.) You cannot just ignore these paradoxes, can you?

2) Type theories may have a lot of disadvantages compared to ZFC with first-order logic (including the lack of a canonical formulation and of an equivalent of Bourbaki), but they do break the circularity between set theory and logic and thus do adequately resolve the paradoxes. ZFC is perfectly able to defend itself against these paradoxes, but it does not attempt to explain them; for that it sends you to the meta-level, which then ends up with informal speculations about computers and one's experience with finite strings of symbols. (There are of course issues with such speculations, including: exactly which strings count as finite; halting problems for idealized computers and the finite memory of physical computers; not to mention the potential finiteness of information in the physical universe, and the fact that one's proof-checking software may already involve higher-order logic.) So in this case I see type theory as providing a mathematical solution, and ZFC, at best, a metaphysical one.

3) What I don't understand is whether there might exist a kind of type theory that would not need any metatheory (i.e., would serve as its own metatheory). Perhaps someone who says "You cannot get anything out of nothing", "You cannot have any theory without a metatheory", or "You have to start somewhere" could also explain why this is impossible? And if it is indeed impossible, can this impossibility be witnessed by a specific paradox, so as to completely clarify matters?

Tags:

Lo.Logic