What are some examples of subtle logical pitfalls?
This isn't necessarily subtle, but...
I've encountered many students who mistakenly conclude that if an implication is true, then the converse must be true.
- That is, mistakenly concluding that if $p \rightarrow q$, then $q \rightarrow p$.
- The same error appears with a chain of one-directional implications, $p \rightarrow q \rightarrow \cdots \rightarrow r$: assuming that such a chain establishes the equivalence of the original claim $p$ and the final conclusion $r$.
- A bit more subtly: I often encounter the erroneous conclusion that if $p\rightarrow q$, then $\lnot p\rightarrow \lnot q$ (the inverse, which is logically equivalent to the converse, not to the original implication).
- Also, when asked to prove a biconditional "if and only if" statement like $p \iff q$, some stop after proving only $p \rightarrow q$, thinking they are done.
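These fallacies can be checked mechanically with a truth table. Here is a minimal Python sketch (the function name `implies` is my own) that exhibits a counterexample to the converse, while confirming that the contrapositive $\lnot q \rightarrow \lnot p$ really is equivalent to $p \rightarrow q$:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b."""
    return (not a) or b

# Enumerate all truth assignments for p and q.
rows = list(product([False, True], repeat=2))

# Converse fallacy: p -> q does not yield q -> p; some row is a counterexample.
converse_fails = any(implies(p, q) and not implies(q, p) for p, q in rows)

# The contrapositive ~q -> ~p, by contrast, agrees with p -> q on every row.
contrapositive_ok = all(implies(p, q) == implies(not q, not p) for p, q in rows)

print(converse_fails, contrapositive_ok)  # True True
```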
And more subtly: when starting with an equation and then operating on both sides (producing a new equation), students often assume that whatever holds for the end result also holds for the original. E.g., given something of the form $$\begin{eqnarray} y &=& f(x)\tag{1} \\ \text{So} \;\;y^2 &=& (f(x))^2\tag{2}\end{eqnarray}$$ they mistakenly conclude that solutions to $(2)$ are solutions to $(1)$, even though squaring is not reversible and $(2)$ may have extraneous solutions. Or, e.g., given $$\begin{eqnarray}y^2 &=& x^2\tag{3} \\ \text{So} \;\;\sqrt{y^2} &=& \sqrt{x^2}\tag{4}\end{eqnarray}$$ they mistakenly simplify $\sqrt{y^2}$ to $y$ and conclude that the only solutions to $(3)$ are those with $y = x$, forgetting that $\sqrt{y^2} = |y|$, so $(3)$ also admits $y = -x$.
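As a quick numeric illustration of the first pitfall, using a made-up equation $\sqrt{x} = x - 2$: squaring both sides yields $x^2 - 5x + 4 = 0$, which produces a candidate root that the original equation rejects.

```python
import math

# Squaring both sides of sqrt(x) = x - 2 gives x**2 - 5*x + 4 = 0,
# whose roots are 1 and 4 -- but squaring is not reversible, so each
# candidate must be checked against the *original* equation.
candidates = [1, 4]
genuine = [x for x in candidates if math.isclose(math.sqrt(x), x - 2)]
print(genuine)  # [4]  (x = 1 is extraneous: sqrt(1) = 1, but 1 - 2 = -1)
```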
Additionally, one common pitfall is misapplying De Morgan's laws:
- As it relates to the distribution of negation over conjunction and the distribution of negation over disjunction: Mistakenly equating $\lnot (p \land q)$ with $ \lnot p \land \lnot q$ or $\lnot (p \lor q)$ with $\lnot p \lor \lnot q$.
- As it relates to containment in the complement of a union of sets and containment in the complement of the intersection of sets: E.g. Making the mistake of equating $\neg(A \cup B)$ with $\neg A \cup \neg B$ (and similarly in the case of the complement of an intersection).
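The set-theoretic version is easy to verify on a small finite universe; a Python sketch (the universe `U` and sets `A`, `B` are arbitrary choices of mine):

```python
U = set(range(10))           # a small finite "universe"
A = {1, 2, 3}
B = {3, 4, 5}

def comp(S):
    """Complement relative to the universe U."""
    return U - S

# De Morgan for sets: complementing a union flips it to an intersection.
assert comp(A | B) == comp(A) & comp(B)

# The naive version, which keeps the union, is wrong here:
assert comp(A | B) != comp(A) | comp(B)

print(sorted(comp(A | B)))  # [0, 6, 7, 8, 9]
```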
Also, the negation of a quantified proposition seems to be problematic for some: making the mistake of equating $\lnot \forall x, P(x)$ with $\forall x, \lnot P(x)$, and similarly, when negating an existentially quantified statement.
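Over a finite domain, the quantifier versions correspond to Python's `all` and `any`, which makes the correct and incorrect negations easy to compare side by side (the domain and predicate below are arbitrary choices of mine):

```python
# Negating a universally quantified statement over a finite domain:
# ~(forall x, P(x)) is equivalent to (exists x, ~P(x)),
# NOT to (forall x, ~P(x)).
domain = range(10)
P = lambda x: x < 7          # holds for 0..6, fails for 7, 8, 9

not_forall = not all(P(x) for x in domain)
exists_not = any(not P(x) for x in domain)
forall_not = all(not P(x) for x in domain)

print(not_forall, exists_not, forall_not)  # True True False
```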
One subtle issue that many people fall prey to is confusing induction on the natural numbers with metainduction on the natural numbers. The difference is that the former is an argument by induction formalized inside a formal "object" theory, such as ZFC, while the latter is an inductive argument carried out in the metatheory that we use to study the object theory.
For example, consider the claim "for every $n \in \mathbb{N}$, the number $n!$ is defined". Here $n!$ is defined to be the product $\prod_{i=1}^n i$; the problem is to show that this product actually has a value for each $n$.
It is tempting to try to prove that by saying "Given $n$, we can write down $n! = n\cdot (n-1) \cdot \cdots \cdot 1$, and thus $n!$ must be defined". But that method does not actually work to prove the statement "for all $n \in \mathbb{N}$, $n!$ is defined" in a theory such as ZFC. There are two reasons:
1. The proof above, when written out explicitly without an ellipsis, gets longer and longer as $n$ gets bigger, because it takes more and more space to write out the numbers conveniently omitted by the ellipsis. So the argument really provides a sequence of formal proofs, one for each $n$ we can write down, that $n!$ is defined.

2. If a statement is provable in ZFC, it is true in all models of ZFC, and there are models of ZFC that contain nonstandard natural numbers. For a nonstandard $n$ in such a model, we cannot write down the "finite" product $n!$, because from the perspective of the model there are actually infinitely many numbers less than $n$. (For such numbers, even though $n!$ is defined, it is not given by any term in the language of arithmetic.)
The way we actually prove "$n!$ is defined" in ZFC is by induction within ZFC, which is able to formalize the following argument. First, $0! = 1$ is defined. Now assume $k!$ is defined; then $(k+1)! = (k+1)\cdot k!$ is defined as well. Hence, by induction, $n!$ is defined for all $n$. This proof does not rely on our ability to "write" $n$; it simply uses the principle of induction and the fact, provable in ZFC based on the definition I gave, that $(k+1)! = (k+1)\cdot k!$.
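The shape of that inductive proof is exactly the shape of a recursive definition. A small Python sketch, purely illustrative (the point in the text is about formal provability, not computation):

```python
def factorial(n: int) -> int:
    """Mirror the inductive proof: a base case and an inductive step."""
    if n == 0:
        return 1                      # base case: 0! = 1
    return n * factorial(n - 1)       # inductive step: (k+1)! = (k+1) * k!

print(factorial(5))  # 120
```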
Because most mathematics is carried out in natural language, where the distinction between object theory and metatheory is not clear, the distinction above can be easy to overlook. But when we start actually formalizing things in formal theories, the distinction becomes crucial.
There are many examples of formal theories $T$ and statements $\phi(n)$, taking one natural number as argument, such that for every $n$, $T$ proves $\phi(n)$, but at the same time $T$ does not prove $(\forall n)\phi(n)$. The way such examples are usually obtained is by showing "for each $n$, $T$ proves $\phi(n)$" by metainduction (induction in the metatheory), and then using some other technique to show "$T$ does not prove $(\forall n)\phi(n)$."
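A standard example of this phenomenon, assuming $T$ is a consistent, recursively axiomatized theory extending elementary arithmetic, takes
$$\phi(n) :\equiv \text{"$n$ is not the Gödel number of a $T$-proof of $0=1$"}.$$
For each particular $n$, checking $\phi(n)$ is a finite computation that $T$ can carry out, so $T$ proves each instance. But $(\forall n)\phi(n)$ is precisely $\operatorname{Con}(T)$, which $T$ cannot prove, by Gödel's second incompleteness theorem.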
I think this is probably the subtlest point I have yet encountered in set theory.
Let's assume the universe of sets satisfies $\sf ZFC$.
Fact I: For every finite subset $T_0$ of $\sf ZFC$, we can prove in $\sf ZFC$ that $T_0$ has a model.
Fact II: The compactness theorem is provable in $\sf ZFC$, so if every finite fragment of a theory $T$ has a model, then $T$ has a model.
Fact III: If $\sf ZFC$ is consistent, then $\sf ZFC$ cannot prove its own consistency. In particular, if we assume that the universe of sets satisfies $\sf ZFC$, then it is impossible to prove that there is a set which is a model of $\sf ZFC$.
At first sight the first two facts seem to contradict the third! But the first fact is proved in the meta-theory: it is a schema of separate theorems, one for each finite $T_0$, not a single statement of $\sf ZFC$. So we cannot run the compactness argument inside $\sf ZFC$ itself and conclude, within the theory, that $\sf ZFC$ has a model.
This pitfall is $\large\sf\text{confusing the meta-theory with the theory}$. It is very easy to fall into when taking your first steps in set theory and consistency results.