$\epsilon, \delta$...So what?

I believe that the pushback against $\epsilon,\delta$ definitions (which unfortunately spills over to pushback against $\epsilon,\delta$ techniques) is entirely justified because $\epsilon,\delta$ definitions arise from the (unfortunately widespread) confusion between a statement being formal and a statement being rigorous.

Consider the formal "definition" of continuity of a function $f$ at a point $a$: $$\forall\epsilon>0\,\exists\delta>0\,\forall x(|x-a|<\delta\rightarrow |f(x)-f(a)|<\epsilon)$$ This is just an obfuscated way of stating the informal, but rigorous:

For every ball $B_{f(a)}$ centered at $f(a)$, there is a ball $B_a$ centered at $a$ so that $f$ sends every point of $B_a$ into $B_{f(a)}$.

which is logically equivalent to the conceptually clearer statement, still informal yet still rigorous:

Whenever the image $f(S)$ of a set $S$ is separated from the image $f(a)$ of a point $a$, the set $S$ was already separated from the point $a$.

which is the contrapositive of the informal but rigorous, intuitive definition of continuity of $f$ at a point $a$:

Whenever a set $S$ of points is close to a point $a$, the set $f(S)$ of images of those points is close to the image point $f(a)$.
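To spell out the link between the first and last of these statements (my own gloss, using notation not in the original answer): writing $B_r(p)$ for the open ball of radius $r$ about $p$, the ball statement is just $$\forall\epsilon>0\ \exists\delta>0:\quad f\bigl(B_\delta(a)\bigr)\subseteq B_\epsilon\bigl(f(a)\bigr),$$ and one standard way to make "close" precise is via closures: with $\overline{S}$ the closure of $S$, the intuitive definition reads $$a\in\overline{S}\ \Longrightarrow\ f(a)\in\overline{f(S)}\qquad\text{for every set }S,$$ whose contrapositive, $f(a)\notin\overline{f(S)}\Rightarrow a\notin\overline{S}$, is exactly the "separated" statement above.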

I strongly believe that the equivalence of the blocked statements, and the IDEA that this equivalence expresses, namely that we CAN distill an intuitive notion into a rigorous definition, are much more interesting, important, and memorable than the formal $\epsilon,\delta$ "definition". Furthermore, I can't even bring myself to call the formal "definition" a definition, since what it expresses is not a description of what it means for a function to be continuous, but a technique (that of $\epsilon,\delta$ proofs) for checking that a function is continuous.

This, in my opinion, is the reason for the pushback against $\epsilon,\delta$ "definition" and arguments: instead of expressing the rigorous idea or concept of continuity, the $\epsilon,\delta$ "definition" only gives a technique for working with continuity, and, when presented as a definition, only obfuscates the meaning of the concept (in a very efficient way, I might add, since the path from the intuitive and meaningful definition to the $\epsilon,\delta$ definition involves taking a contrapositive...).

Finally, I do think that being aware of how to rigorously translate (as above) from the intuitive definition of continuity to the statement of the $\epsilon,\delta$ technique will certainly not hurt, and I suspect it could actually help students in using the $\epsilon,\delta$ technique, especially with the simple functions that arise in Calculus and basic analysis.
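For concreteness (my own illustrative example, not part of the original answer), here is how that translation plays out for a simple function: to check that $f(x)=3x$ is continuous at a point $a$, given $\epsilon>0$ take $\delta=\epsilon/3$; then $$|x-a|<\delta\ \Longrightarrow\ |f(x)-f(a)|=3|x-a|<3\delta=\epsilon,$$ that is, the ball of radius $\delta$ around $a$ is sent into the ball of radius $\epsilon$ around $f(a)$, exactly as the ball formulation demands.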

(Someone might criticize the above saying that the notion of a ball is confusing in single-variable Calculus. My perhaps controversial response is that there really isn't any good reason not to teach Calculus using $2$ or $3$ variables from day $1$ and that the narrow viewpoint offered by single-variable Calculus obscures more than it simplifies).


It turns out that engineers, scientists, and financial folks need to use calculus, but they don't need to understand calculus.

The structure of the typical university education feeds all of those students, plus math students, through the same introductory calculus courses. This is done for cost efficiency, and also because of a potentially misplaced ideal that career mathematicians should teach mathematics to people for whom mathematics is ultimately just an annoying means to an end.

So eliding $\epsilon - \delta$ arguments streamlines this process, saving trouble for the students and the instructors, at the expense of the math students. But those math students will encounter them later anyway.

I'm not saying it's the best approach, but it is perhaps a bit more efficient. Mechanical engineers don't want to learn $\epsilon - \delta$, and math professors don't want to teach $\epsilon - \delta$ to students who will never truncate a Taylor series beyond the linear term.


I think this is a complex issue; we have both pedagogical aspects and "foundational" ones.

First, from my point of view (granting that I'm not prepared to discuss the pedagogical side in depth), I think that we cannot avoid some amount of "dogmatism" in teaching mathematics (and not only mathematics). The past failure of the efforts to introduce naive set language ahead of elementary arithmetic was significant.

Try this "conceptual experiment" for a moment: teaching algebra and calculus in secondary school starting from axiomatized $ZF$ and building all the mathematical machinery "from scratch" (the empty set). Do we really think it feasible?

A recent book by John Stillwell, The Real Numbers: An Introduction to Set Theory and Analysis (2013), starts with the following consideration:

any book that revisits the foundations of analysis has to reckon with the formidable precedent of Edmund Landau’s Grundlagen der Analysis (Foundations of Analysis) of 1930. [...] so few books since 1930 have even attempted to include the construction of the real numbers in an introduction to analysis. On the one hand, Landau’s account is virtually the last word in rigor. [...] On the other hand, Landau’s book is almost pathologically reader-unfriendly.

I've tried re-reading Landau: it is very "unfriendly"!

Second: please don't forget the enormous amount of effort it took, from Newton and Leibniz until (at least) Cauchy (see the wonderful book by Judith Grabiner, The Origins of Cauchy's Rigorous Calculus, 1981), to "distill" the rigorous $\epsilon-\delta$ definition! Mathematical standards of "rigor" are also evolving.

I spoke above about "dogmatism" (suggestion: think about how to apply Thomas Kuhn's considerations in The Structure of Scientific Revolutions on the "positive" role of dogmatism in "normal science" to mathematics).

My personal feeling is that the best antidote to the (unavoidable) use of dogmatism in teaching is the historical perspective: learning how we arrived at our current ideas (including our current standards of rigor and our current ideas about "foundations") can be very useful.