A very tricky pseudo-proof of $0=-1$ through series and integrals

The error is at the very beginning, in the interchange of integral and summation. I’m too rusty (and too lazy!) to go beyond checking that the sequence of functions fails to satisfy a standard sufficient condition for the interchange, but the other steps are legitimate, so that must be the sticking point.
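For readers who don't have the question in front of them: if the pseudo-proof is the usual one (I'm reconstructing it from the series that appears in the bound below), it runs as follows. Since $x^n\bigl(1+(n+1)\log x\bigr)=\frac{d}{dx}\bigl(x^{n+1}\log x\bigr)$, each term integrates to zero:
$$ \int_0^1 x^n\bigl(1+(n+1)\log x\bigr)\,dx = \Bigl[x^{n+1}\log x\Bigr]_0^1 = 0, $$
while summing first (the series $\sum_{n\ge0} x^{n+1}\log x = \frac{x\log x}{1-x}$ may be differentiated term by term for $0<x<1$) gives
$$ \int_0^1 \sum_{n=0}^\infty x^n\bigl(1+(n+1)\log x\bigr)\,dx = \left[\frac{x\log x}{1-x}\right]_0^1 = \lim_{x\to1^-}\frac{x\log x}{1-x} = -1, $$
using $\log x\sim-(1-x)$ as $x\to1^-$. Interchanging $\sum$ and $\int$ would therefore give $0=-1$.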

In the U.S. calculus courses that I've taught or observed, this material would come in Calc 2, to the extent that it appeared at all, and the interchange of integral and summation wouldn't appear at all; that makes it an essentially impossible exercise. Moreover, most students in typical first-year calculus courses still have the notion that mathematics is algorithmic calculation. Getting them to pay enough attention to details to understand why the non-existence of a zero of $\frac1x$ doesn't contradict the intermediate value theorem, or even to remember that the sign of $x$ matters when multiplying an inequality $f(x)\le g(x)$ by $x$, is a non-trivial challenge, so much so that the former often goes by the board.

This might be appropriate for a very good old-fashioned advanced calculus course; the undergraduate real analysis courses that I taught had a different emphasis and didn’t cover the necessary material.


For the record, the standard sufficient condition alluded to above does fail:

$$ \int_0^1 \sup_N \left| \sum_{n=0}^N x^n (1+(n+1)\log x) \right| \, dx = +\infty $$
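A quick numerical experiment is consistent with this divergence (a sketch, not a proof; the helper name and the asymptotic constant are my own back-of-the-envelope work): tracking the running maximum of the partial sums suggests $\sup_N |f_N(x)| \approx \frac{1}{e(1-x)}$ as $x\to1^-$, attained near $N\approx\frac1{1-x}$, and $\int_0^1 \frac{dx}{1-x}$ diverges.

```python
import math

# Sanity check that sup_N |f_N(x)| grows like a constant times 1/(1-x)
# as x -> 1-, where f_N(x) = sum_{n=0}^N x^n * (1 + (n+1)*log(x)).
# A rough calculation suggests the constant is 1/e; we just track the
# running maximum of the partial sums.

def sup_partial_sum(x, n_max):
    """Max over N <= n_max of |f_N(x)|, built up term by term."""
    log_x = math.log(x)
    power = 1.0          # x**n
    total = 0.0          # current partial sum f_N(x)
    best = 0.0
    for n in range(n_max + 1):
        total += power * (1.0 + (n + 1) * log_x)
        power *= x
        best = max(best, abs(total))
    return best

for x in (0.9, 0.99, 0.999):
    h = 1.0 - x
    best = sup_partial_sum(x, int(20 / h))
    # the products below hover near 1/e ≈ 0.368 as x -> 1-
    print(f"x = {x}: (1-x) * sup_N |f_N(x)| ≈ {h * best:.4f}")
```

If $(1-x)\sup_N|f_N(x)|$ stays bounded away from $0$, the sup is bounded below by $c/(1-x)$ near $1$ and cannot be integrable, which is exactly the claim above.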

The dominated convergence theorem says $$ \lim_N \int f_N = \int \lim_N f_N \quad\text{if } \int \sup_N |f_N| < \infty, $$ since $\sup_N|f_N|$ is the smallest candidate for a dominating function. With $f_N(x) = \sum_{n=0}^N x^n\bigl(1+(n+1)\log x\bigr)$, that's too big an "if" in this case.
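And the conclusion of the theorem really does fail here. The following small script (my own illustration, using a crude midpoint rule to stay away from the endpoints) shows the mismatch numerically: each term integrates to essentially $0$, so $\lim_N \int f_N = 0$, while the pointwise limit $\frac1{1-x}+\frac{\log x}{(1-x)^2}$ integrates to about $-1$.

```python
import math

# Numerical illustration that lim_N ∫ f_N ≠ ∫ lim_N f_N for
# f_N(x) = sum_{n=0}^N x^n * (1 + (n+1)*log x) on (0, 1).

def midpoint_integral(g, m=100_000):
    """Composite midpoint rule on (0, 1); crude, but avoids the endpoints."""
    h = 1.0 / m
    return h * sum(g((k + 0.5) * h) for k in range(m))

def term(n):
    """The n-th summand x^n * (1 + (n+1)*log x)."""
    return lambda x: x**n * (1.0 + (n + 1) * math.log(x))

def limit_function(x):
    """Pointwise sum of the full series for 0 < x < 1."""
    return 1.0 / (1.0 - x) + math.log(x) / (1.0 - x) ** 2

# each term integral is close to 0, so every partial-sum integral is too
print([round(midpoint_integral(term(n)), 6) for n in range(5)])
# the integral of the pointwise limit is close to -1
print(round(midpoint_integral(limit_function), 6))
```

(Each term integral is in fact exactly $0$, being $\bigl[x^{n+1}\log x\bigr]_0^1$; the midpoint rule just confirms it without trusting my algebra.)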