Why is continuous differentiability important?
For real functions, the difference between $C^n$ and $n$-times differentiable is quite thin: see, for instance, the Lebesgue differentiation theorem or Darboux's theorem. We may prove Taylor's formula in both contexts, but the strategies differ: one uses L'Hôpital's rule for $n$-times differentiable functions, or repeated integration by parts for $C^n$ functions (which is easier).
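To illustrate the $C^n$ strategy just mentioned (a routine sketch, not part of the original argument): if $f$ is $C^n$ on an interval containing $a$ and $x$, start from $f(x) = f(a) + \int_a^x f'(t)\,dt$ and integrate by parts $n-1$ times to obtain Taylor's formula with integral remainder,
$$ f(x) = \sum_{k=0}^{n-1} \frac{f^{(k)}(a)}{k!}(x-a)^{k} + \frac{1}{(n-1)!}\int_{a}^{x}(x-t)^{n-1} f^{(n)}(t)\,dt, $$
where the continuity of $f^{(n)}$ is exactly what keeps the remainder integral legitimate.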
The continuity of a derivative plays a major role in the implicit function theorem, but if the functions we are manipulating are ultimately destined to be put under an integral sign, we may essentially neglect this subtlety: if $f$ is a differentiable function and $f'$ is also a weak derivative of $f$, then
$$ f(0)+\int_{0}^{x}f'\,d\mu $$
is a regularized version of $f(x)$ that equals $f(x)$ almost everywhere.
One reason $C^1$ is important is its practicality. Namely, there is a theorem that if $f$ is $C^1$ on an open set $U$, then $f$ is differentiable at every point of $U$. It's usually pretty easy to check that a map is $C^1$: often you simply look at the formulas for the coordinate functions of $f$ and observe, from your knowledge of elementary calculus, that they are differentiable and their derivatives are continuous. And once you've completed that check, voilà, you conclude that $f$ is differentiable.
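As a concrete instance (an illustrative example, not from the original): the map $f:\mathbb{R}^2\to\mathbb{R}^2$ given by $f(x,y) = (e^{xy},\, x^{2}+\sin y)$ has partial derivatives
$$ \frac{\partial f_1}{\partial x}=y e^{xy},\qquad \frac{\partial f_1}{\partial y}=x e^{xy},\qquad \frac{\partial f_2}{\partial x}=2x,\qquad \frac{\partial f_2}{\partial y}=\cos y, $$
all visibly continuous on $\mathbb{R}^2$, so $f$ is $C^1$ and hence, by the theorem, differentiable everywhere.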
$C^2$ is also practical, namely via the theorem on equality of mixed 2nd partials (and, similarly, $C^n$ implies equality of mixed $n$th partials).
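To spell that out (a standard fact, included here only for illustration): if $f$ is $C^2$ near a point, then $\partial_x\partial_y f = \partial_y\partial_x f$ there (Clairaut/Schwarz). Without continuity of the second partials this can fail; the classical example is
$$ g(x,y) = \begin{cases} \dfrac{xy(x^{2}-y^{2})}{x^{2}+y^{2}} & (x,y)\neq(0,0), \\ 0 & (x,y)=(0,0), \end{cases} $$
for which both mixed second partials at the origin exist but differ (one is $1$, the other $-1$).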
For subsequent use, define the function $$ \phi(x) = \begin{cases} x^{2} \sin(1/x^{2}) & x \neq 0, \\ 0 & x = 0, \end{cases} $$ which is differentiable on the entire line, but $\phi'$ is unbounded in every neighborhood of $0$ (and, in particular, is discontinuous at $0$).
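For concreteness (a routine computation, not part of the original): for $x \neq 0$,
$$ \phi'(x) = 2x\sin(1/x^{2}) - \frac{2}{x}\cos(1/x^{2}), $$
while at the origin the difference quotient $\phi(h)/h = h\sin(1/h^{2}) \to 0$, so $\phi'(0) = 0$. Along the points $x_{k} = 1/\sqrt{2\pi k}$ we have $\sin(1/x_{k}^{2}) = 0$ and $\cos(1/x_{k}^{2}) = 1$, hence $\phi'(x_{k}) = -2\sqrt{2\pi k} \to -\infty$, which is why $\phi'$ is unbounded near $0$.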
The $x$-axis is tangent to the graph $y = \phi(x)$ at the origin. Arbitrarily close to the origin, however, there exist lines that are tangent to the same graph, but arbitrarily close to vertical (i.e., with arbitrarily large positive or negative slope). That's not exactly a violation of a theorem, but does violate naive expectations. (By contrast, if $f$ is of class $C^{1}$, then "nearby points have nearby tangent lines".)
Let $f$ be a differentiable real-valued function on some open interval containing $[a, b]$. Working with the Riemann integral, the conclusion of the second fundamental theorem $$ \int_{a}^{b} f'(x)\, dx = f(b) - f(a) \tag{1} $$ is not automatic, because $f'$ may fail to be Riemann integrable. (For example, take $f = \phi$ and $a < 0 < b$: then $f' = \phi'$ is unbounded on $[a, b]$, and unbounded functions are never Riemann integrable.) Equation (1) does hold, however, if $f$ is of class $C^{1}$.
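One way to see the $C^{1}$ case (a standard argument, sketched here for completeness): since $f'$ is continuous on $[a, b]$, it is Riemann integrable, and by the first fundamental theorem the function $g(x) = \int_{a}^{x} f'(t)\,dt$ satisfies $g' = f'$ on $[a, b]$. Then $(f - g)' \equiv 0$, so $f - g$ is constant, and evaluating at $a$ and $b$ gives
$$ \int_{a}^{b} f'(t)\,dt = f(b) - f(a). $$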
If $M > 0$ is arbitrary, the function $f(x) = \phi(x) + Mx$ is differentiable and satisfies $f'(0) = M$, but $f'$ takes negative values in every interval about $0$, and therefore there is no interval about $0$ on which $f$ is increasing. This type of phenomenon does not occur for $C^{1}$ functions: if $f$ is $C^{1}$ and $f'(0) = M > 0$, then $f' > 0$ on some interval about $0$, and the Mean Value Theorem then forces $f$ to be increasing there.
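To check the claim about negative values explicitly (again a routine verification): using the formula for $\phi'$ computed above,
$$ f'(x_{k}) = M + \phi'(x_{k}) = M - 2\sqrt{2\pi k} < 0 $$
for all sufficiently large $k$, and $x_{k} = 1/\sqrt{2\pi k} \to 0$.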
Similarly, with $f$ as in the preceding item, the function $F(x) = \int_{0}^{x} f(t)\, dt$ is twice-differentiable everywhere, but not of class $C^{2}$. Though $F''(0) = M > 0$, there is no neighborhood of $0$ on which $F$ is convex.
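Unpacking that claim (a short verification under the definitions above): since $f$ is continuous, $F' = f$ everywhere by the fundamental theorem of calculus, hence $F'' = f'$ exists everywhere and $F''(0) = f'(0) = M$; but $F''$ is discontinuous at $0$, so $F$ is not of class $C^{2}$. Moreover, a differentiable $F$ is convex on an interval exactly when $F' = f$ is nondecreasing there, and the previous item shows $f$ is not increasing on any interval about $0$.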
These examples give a sampling of "well-known calculus facts" that hold for continuously differentiable functions $f$, but that go awry when the first (or second) derivative is discontinuous at even one point. In fact, there exist differentiable functions whose derivative is discontinuous on a set of full Lebesgue measure; see David Renfro's epic answer to Chris Janjigian's question How discontinuous can a derivative be? for details.