Why not define infinite derivatives?
It is a matter of convention and agreement between mathematicians.
For me, there is no problem if you say that a function is differentiable at some point $c$ even when $\lim_{h \to 0} \frac{f(c+h) - f(c)}{h}$ equals $+\infty$ or $-\infty$. This would only extend the differentiability of some functions to a larger set; for example, your function $x \mapsto x^{\frac 13}$ would, with your definition, be differentiable on all of $\mathbb R$ and not only on $\mathbb R \setminus \{0\}$. So I would not say, as you do, that derivatives that equal $+\infty$ or $-\infty$ at some points are not well-defined. They are well-defined; it is just that we usually require, by definition, that the derivative of a function at a point be a finite number.
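A quick numerical sketch of this (mine, not part of the original answer): the difference quotient of $x \mapsto x^{1/3}$ at $c = 0$ equals $h^{-2/3}$, which grows without bound as $h \to 0$, so under the extended convention one would set the derivative at $0$ to $+\infty$.

```python
def cbrt(x):
    """Real cube root, defined for all real x."""
    return x ** (1 / 3) if x >= 0 else -((-x) ** (1 / 3))

def diff_quotient(f, c, h):
    """Difference quotient (f(c+h) - f(c)) / h."""
    return (f(c + h) - f(c)) / h

# The quotient at c = 0 is cbrt(h) / h = h^(-2/3), which blows up:
for h in [1e-2, 1e-4, 1e-6]:
    print(h, diff_quotient(cbrt, 0.0, h))
```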
The situation, viewed in this way, is similar to that of series.
You could define the series $\sum_{i=1}^{\infty}a_i$ of real numbers to be convergent if the limit $\lim_{n \to \infty}\sum_{i=1}^{n}a_i$ exists in the set $\mathbb{R} \cup \{+\infty, -\infty\}$. With this definition, for example, the harmonic series $\sum_{n=1}^{\infty}\frac{1}{n}$ would be a convergent series.
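To illustrate (my sketch, not part of the answer): the partial sums of the harmonic series grow without bound, roughly like $\ln n$, so under the extended convention the series would "converge" to $+\infty$.

```python
import math

def harmonic_partial(n):
    """Partial sum 1 + 1/2 + ... + 1/n of the harmonic series."""
    return sum(1.0 / k for k in range(1, n + 1))

# Partial sums grow like ln(n) + gamma, i.e. without bound:
for n in [10, 1000, 100000]:
    print(n, harmonic_partial(n), math.log(n))
```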
The only "problem" that I see with these extended definitions of the derivative at some point and convergence of the series is that maybe we would have to, when proving some theorems, replace the assumption Suppose that $f$ is differentiable at some point $c$... with the assumption Suppose that $f$ is differentiable at some point $c$ and that the derivative at that point is not equal to $+ \infty$ or $- \infty$... (and similarly for the series(and integrals)).
So, I would say that there is nothing wrong with your extended definition.
You would lose the sum, product, and quotient rules for derivatives. You would lose the chain rule. You would lose the fact that a derivative at a point implies continuity at that point. The intermediate value theorem would no longer apply to differentiable functions. You lose the Darboux property of derivatives. Say goodbye to Taylor. Our freshmen are going to love it!
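A concrete instance of one of these losses (my example, not from the answer): the chain rule fails. Take $f(x) = x^{1/3}$ and $g(y) = y^3$. Then $(g \circ f)(x) = x$, so $(g \circ f)'(0) = 1$; but $g'(f(0)) = g'(0) = 0$ while $f'(0)$ would be $+\infty$, so the chain-rule product is the undefined form $0 \cdot \infty$.

```python
def cbrt(x):
    """Real cube root, defined for all real x."""
    return x ** (1 / 3) if x >= 0 else -((-x) ** (1 / 3))

def cube(y):
    return y ** 3

def diff_quotient(f, c, h):
    return (f(c + h) - f(c)) / h

h = 1e-6
# (g o f)(x) = x, so its difference quotient at 0 is exactly 1...
print(diff_quotient(lambda x: cube(cbrt(x)), 0.0, h))
# ...but the chain-rule factors at 0 are 0 and +infinity respectively:
print(diff_quotient(cube, 0.0, h))   # tends to g'(0) = 0
print(diff_quotient(cbrt, 0.0, h))   # blows up
```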
I agree with the above answer of @Farewell.
Another aspect worth considering in my opinion is the inverse function theorem.
If a function with "derivative" $\pm \infty$ at a point has an inverse, then in many cases the derivative of the inverse at the corresponding point will be $0$. (Basically, a vertical tangent line becomes a horizontal one when we switch the dependent and independent variables.)
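A numerical sketch of this reciprocal relationship (my illustration): for $f(y) = y^3$ with inverse $f^{-1}(x) = x^{1/3}$, the slopes at corresponding points away from $0$ multiply to $1$; at $0$, the slope of $f$ is $0$ and the difference quotient of $f^{-1}$ blows up.

```python
def cbrt(x):
    """Real cube root, defined for all real x."""
    return x ** (1 / 3) if x >= 0 else -((-x) ** (1 / 3))

def cube(y):
    return y ** 3

def diff_quotient(f, c, h):
    return (f(c + h) - f(c)) / h

h = 1e-7

# Away from 0, the slopes of cube and its inverse are reciprocals:
a = 2.0
s1 = diff_quotient(cube, a, h)          # cube'(2) = 12
s2 = diff_quotient(cbrt, cube(a), h)    # cbrt'(8) = 1/12
print(s1 * s2)  # close to 1

# At 0 the slope of cube is 0 and the quotient for cbrt blows up:
print(diff_quotient(cube, 0.0, h))
print(diff_quotient(cbrt, 0.0, h))
```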
First, let's consider some geometric problems associated with having "derivative" $\pm \infty$ for a function of one variable. At such a point, the associated tangent line would clearly be vertical.
But therein lies a major issue: how can one consistently define the slope of a vertical line? One can't; both $+\infty$ and $-\infty$ are equally reasonable choices. (This non-uniqueness problem doesn't occur for any other type of tangent line, by the way.)
Sure, in the case of $x^{1/3}$ one could argue that "by continuity" the slope should be defined to be $+\infty$. But what about $\sqrt{x}$ and $-\sqrt{x}$? By continuity, the derivative of the first at $0$ would be $+\infty$ and that of the second would be $-\infty$, yet both correspond to the same tangent line of the curve $x = y^2$.
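Numerically (my sketch, using one-sided quotients with $h > 0$ since the square root is only defined for $x \ge 0$): the quotients for the two branches diverge to opposite infinities, even though both branches share the same vertical tangent $x = 0$ of the parabola $x = y^2$.

```python
import math

def diff_quotient(f, c, h):
    return (f(c + h) - f(c)) / h

upper = lambda x: math.sqrt(x)    # upper branch of x = y^2
lower = lambda x: -math.sqrt(x)   # lower branch of x = y^2

# One-sided quotients at 0: upper tends to +infinity, lower to -infinity.
for h in [1e-2, 1e-4, 1e-6]:
    print(h, diff_quotient(upper, 0.0, h), diff_quotient(lower, 0.0, h))
```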
In more than one dimension, the geometric problems associated with trying to define an "infinite derivative" are even worse. Specifically, an "infinite derivative" would correspond to the nonexistent inverse of a singular matrix, and there are uncountably many ways in which a matrix can be singular (i.e., fail to be invertible and have determinant zero), so any attempt to assign a reasonably small collection of "pseudoinverses" to all singular matrices would not be tractable.
(Moreover, the set of invertible matrices has a nice property called "openness," similar to the idea of an open interval, which the set of non-invertible matrices simply does not have. Think of it this way: the set of real numbers with well-defined reciprocals is $(-\infty,0) \cup (0,\infty)$, two open intervals, whereas the set of real numbers without a well-defined reciprocal, $\{0\}$, is a single point, and points are "closed." A similar situation holds in higher dimensions.)
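A small illustration of that openness (mine, using $2 \times 2$ determinants by hand): a matrix with nonzero determinant keeps a nonzero determinant under a small enough perturbation, whereas a singular matrix can be made invertible by an arbitrarily small one.

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    return a * d - b * c

invertible = [[2.0, 1.0], [1.0, 1.0]]   # det = 1
singular   = [[1.0, 2.0], [2.0, 4.0]]   # det = 0 (rows are proportional)

eps = 1e-9
perturbed_inv  = [[2.0 + eps, 1.0], [1.0, 1.0]]
perturbed_sing = [[1.0 + eps, 2.0], [2.0, 4.0]]

# Invertible stays invertible; singular becomes invertible:
print(det2(invertible), det2(perturbed_inv))
print(det2(singular), det2(perturbed_sing))
```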
In the proof of the inverse function theorem (for a general number of dimensions, including $n=1$), we rely on the derivative being "non-zero" (in the generalized sense that the derivative matrix is invertible) in order to show that the function has a local inverse centered at that point.
The proof doesn't go through when the derivative is "zero," because we can't define a unique value for the derivative of the local inverse function at that point (again, this is true even for $n=1$, as I mentioned above).