$f : (0,1) \rightarrow \mathbb{R}$ continuous with non-negative right-hand derivative is non-decreasing

Suppose first that for all $x$ we have $f_{+}^{\prime}(x)>0$. Then, for every $x\in(0,1)$, we have $f(y)>f(x)$ for all $y>x$ sufficiently close to $x$.
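To spell out this first step (it is just the definition of the right-hand derivative unwound):
$$f_{+}^{\prime}(x)=\lim_{y\to x^{+}}\frac{f(y)-f(x)}{y-x}>0\quad\Longrightarrow\quad\exists\,\delta>0:\ \frac{f(y)-f(x)}{y-x}>0\ \text{ for all } y\in(x,x+\delta),$$
and since $y-x>0$ this gives $f(y)>f(x)$ for those $y$.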

Let $a,b\in (0,1)$ with $a<b$, and put $g(x)=\sup\{f(t) : t\in [a,x]\}$ for $a\leq x\leq b$. As $f$ is continuous on the compact interval $[a,x]$, there exists $c_x\in [a,x]$ such that $g(x)=f(c_x)$. Suppose that $c_x<x$. Then by the above there exists $y$ with $c_x<y<x$, close to $c_x$, such that $f(y)>f(c_x)$, contradicting the definition of $c_x$. Hence $c_x=x$, and $g(x)=f(x)$. As $g$ is obviously non-decreasing, and in fact strictly so (see the display below), we have proved that $f$ is increasing on $[a,b]$, and hence on $(0,1)$.
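For the record, here is the strictness step spelled out: for $a\leq x<x'\leq b$, pick $y\in(x,x']$ close to $x$ with $f(y)>f(x)$; then
$$f(x')=g(x')\geq f(y)>f(x),$$
so $f$ is strictly increasing on $[a,b]$.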

Now if we only have $f_+^{\prime}(x)\geq 0$, replace $f(x)$ by $f_\varepsilon(x)=f(x)+\varepsilon x$ with $\varepsilon>0$; this function is increasing by what we have said above, and we let $\varepsilon \to 0$.
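In detail, the $\varepsilon$-trick runs as follows:
$$(f_\varepsilon)_{+}^{\prime}(x)=f_{+}^{\prime}(x)+\varepsilon\geq\varepsilon>0,$$
so by the first part $f(x)+\varepsilon x<f(y)+\varepsilon y$ whenever $x<y$; letting $\varepsilon\to 0^{+}$ gives $f(x)\leq f(y)$, i.e. $f$ is non-decreasing.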


"You assume too much" as several characters in Star Wars say. Some things are easier to prove if you assume less. In this case the existence of the one-sided derivative is misleading since it encourages you to search for methods appropriate to that. There is a bit of advice for problem solving that most of us learned from Polya: if there is a problem that you can't solve find a more general (perhaps easier) problem that you can't solve.

The posted solution may look somewhat magical, since the definition of the function $g$ there might not have occurred to you. But what must certainly have occurred to you is to try for a contradiction. In other words, imagine that there are points $a<b$ with $f(a)>f(b)$. Since $f$ is continuous, there must be a last point $c$ before $b$ at which the function meets the line $y=f(a)$; beyond $c$ it stays strictly below that line. Just look carefully at the point $c$, where $f(c)=f(a)$ and where the function subsequently goes down.
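One way to make the "last point" precise, as a sketch: set
$$c=\sup\{t\in[a,b] : f(t)\geq f(a)\}.$$
Continuity gives $f(c)\geq f(a)$, and since $f(b)<f(a)$ we have $c<b$. If $f(c)>f(a)$, continuity would produce points $t>c$ with $f(t)>f(a)$, contradicting the choice of $c$; so $f(c)=f(a)$, while $f(t)<f(c)$ for all $t\in(c,b]$. Every right-hand difference quotient at $c$ is then negative, against the hypothesis.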

The "last point" argument here is quite ancient. (This is essentially what the other solution does.) It goes back at least to a paper of Dini in 1878. For that you don't need a positive right-hand derivative --- a positive upper right-hand Dini derivative suffices. You also don't need it everywhere, you can have a countable set of exceptional points where you don't know this but that is harder to prove. In fact there is a generalization due to Zygmund where you can assume even less.

The most accessible reference for this and similar ideas is Saks, Theory of the Integral, pp. 203-204. This was published in 1937, so scanned copies can be found on the internet with a bit of searching.

I gather the problem was a homework assignment and the originator is happy enough to solve it and get on with other stuff. Do remember, however, that this is training for research, and that it pays to think more deeply about most problems and to pursue the history where possible.


Of course methods are more important than answers, so I hope you will indulge another answer to this problem. Our amico appassionato who posted the question suggested a method that he couldn't make work. But the idea was fine, and it does work if you persist.

We assume that the continuous function $f$ has a positive right-hand derivative at each point (or even just a positive upper right-hand Dini derivative). We fix $x<y$ and want to conclude that $f(y)>f(x)$.

Set $x_0=x$ and inductively choose $x_{n}<x_{n+1}<y$ so that $f(x_{n+1})>f(x_n)$ for every $n$, as justified below.

[This is more or less exactly what the poster had in mind.]
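The inductive step is available precisely because the (upper right-hand Dini) derivative at $x_n$ is positive, so there are points immediately to the right of $x_n$ where $f$ is larger:
$$D^{+}f(x_n)>0\quad\Longrightarrow\quad\exists\,x_{n+1}\in(x_n,y)\ \text{ with } f(x_{n+1})>f(x_n).$$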

Define $x_\omega$ as the limit of the increasing sequence $\{x_n\}$. Unfortunately $x_\omega$ may not be $y$, but at least, using continuity, we know that $f(x_\omega)>f(x)$. Just keep on going. This will require transfinite induction, but in a countable number of steps you will certainly reach $y$.
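A sketch of the transfinite scheme, for those who have not seen one: given $x_\alpha<y$, choose $x_{\alpha+1}\in(x_\alpha,y]$ with $f(x_{\alpha+1})>f(x_\alpha)$; at a limit ordinal $\lambda$, set
$$x_\lambda=\sup_{\alpha<\lambda}x_\alpha,\qquad f(x_\lambda)=\lim_{\alpha\to\lambda}f(x_\alpha)>f(x),$$
the equality coming from continuity, since the values $f(x_\alpha)$ increase. The points $x_\alpha$ are strictly increasing, and a strictly increasing well-ordered family of reals is countable (each step swallows a fresh rational), so the construction must terminate; since a positive derivative always supplies a next point below $y$, it can only terminate by arriving at $y$.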

The intervals $[x_\alpha, x_{\alpha+1}]$ form what is known as a Lebesgue chain and solve the problem much in the way that the poster thought might work. The only extra idea needed was to drop the hope that it could be done in a finite number of steps. This method used to be rather popular long ago. Is it still taught?

[Added: Oh, by the way, the hope for a finite number of intervals (instead of a countable number as here) should have disappeared pretty fast once you realize that you would then have no need for continuity. But a simple counterexample, such as the one below, shows the problem doesn't work without continuity or some similar assumption.]
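For instance, one standard example with a jump: on $(0,1)$ put
$$f(t)=\begin{cases}t, & t<\tfrac12,\\ t-1, & t\geq\tfrac12.\end{cases}$$
Then $f_{+}^{\prime}(t)=1>0$ at every point of $(0,1)$, including $t=\tfrac12$, where the quotient is $\frac{(\frac12+h-1)-(\frac12-1)}{h}=1$ (a right-hand derivative never sees the jump), yet $f(0.4)=0.4>-0.4=f(0.6)$, so $f$ is not non-decreasing.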