A very general method for proving inequalities. Too good to be true?
Actually, this is one approach to proving inequalities, and it is sometimes used. But there are some caveats that need attention:
It is not always the case that those multi-variable functions are differentiable at all points you are interested in. (And in some cases the function is differentiable a.e., but is so bizarre that its derivative carries little information about the function's variation; the Cantor function, which climbs from $0$ to $1$ although its derivative is $0$ almost everywhere, is the classic example. This is rare, though; it tends to arise in theory rather than in practical use cases.)
Even if the first caveat is not an issue, it is sometimes not easy to obtain the derivative: it can be tricky to compute derivative values at certain points, the derivative can be unwieldy when the function is a nested combination of elementary operations/functions, it may have no closed form when the function is not elementary, and so on. (Moreover, if the derivative has a complicated form, it is also difficult to solve the equations that set the derivatives to zero.)
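To illustrate that last point with a one-variable example of my own (not from the original): take $f(x) = x^2 + \sin x$. The derivative is easy to write down, $f'(x) = 2x + \cos x$, but the critical-point equation $$2x + \cos x = 0$$ is transcendental and has no closed-form solution; you have to fall back on numerical root-finding.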
Even if the second caveat is not an issue, a point where all first-order partial derivatives vanish is not guaranteed to be a local minimum/maximum; you might just get a saddle point. So second-order derivatives need to be checked as well (and note there is no guarantee that second-order derivatives exist).
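The standard saddle example (my addition): $F(x,y) = x^2 - y^2$ has $\partial F/\partial x = 2x$ and $\partial F/\partial y = -2y$, both zero at the origin, yet $F$ increases along the $x$-axis and decreases along the $y$-axis, so $(0,0)$ is neither a maximum nor a minimum; the Hessian $\begin{pmatrix} 2 & 0 \\ 0 & -2 \end{pmatrix}$ is indefinite, which is exactly what the second-order check detects.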
Even if the third caveat is not an issue, you still cannot be sure that a local maximum/minimum is the global maximum/minimum; for example, values at the boundary of the domain could exceed the local maximum you found. So check more before you draw the final conclusion. For example, when finding the global maximum (a worked sketch follows the list):
- Compare all local maximum points (if there are any, and make sure they really are local maxima);
- Compare all points on the boundary (if there are any);
- Check the function's behavior as it approaches infinity (if applicable);
- Check the points where first-order derivatives do not exist (if any): these could be maximum points too;
- and so on.
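Here is the promised sketch, with numbers of my own choosing: take $f(x) = x^3 - 3x$ on $[-3, 3]$. Solving $f'(x) = 3x^2 - 3 = 0$ gives $x = \pm 1$, and $x = -1$ is a genuine local maximum with $f(-1) = 2$; but the boundary value $f(3) = 27 - 9 = 18$ is far larger, so the global maximum sits at the endpoint, not at the critical point.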
The answer by @yujie-zha misses a few important points, so let me elaborate on this.
Congratulations, you rediscovered a well-known theorem which is sometimes called in France the “principe de Fermat,” sharing its name with a reflection law in optics and with the observation that there is no strictly decreasing sequence of positive integers. Pierre de Fermat was a French mathematician who lived in the 17th century.
Your discussion covers a lot of useful cases, but it has two major flaws:
A point where the quantity $F = f - g$ reaches its minimum does not need to exist. A simple example is provided by $F(x) = 1/x$ for $x\gt0$.
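To spell the example out (my elaboration): $1/x \gt 0$ for every $x \gt 0$, so the inequality $1/x \gt 0$ certainly holds, yet $\inf_{x\gt0} 1/x = 0$ is never attained, and $F'(x) = -1/x^2 \lt 0$ everywhere, so setting the derivative to zero produces no candidate point at all.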
If such a point exists, the geometry of the set where the minimum of $F$ is realised can be arbitrarily complicated, so even if it has infinitely many points, it does not need to be so nice that you can actually remove a variable. Functions $F$ that allow one to remove a variable are called submersions. Furthermore, finding that point can be impracticable.
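As an illustration of what "removing a variable" means in the nice case (my example, not from the original): $F(x, y) = x^2 + y^2 - 1$ has gradient $\nabla F = (2x, 2y)$, which never vanishes on the zero locus $\{F = 0\}$, so the implicit function theorem lets you solve locally for one variable, e.g. $y = \sqrt{1 - x^2}$ near any point of the upper unit circle.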
Regarding the first flaw, a common hypothesis guaranteeing that the minimum is actually attained is that the function $F$ we study is continuous and defined over a compact domain. This might not tell you much for now, but you can see what happens if you try to build a function that is always positive on the real line but whose minimum “flees to infinity” – a colloquial way of saying that it is asymptotically zero, if you are already familiar with these terms.
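If you want to check your construction against something concrete, here is one such function (my example): $F(x) = \dfrac{1}{1+x^2}$ is continuous and strictly positive on all of $\mathbb{R}$, and $\inf_{\mathbb{R}} F = 0$ is approached as $x \to \pm\infty$ but never attained; restricted to a compact interval $[-a, a]$, however, the extreme value theorem applies and the minimum is attained (at the endpoints $x = \pm a$).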
Regarding the second flaw, a theorem by Borel states that any closed subset of a Euclidean space is the zero locus of a smooth function. This will also probably not tell you much, but you can try to find functions $F$ that are positive, everywhere differentiable over the real line, and whose zero locus is:
- The set of integers.
- The set of inverses of integers, together with the point $0$.
- The segment $[-1,1]$.
- The Cantor set (if you know this one).
You can also look for similar examples in higher dimensions.
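In case you want a starting point, here is one possible answer to the first item (mine, not from the original): $F(x) = \sin^2(\pi x)$ is smooth, nonnegative, and its zero locus is exactly the set of integers.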
This aside, the method you suggest actually works, but it happens to be not as easy to use as you seem to think. Other common sources of inequalities are convex functions, especially Jensen's inequality, the Cauchy-Schwarz or Cauchy-Bouniakovski inequality, and the modulus of a holomorphic function.
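For reference, the standard statements (added by me; Jensen is given with equal weights): Cauchy-Schwarz says $\left(\sum_i a_i b_i\right)^2 \le \left(\sum_i a_i^2\right)\left(\sum_i b_i^2\right)$, and Jensen's inequality says $\varphi\!\left(\frac{1}{n}\sum_i x_i\right) \le \frac{1}{n}\sum_i \varphi(x_i)$ for any convex function $\varphi$.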
There are a few issues with this method.
- First, it's unclear whether you are trying to solve an inequality or prove an inequality. They're not the same thing: proving an inequality means showing that it's true for all values of the variables (as for example the inequality $(x+y)^2 \ge 4xy$), whereas solving an inequality means finding the values of the variables for which the inequality is true (as for example the inequality $x^2 - 1 < 2x$, which is true for $x \in (1-\sqrt{2},1+\sqrt{2})$).
Of course it could be argued that part of solving an inequality always involves a proof, but it is still important to know whether you are trying to show that a given inequality holds for all values of the variable(s) or only for some values of the variable(s).
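To make the "solving" half concrete, here is how the second example unwinds (routine algebra, not spelled out above): $$x^2 - 1 < 2x \iff x^2 - 2x - 1 < 0 \iff (x-1)^2 < 2 \iff |x-1| < \sqrt{2},$$ which is exactly $x \in (1-\sqrt{2}, 1+\sqrt{2})$.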
- You write:
> Now, in order to prove our inequality, we have to prove that the LHS has a minimum value and that that minimum value is greater than or equal to 0
This is not true. For a simple counterexample, consider the inequality $e^{-x}>-2$, which is definitely true for all (real) $x$. But the function $f(x)=e^{-x}+2$ has no minimum value. It has a lower bound, even a greatest lower bound (namely $2$, since $f(x) > 2$ for all $x$), but it never actually attains that greatest lower bound.
Even if a function does attain a minimum value, there is no guarantee that the derivative there is $0$. For example, the function $f(x)=x^{2/3}$ is defined on all of $\mathbb{R}$ and attains its minimum value at $x=0$, but the derivative there is not $0$ because the function isn't differentiable at $x=0$. In general, remember that the critical points of a function occur where the derivative is zero or undefined.
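Concretely (my computation): $f(x) = x^{2/3}$ has $f'(x) = \frac{2}{3}x^{-1/3}$ for $x \neq 0$, which blows up near the origin; the graph has a cusp there, the minimum $f(0) = 0$ is attained, but no equation of the form $f'(x) = 0$ will ever locate it.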
If you are dealing with a function that is defined on some restricted domain (rather than on the whole real line), the minimum value might be attained at an endpoint rather than at a critical point. For example, if we consider the function $f(x)=\tan(x)$ defined on $[0, \pi/2)$ (which would be the reasonable domain if $x$ represents an angle in a right triangle), then the minimum occurs at $x=0$.
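Here too the derivative alone is no help (my computation): $f'(x) = \sec^2 x \ge 1$ on $[0, \pi/2)$, so there is no interior critical point, and the minimum $\tan 0 = 0$ sits at the endpoint $x = 0$.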
Having said all of this, it is true that if you can find the global minimum of a function and show that the minimum value is nonnegative (or positive, if the inequality is strict), that would establish the inequality you are trying to prove. But that's a pretty big "if".