Baby Rudin: Theorem 3.20 (c) and (d)

For the first one, assuming $x\geq0$:

$$ \begin{align} (1+x)^n &= \sum_{k=0}^n \binom{n}{k}x^k \\ &=1+nx+\frac{n(n-1)}{2}x^2+\cdots+x^n \\ &\geq \frac{n(n-1)}{2}x^2 \end{align} $$
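As a quick numerical sanity check (not part of Rudin's argument), a short script can verify the inequality $(1+x)^n \geq \frac{n(n-1)}{2}x^2$ on sample values:

```python
import math

# Verify (1+x)^n >= n(n-1)/2 * x^2 for sample x >= 0 and n >= 2.
for n in range(2, 20):
    for x in [0.0, 0.1, 0.5, 1.0, 2.5, 10.0]:
        lhs = (1 + x) ** n
        rhs = n * (n - 1) / 2 * x ** 2
        assert lhs >= rhs, (n, x)
print("inequality holds on all samples")
```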

For the second one, assuming $n > 2k$ (so that each factor $1-\frac{j}{n}$ with $j \leq k-1$ exceeds $\frac12$):

$$ \begin{align} \binom{n}{k}&= \frac{n(n-1)\cdots(n-k+1)}{k!} \\ &=\frac{1\left(1-\frac{1}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)}{k!}n^k \\ &>\frac{(1/2)(1/2)\cdots(1/2)}{k!}n^k \\ &=\frac{n^k}{2^kk!} \end{align} $$
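A brief numerical check of the resulting bound $\binom{n}{k} > \frac{n^k}{2^k k!}$ for $n > 2k$ (a hypothetical verification script, not part of the proof):

```python
import math

# For n > 2k, check C(n, k) > n^k / (2^k * k!) on a grid of samples.
for k in range(1, 8):
    for n in range(2 * k + 1, 2 * k + 30):
        lhs = math.comb(n, k)
        rhs = n ** k / (2 ** k * math.factorial(k))
        assert lhs > rhs, (n, k)
print("bound holds on all samples")
```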


Prove that the sequence $v_n = \frac{n^{\alpha }}{(1+p)^n}$ goes to $0$, where $p \gt 0$ and $\alpha$ is any real number.

It might be helpful to examine what this is saying before jumping into the proof. To give $v_n$ a tough time (of going to $0$), think of a really big $\alpha$, like $10^6$, and keep $p$ small, like $10^{-6}$ (a big numerator and a small denominator). Proving (d) amounts to showing that exponential growth (the denominator) trumps 'polynomial' growth (the numerator). Instead of $(1+p)^n$ in the denominator, why didn't Rudin use $\beta^n$ with $\beta \gt 1$? Ans: writing the base as $1+p$ lets him 'crank out' a 'polynomial type thing' from the exponential via the binomial theorem.
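To see the claim numerically (a hypothetical sanity check with parameters chosen for illustration), it is easier to work with $\log v_n = \alpha\log n - n\log(1+p)$, since $v_n$ itself overflows for large $\alpha$; the logs should head toward $-\infty$:

```python
import math

# log(v_n) = alpha*log(n) - n*log(1+p); v_n -> 0 iff log(v_n) -> -inf.
alpha, p = 100.0, 0.01  # illustrative values, not from the text
log_v = [alpha * math.log(n) - n * math.log(1 + p)
         for n in (10 ** 3, 10 ** 5, 10 ** 7)]
# On these sample points the logs are strictly decreasing,
# and the last one is far below zero.
assert log_v[0] > log_v[1] > log_v[2]
print(log_v)
```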

Now, it only gets interesting when $\alpha$ is large, so naturally we are hoping that by taking an integer $k \gt \alpha$ we can make some headway. Also, if you expand $(1 + p)^n$ you are hoping to find a single term that does the trick. Considering the shape of Pascal's Triangle, the big number 'meat' should be found in the middle. So the first thing to try is letting $n$ be at least twice as big as $k$.

So, by inspection, if $n = 2k$ (each of the $k$ factors $2k, 2k-1, \ldots, k+1$ in the numerator of $\binom{2k}{k}$ exceeds $k$),
$${2k \choose k} \gt \frac{k^k}{k!} = \frac{(2k)^k}{2^k k!} = \frac{n^k}{2^k k!}$$
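This chain can be checked numerically (a small verification sketch, not part of the answer's argument); note that the middle equality is just $(2k)^k = 2^k k^k$:

```python
import math

# Check C(2k, k) > k^k / k!, and that k^k / k! equals (2k)^k / (2^k * k!).
for k in range(1, 15):
    assert math.comb(2 * k, k) > k ** k / math.factorial(k), k
    assert k ** k * 2 ** k == (2 * k) ** k  # so the two fractions agree
print("ok")
```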

These are some of the tactics behind the terse proof supplied by Rudin.


The first inequality can be derived, for $x\ge0$ and $n\ge2$, from the binomial theorem: $$ (1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k $$ If we discard all the (nonnegative) terms except the one for $k=2$, we get $$ (1+x)^n\ge \binom{n}{2}x^2=\frac{n(n-1)}{2}x^2 $$