(Sharp) inequality for Beta function
You can get away with the usual distribution function mumbo-jumbo. The general lemma is as follows:
Let $\mu,\nu$ be non-negative measures and $f,g$ be non-negative functions such that there exists $s_0>0$ with the property that $\mu\{f>s\}\ge \nu\{g>s\}$ for $s\le s_0$ and the reverse inequality holds for $s\ge s_0$. Suppose also that $\int f^q\,d\mu=\int g^q\,d\nu<+\infty$ for some $q>0$. Then, as long as the integrals in question are finite, we have $\int f^p\,d\mu\ge \int g^p\,d\nu$ for $0<p\le q$ and the reverse inequality holds for $p\ge q$.
The proof of the lemma is rather straightforward. Let $p\le q$ (that is the case you are really interested in). Then $$ \int f^p\,d\mu-\int g^p\,d\nu=p\int_0^\infty s^p[\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s \\ =p\int_0^\infty [s^p-s_0^{p-q}s^q][\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s\ge 0\,. $$ Here the second equality holds because $\int_0^\infty s^q[\mu\{f>s\}-\nu\{g>s\}]\frac{ds}s=\frac1q\left(\int f^q\,d\mu-\int g^q\,d\nu\right)=0$, and the last integrand is non-negative because $s^p-s_0^{p-q}s^q$ and $\mu\{f>s\}-\nu\{g>s\}$ are both $\ge 0$ for $s\le s_0$ and both $\le 0$ for $s\ge s_0$ (recall that $p\le q$).
Now we use it with $f(t)=t(1-t)^x$, $d\mu=\frac{dt}{t(1-t)}$ on $(0,1)$, $g(t)=t$, $d\nu=\frac{dt}{t}$ on $(0,\frac1x)$, so that $\int f^p\,d\mu=\int_0^1 t^{p-1}(1-t)^{px-1}\,dt=B(p,px)$ and $\int g^p\,d\nu=\int_0^{1/x}t^{p-1}\,dt=\frac1{px^p}$. Since the maximum of $t(1-t)^x$ is attained at $t=\frac{1}{x+1}$ and equals $\frac1{x+1}\left(\frac x{x+1}\right)^x<\frac1x$, the function $s\mapsto \mu\{f>s\}$ drops to $0$ before the function $s\mapsto \nu\{g>s\}$ does. Also, the first function has a negative derivative that is larger in absolute value than that of the second one at every value of $s$ where the first is still positive. To see this, notice that the set where $f>s$ is an interval $(u,v)=(u(s),v(s))$ that shrinks as $s$ increases, and the left end $u$ of this interval satisfies $$ du\left(\frac 1u-\frac x{1-u}\right)=\frac{ds}s\,, $$ so trivially $$ \frac{du}{u(1-u)}\ge \frac{du}u>\frac {ds}s\,. $$ The left-hand side is the $\mu$-measure that $\{f>s\}$ loses at its left end, while $\frac{ds}s$ is exactly the rate at which $\nu\{g>s\}=\log\frac1{xs}$ decreases; the right end moving to the left can only increase the decay speed. Finally, for $q=1$, the integrals are equal: $\int f\,d\mu=\int_0^1(1-t)^{x-1}\,dt=\frac1x=\int_0^{1/x}dt=\int g\,d\nu$ (which also shows that the graphs of the distribution functions must indeed intersect), so for $0<p\le 1$ (which plays the role of $\alpha$), the lemma gives $B(\alpha,\alpha x)\ge\frac1{\alpha x^\alpha}$, which is the desired inequality.
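For what it's worth, here is a quick numerical sanity check of this instance of the lemma (it is not part of the argument and assumes SciPy is available; `scipy.special.beta` computes $B(a,b)$):

```python
# Sanity check: int f^p dmu = B(p, p*x)  vs  int g^p dnu = 1/(p * x**p).
# Expect ">=" for p <= 1 (equality at p = 1) and "<=" for p >= 1.
from scipy.special import beta

x = 5.0
for p in (0.25, 0.5, 0.75, 1.0, 1.5, 2.0):
    lhs = beta(p, p * x)          # corresponds to the integral of f^p dmu
    rhs = 1.0 / (p * x ** p)      # corresponds to the integral of g^p dnu
    print(f"p={p}: B(p, px)={lhs:.6f}  1/(p x^p)={rhs:.6f}  {'>=' if lhs >= rhs else '<='}")
```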
One can also use Jensen's inequality. For $\sigma>0$, let $G_\sigma$ denote a random variable with $\Gamma(1,\sigma)$-distribution, i.e. with Lebesgue density $$f_\sigma(t)=\frac{t^{\sigma-1}}{\Gamma(\sigma)} e^{-t}\;1_{(0,\infty)}(t)\;,$$ so that $\mathbb{E}(G_\sigma)=\sigma$. Since $\alpha\in (0,1)$, the functions $t\mapsto t^\alpha$ and $t\mapsto t^{1-\alpha}$ on $\mathbb{R}_+$ are concave. By Jensen's inequality, $$\frac{\Gamma(\alpha+\alpha x)}{\Gamma(\alpha x)}=\mathbb{E}(G_{x\alpha}^\alpha)\leq \left(\mathbb{E}(G_{x\alpha})\right)^\alpha=(x\alpha)^{\alpha}$$
and $$\frac{1}{\Gamma(\alpha)}=\mathbb{E}\left(G_\alpha^{1-\alpha}\right)\leq\left(\mathbb{E}(G_{\alpha})\right)^{1-\alpha}=\frac{1}{\alpha^{\alpha-1}}\,.$$ Combining these two bounds gives $$B(\alpha,x \alpha)=\frac{\Gamma(\alpha)\,\Gamma(x\alpha)}{\Gamma(\alpha +x\alpha)}\geq \frac{\Gamma(\alpha)}{\alpha^\alpha x^\alpha}\geq \frac{\Gamma(\alpha)}{\alpha\,\Gamma(\alpha)\,x^\alpha}=\frac{1}{\alpha x^\alpha},$$ as desired.
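Again just as an illustration (not needed for the proof), one can check the two Jensen bounds and the resulting inequality numerically, assuming SciPy is available:

```python
# Check the two Jensen bounds and the final inequality for a few sample values.
from scipy.special import beta, gamma

for alpha in (0.2, 0.5, 0.8):
    for x in (1.0, 3.0, 10.0):
        jensen1 = gamma(alpha + alpha * x) / gamma(alpha * x) <= (alpha * x) ** alpha
        jensen2 = 1.0 / gamma(alpha) <= alpha ** (1 - alpha)
        final = beta(alpha, alpha * x) >= 1.0 / (alpha * x ** alpha)
        print(alpha, x, jensen1, jensen2, final)   # expect True everywhere
```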
This is an attempt to strengthen your claim.
For large $x$ (and fixed $y>0$) we have $B(x,y)\sim \Gamma(y)x^{-y}$, and hence $$B(\alpha x,\alpha)\sim \Gamma(\alpha)(\alpha x)^{-\alpha}\qquad(x\to\infty),$$ where $\Gamma(z)$ is the Euler Gamma function.
On the other hand, for small $\alpha$, we have the expansion $$\Gamma(1+\alpha)=1+\alpha\Gamma'(1)+\mathcal{O}(\alpha^2).$$ Since $\alpha\Gamma(\alpha)=\Gamma(1+\alpha)$ and $\Gamma'(1)=-\gamma$, it follows that $$\Gamma(\alpha)= \frac1{\alpha}-\gamma+\mathcal{O}(\alpha),$$ where $\gamma$ is the Euler–Mascheroni constant.
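As a quick check of this expansion (illustrative only, assuming SciPy/NumPy), compare $\Gamma(\alpha)$ with $\frac1\alpha-\gamma$ for small $\alpha$:

```python
# Compare Gamma(alpha) with 1/alpha - gamma for small alpha.
import numpy as np
from scipy.special import gamma

for a in (0.2, 0.1, 0.05, 0.01):
    print(a, gamma(a), 1.0 / a - np.euler_gamma)   # difference is O(a)
```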
We may now combine the above two estimates to obtain $$\alpha x^{\alpha}B(\alpha x,\alpha)\sim \alpha x^{\alpha}\left(\frac1{\alpha}-\gamma\right)(\alpha x)^{-\alpha}=\left(\frac1{\alpha}-\gamma\right)\alpha^{1-\alpha}\geq1$$ provided $\alpha$ is small enough. For example, $0<\alpha<\frac12$ works.
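One can also look at $\alpha x^\alpha B(\alpha x,\alpha)$ directly for large $x$ and compare it with the limiting value $\left(\frac1\alpha-\gamma\right)\alpha^{1-\alpha}$ (again only a numerical illustration, assuming SciPy/NumPy):

```python
# alpha * x**alpha * B(alpha*x, alpha) for large x, versus its approximate limit.
import numpy as np
from scipy.special import beta

for alpha in (0.1, 0.25, 0.4):
    approx_limit = (1.0 / alpha - np.euler_gamma) * alpha ** (1 - alpha)
    for x in (10.0, 100.0, 1000.0):
        val = alpha * x ** alpha * beta(alpha * x, alpha)
        print(f"alpha={alpha}, x={x}: {val:.4f}  (approx. limit {approx_limit:.4f})")
```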