Expected absolute difference between two iid variables
Throughout this answer, we will fix $\alpha \in (0, 1]$.
Let $\mathcal{M}$ denote the set of all finite signed Borel measures on $[0, 1]$ and $\mathcal{P} \subset \mathcal{M}$ denote the set of all Borel probability measures on $[0, 1]$. Also, define the pairing $\langle \cdot, \cdot \rangle$ on $\mathcal{M}$ by
$$ \forall \mu, \nu \in \mathcal{M}: \qquad \langle \mu, \nu\rangle = \int_{[0,1]^2} |x - y|^{\alpha} \, \mu(dx)\nu(dy). $$
We also write $I(\mu) = \langle \mu, \mu\rangle$. Then we prove the following claim.
Proposition. If $\mu \in \mathcal{P}$ satisfies $\langle \mu, \delta_{t} \rangle = \langle \mu, \delta_{s} \rangle$ for all $s, t \in [0, 1]$, then $$I(\mu) = \max\{ I(\nu) : \nu \in \mathcal{P}\}.$$
We defer the proof of the Proposition to the end and first enjoy its consequences.
When $\alpha = 1$, it is easy to see that the choice $\mu_1 = \frac{1}{2}(\delta_0 + \delta_1)$ works: indeed, $\langle \mu_1, \delta_t \rangle = \frac{1}{2}\bigl(t + (1-t)\bigr) = \frac{1}{2}$ for every $t \in [0, 1]$.
When $\alpha \in (0, 1)$, we claim that the Proposition applies to $\mu_{\alpha}(dx) = f_{\alpha}(x) \, dx$ (the $\operatorname{Beta}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})$ distribution), where $f_{\alpha}$ is given by
$$ f_{\alpha}(x) = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \cdot \frac{1}{(x(1-x))^{\frac{1+\alpha}{2}}}. $$
Indeed, for $y \in [0, 1]$, apply the substitution $x = \frac{1+\sin\theta}{2}$ (so that $dx = \frac{\cos\theta}{2}\,d\theta$ and $x(1-x) = \frac{\cos^2\theta}{4}$) and write $k = 2y-1$ to obtain
$$ \langle \mu_{\alpha}, \delta_y \rangle = \int_{0}^{1} |y - x|^{\alpha} f_{\alpha}(x) \, dx = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \left| \frac{\sin\theta - k}{\cos\theta} \right|^{\alpha} \, d\theta. $$
Now let $\omega(t) = \operatorname{Leb}\left( \left\{ \theta \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right) \ : \ \left| \frac{\sin\theta - k}{\cos\theta} \right| > t \right\} \right)$ for $t > 0$. One can check that $\omega(t) = \pi - 2\arctan t$, which is independent of $k$ (and hence of $y$); a short computation is sketched below.
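One way to carry out this check (a sketch, for $|k| < 1$; the endpoint cases $y \in \{0, 1\}$ follow by a similar direct computation): the map $\theta \mapsto \frac{\sin\theta - k}{\cos\theta}$ is strictly increasing on $\left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$, since its derivative is $\frac{1 - k\sin\theta}{\cos^2\theta} > 0$, and it runs from $-\infty$ to $+\infty$. Its level sets are found from
$$ \frac{\sin\theta - k}{\cos\theta} = \pm t \iff \sqrt{1+t^2}\,\sin\left(\theta \mp \arctan t\right) = k \iff \theta_{\pm t} = \pm \arctan t + \arcsin\frac{k}{\sqrt{1+t^2}}, $$
so that $\left\{ \left|\frac{\sin\theta - k}{\cos\theta}\right| > t \right\} = \left(-\frac{\pi}{2}, \theta_{-t}\right) \cup \left(\theta_{t}, \frac{\pi}{2}\right)$ and
$$ \omega(t) = \left(\theta_{-t} + \frac{\pi}{2}\right) + \left(\frac{\pi}{2} - \theta_{t}\right) = \pi - 2\arctan t; $$
the $\arcsin$ terms cancel, which is why the result does not depend on $k$.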
$$ \langle \mu_{\alpha}, \delta_y \rangle = \frac{1}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})} \int_{0}^{\infty} \frac{2t^{\alpha}}{1+t^2} \, dt = \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})} $$
Integrating both sides over $\mu_{\alpha}(dy)$ shows that this constant is also the value of $I(\mu_{\alpha})$.
So it follows that
\begin{align*} &\max \{ \mathbb{E} [ |X - Y|^{\alpha}] : X, Y \text{ i.i.d. and } \mathbb{P}(X \in [0, 1]) = 1 \} \\ &\hspace{1.5em} = \max_{\mu \in \mathcal{P}} I(\mu) = I(\mu_{\alpha}) = \frac{\pi}{\operatorname{B}(\frac{1-\alpha}{2},\frac{1-\alpha}{2})\cos(\frac{\pi\alpha}{2})}. \end{align*}
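As a quick numerical sanity check (a sketch in Python with mpmath, taking $\alpha = \frac{1}{2}$ and a few values of $y$; not part of the argument), one can verify both the constancy of $\langle \mu_{\alpha}, \delta_y \rangle$ in $y$ and the closed-form value:

```python
# Check that int_0^1 |y - x|^alpha f_alpha(x) dx does not depend on y and equals
# pi / (B((1-alpha)/2, (1-alpha)/2) * cos(pi*alpha/2)); here alpha = 1/2.
from mpmath import mp, mpf, quad, beta, cos, pi

mp.dps = 30
alpha = mpf(1) / 2
B = beta((1 - alpha) / 2, (1 - alpha) / 2)
closed_form = pi / (B * cos(pi * alpha / 2))

def f_alpha(x):
    return 1 / (B * (x * (1 - x)) ** ((1 + alpha) / 2))

for y in [mpf("0.1"), mpf("0.5"), mpf("0.9")]:
    # split the integral at the singular endpoints 0, 1 and at the kink x = y
    val = quad(lambda x: abs(y - x) ** alpha * f_alpha(x), [0, y, 1])
    print(y, val, closed_form)   # the last two columns agree, ~0.599070...
```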
Notice that this also matches the numerical value obtained by Kevin Costello, since
$$ I(\mu_{1/2}) = \frac{\sqrt{2}\pi^{3/2}}{\Gamma\left(\frac{1}{4}\right)^2} \approx 0.59907011736779610372\cdots. $$
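Since $\mu_{1/2}$ is just the $\operatorname{Beta}(\frac{1}{4}, \frac{1}{4})$ distribution, this value can also be reproduced by a crude Monte Carlo simulation (a sketch with numpy):

```python
# Sample X, Y i.i.d. from Beta(1/4, 1/4) (= mu_{1/2}) and average |X - Y|^{1/2}.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5
a = (1 - alpha) / 2                      # Beta parameter, = 1/4 here
X = rng.beta(a, a, size=10**6)
Y = rng.beta(a, a, size=10**6)
print(np.mean(np.abs(X - Y) ** alpha))   # ~ 0.599
```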
[Figure: the graph of $\alpha \mapsto I(\mu_{\alpha})$ on $(0, 1]$.]
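A short script (a sketch with numpy/scipy/matplotlib) reproducing this graph from the closed form; note that $I(\mu_{\alpha}) \to 1$ as $\alpha \to 0^{+}$ and $I(\mu_{\alpha}) \to \frac{1}{2} = I(\mu_1)$ as $\alpha \to 1^{-}$:

```python
# Plot alpha -> I(mu_alpha) = pi / (B((1-alpha)/2, (1-alpha)/2) * cos(pi*alpha/2)).
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import beta

alpha = np.linspace(0.001, 0.999, 500)
I = np.pi / (beta((1 - alpha) / 2, (1 - alpha) / 2) * np.cos(np.pi * alpha / 2))

plt.plot(alpha, I)
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$I(\mu_\alpha)$")
plt.show()
```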
Proof of Proposition. We first prove the following lemma.
Lemma. If $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$, then we have $I(\mu) \leq 0$.
Indeed, notice that there exists a constant $c > 0$ for which
$$ \forall x \in \mathbb{R}: \qquad |x|^{\alpha} = c\int_{0}^{\infty} \frac{1 - \cos (xt)}{t^{1+\alpha}} \, dt $$
holds; this follows from the convergence of the integral and the substitution $|x|t \mapsto t$. So by Tonelli's theorem, for any nonnegative $\mu, \nu \in \mathcal{M}$,
\begin{align*} \langle \mu, \nu \rangle &= c\int_{0}^{\infty} \int_{[0,1]^2} \frac{1 - \cos ((x - y)t)}{t^{1+\alpha}} \, \mu(dx)\nu(dy)dt \\ &= c\int_{0}^{\infty} \frac{\hat{\mu}(0)\hat{\nu}(0) - \operatorname{Re}( \hat{\mu}(t)\overline{\hat{\nu}(t)} )}{t^{1+\alpha}} \, dt, \end{align*}
where $\hat{\mu}(t) = \int_{[0,1]} e^{itx} \, \mu(dx)$ denotes the Fourier transform of $\mu$. In particular, the right-hand side is integrable, and by linearity (via the Jordan decomposition) the identity extends to all pairs $\mu, \nu \in \mathcal{M}$. Hence, if $\mu \in \mathcal{M}$ satisfies $\mu([0,1]) = 0$, then $\hat{\mu}(0) = 0$ and thus
$$ I(\mu) = -c\int_{0}^{\infty} \frac{|\hat{\mu}(t)|^2}{t^{1+\alpha}} \, dt \leq 0, $$
completing the proof of the Lemma. ////
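As an aside, the substitution $|x|t \mapsto t$ shows that $c^{-1} = \int_{0}^{\infty} \frac{1 - \cos u}{u^{1+\alpha}} \, du$, and a standard evaluation (integration by parts plus the Mellin integral of $\sin$) gives $c = \frac{\alpha}{\Gamma(1-\alpha)\cos(\pi\alpha/2)}$. Only $c > 0$ is needed above, but here is a quick numerical check of the representation under this closed form (a sketch with mpmath):

```python
# Check |x|^alpha = c * int_0^inf (1 - cos(x t)) / t^(1+alpha) dt numerically
# for alpha = 1/2, with c = alpha / (Gamma(1-alpha) * cos(pi*alpha/2)).
from mpmath import mp, mpf, sin, cos, pi, gamma, quad, quadosc, inf

mp.dps = 25
alpha = mpf(1) / 2
c = alpha / (gamma(1 - alpha) * cos(pi * alpha / 2))

for x in [mpf("0.3"), mpf("0.75"), mpf(1)]:
    # on [0, 1]: write 1 - cos(x t) = 2 sin(x t / 2)^2 for numerical stability
    head = quad(lambda t: 2 * sin(x * t / 2) ** 2 / t ** (1 + alpha), [0, 1])
    # on [1, oo): 1/t^(1+alpha) integrates to 1/alpha, and the oscillatory part
    # cos(x t) / t^(1+alpha) is handled by quadosc
    tail = 1 / alpha - quadosc(lambda t: cos(x * t) / t ** (1 + alpha),
                               [1, inf], period=2 * pi / x)
    print(abs(x) ** alpha, c * (head + tail))   # the two numbers agree
```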
Let us return to the proof of the Proposition. Let $m$ denote the common value of $\langle \mu, \delta_t\rangle$ for $t \in [0, 1]$. Then for any $\nu \in \mathcal{P}$,
$$ \langle \mu, \nu \rangle = \int \left( \int_{[0,1]} |x - y|^{\alpha} \, \mu(dx) \right) \, \nu(dy) = \int \langle \mu, \delta_y \rangle \, \nu(dy) = m. $$
So it follows that
$$ \forall \nu \in \mathcal{P} : \qquad I(\nu) = I(\mu) + 2\underbrace{\langle \mu, \nu - \mu \rangle}_{=m-m = 0} + \underbrace{I(\nu - \mu)}_{\leq 0} \leq I(\mu) $$
as desired. ////
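For a concrete comparison (an aside, not needed for the proof): for $\alpha = \frac{1}{2}$ the uniform distribution gives
$$ I(\operatorname{Leb}) = \int_{0}^{1}\int_{0}^{1} |x-y|^{1/2} \, dx \, dy = \frac{2}{(\frac{1}{2}+1)(\frac{1}{2}+2)} = \frac{8}{15} \approx 0.533, $$
which is indeed strictly smaller than $I(\mu_{1/2}) \approx 0.599$.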
This isn't a full solution, but it's too long for a comment.
For fixed $0<\alpha<1$ we can get an approximate solution by considering the problem discretized to distributions that only take values of the form $\frac{k}{n}$, $0 \le k \le n$, for some reasonably large $n$. Then the problem becomes equivalent to $$\max_x x^T A x$$ where $A$ is the $(n+1) \times (n+1)$ matrix whose $(i,j)$ entry is $\left(\frac{|i-j|}{n}\right)^{\alpha}$, and the maximum is taken over all non-negative vectors summing to $1$.
If we further assume that there is a maximizer $x$ all of whose entries are non-zero, the method of Lagrange multipliers implies that such an $x$ is a solution to $$Ax=\lambda \mathbb{1}_{n+1}$$ (where $\mathbb{1}_{n+1}$ is the all-ones vector), so we can just take $A^{-1} \mathbb{1}_{n+1}$ and rescale it to sum to $1$.
For $n=1000$ and $\alpha=\frac{1}{2}$, this gives a maximum of approximately $0.5990$, with a vector whose first few entries are $(0.07382, 0.02756, 0.01603, 0.01143)$.
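For what it's worth, here is a sketch (Python/numpy, under the same interior-optimum assumption) of the computation just described:

```python
# Discretized problem: A[i, j] = (|i - j| / n)^alpha; solve A x = 1,
# rescale x to sum to 1, and evaluate the quadratic form x^T A x.
import numpy as np

n, alpha = 1000, 0.5
i = np.arange(n + 1)
A = (np.abs(i[:, None] - i[None, :]) / n) ** alpha

x = np.linalg.solve(A, np.ones(n + 1))
x /= x.sum()

print(x @ A @ x)   # ~ 0.5990
print(x[:4])       # compare with the first few entries quoted above
```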
If the optimal distribution has a density $f(x)$ that's positive everywhere, and we want to maximize $\int_0^1 \int_0^1 f(x) f(y) |x-y|^{\alpha} \, dx \, dy$, the density "should" (by analogy with the above, which can probably be made rigorous) satisfy $$\int_{0}^1 f(y) |x-y|^{\alpha} \, dy= \textrm{ constant independent of } x,$$ but I'm not familiar enough with integral transforms to know whether there's a standard way of inverting this.