Expected supremum of average?
I'll extend my comment about the trivial bounded case to the more interesting unbounded case. Let $Y_n = \sup_{1 \leq k \leq n}{\frac{1}{k} \sum_{i=1}^{k}{X_i} }$.
Let's only assume that the $X_i$ are independent, centered, and have finite variance $\sigma^2$ (the centering is what Kolmogorov's inequality requires). Using a peeling argument and Kolmogorov's inequality:
For a given $n$, let $M$ be the smallest integer such that $n \leq 2^M$. We have $Y_n \leq Y_{2^M}$, so, applying Kolmogorov's inequality to each block of indices:
\begin{align*}
\mathbb{P}(Y_n \geq \varepsilon) & \leq \mathbb{P}(Y_{2^M} \geq \varepsilon) \\
& \leq \sum_{m=0}^{M-1}{\mathbb{P}\left( \sup_{2^m \leq k \leq 2^{m+1}}{\frac{1}{k} \sum_{i=1}^{k}{X_i} } \geq \varepsilon \right)} \\
& \leq \sum_{m=0}^{M-1}{\mathbb{P}\left( \sup_{1 \leq k \leq 2^{m+1}}{ \sum_{i=1}^{k}{X_i} } \geq 2^m \varepsilon \right)} \\
& \leq \sum_{m=0}^{M-1}{\frac{2^{m+1}}{(2^m)^2} \frac{\sigma^2}{\varepsilon^2}} \\
& \leq \frac{2\sigma^2}{\varepsilon^2} \sum_{m=0}^{M-1}{\frac{1}{2^m}} \\
& \leq 4 \frac{\sigma^2}{\varepsilon^2}
\end{align*}

Going back to the expected value:
\begin{align*}
\mathbb{E}\left[ Y_n \right] & = \mathbb{E}\left[ Y_n {1}_{\left\lbrace Y_n \leq 0 \right\rbrace} \right] + \mathbb{E}\left[ Y_n {1}_{\left\lbrace Y_n > 0 \right\rbrace} \right] \\
& \leq \mathbb{E}\left[ Y_n {1}_{\left\lbrace Y_n > 0 \right\rbrace} \right]
\end{align*}

$Y_n {1}_{\left\lbrace Y_n > 0 \right\rbrace}$ is a non-negative random variable, so writing $F_{n}$ for its c.d.f., we have, for any positive $a$:
\begin{align*}
\mathbb{E}\left[ Y_n {1}_{\left\lbrace Y_n > 0 \right\rbrace} \right] & = \int_{0}^{+\infty}{(1-F_{n}(t))\, dt} \\
& = \int_{0}^{+\infty}{\mathbb{P}(Y_n {1}_{\left\lbrace Y_n > 0 \right\rbrace}>t)\, dt} \\
& = \int_{0}^{+\infty}{\mathbb{P}(Y_n >t)\, dt} \\
& = \int_{0}^{a}{\mathbb{P}(Y_n > t)\, dt} + \int_{a}^{+\infty}{\mathbb{P}(Y_n > t)\, dt} \\
& \leq \int_{0}^{a}{1\, dt} + \int_{a}^{+\infty}{ \frac{4 \sigma^2}{t^2}\, dt} \\
& \leq a + 4 \frac{\sigma^2}{a}
\end{align*}

Taking $a = 2 \sigma$ to minimize the bound:
$$\mathbb{E}\left[ Y_n \right] \leq 4 \sigma$$

If I didn't make too many calculation errors, the result is unexpectedly nice!
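For what it's worth, here is a quick Monte Carlo sanity check of this bound. It is only a sketch under an illustrative assumption (standard normal $X_i$, so $\sigma = 1$), not part of the argument:

```python
import numpy as np

# Monte Carlo sanity check of E[Y_n] <= 4*sigma, assuming i.i.d. standard
# normal steps (sigma = 1); the distribution is an illustrative choice only.
rng = np.random.default_rng(0)
n, reps, sigma = 1000, 10_000, 1.0

X = rng.normal(0.0, sigma, size=(reps, n))
running_means = np.cumsum(X, axis=1) / np.arange(1, n + 1)   # (1/k) S_k, k = 1..n
Y = running_means.max(axis=1)                                # one sample of Y_n per row

print(f"estimated E[Y_n] ~ {Y.mean():.3f}   bound 4*sigma = {4 * sigma:.1f}")
```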
So while I was only trying to prove that $\mathbb{E}\left[ Y_n \right]$ is still bounded when $X$ isn't, without any hope of obtaining anything explicit, it turns out we can actually obtain a very nice explicit bound! Therefore, we should be able to do even better in the bounded case. I guess that in this case one doesn't absolutely need a peeling argument, and a union bound together with Hoeffding's inequality is enough, but one should still obtain better results by using a peeling argument again (with different intervals?) and the Azuma-Hoeffding inequality instead of Kolmogorov's.
You're asking about maximal inequalities. These are known in more generality for measure-preserving transformations. As has already been pointed out, in your special case you can expect a constant bound: the averages approach their limit very quickly, so you're essentially looking at the expectation of the maximum of the first few averages together with the limiting value.
In general, for measure-preserving transformations, if the $X_k$ are merely $L^1$ (even in the i.i.d. case), the expectation of the supremum can diverge logarithmically. If the $X_k$ have higher moments, say $X_k \in L^p$ for some $p>1$, then you get the bound $\|M((X_k))\|_p \le C_p \|X_1\|_p$, where $C_p$ is a constant with a $p-1$ in the denominator.
Here is an "arbirarily nice" example with closed form results.
Let $X_1,X_2,\ldots$ be i.i.d. real random variables with partial sums $S_k:=\sum_{i=1}^kX_i$ and let $M_n:=\sup_{k\leq n} \frac{S_k}{k}$, $R_n:=\inf_{k\leq n} \frac{S_k}{k}= - \sup_{k\leq n} \frac{-S_k}{k}$, and let $M_\infty,R_\infty$ be the all-time supremum/infimum (possibly $\infty/-\infty$).
(1) The distribution of $M_n$ resp. $R_n$ can be obtained as follows.
For $a\in \mathbb{R}$ let
$$T_a:=\inf\{k\geq 1\;:\;S_k-ka>0\} \mbox{ and } U_a:=\inf\{k\geq 1\;:\;S_k-ka\leq 0\}$$
be the first strictly ascending resp. weakly descending ladder epochs of the random walk generated by the steps $X_i-a$. Then clearly
$$\{M_n > a\}=\{ T_a \leq n\}\;\mbox{ and }\; \{R_n > a\}=\{ U_a > n\}\;.$$
The fluctuation theory of random walks (see e.g. chapter XII in Feller II (1971)) gives for the generating functions
$g_a(z):= \mathbb{E}(z^{T_a}),\, h_a(z):=\mathbb{E}(z^{U_a}) $ that
$$\log\left(\frac{1}{1-g_a(z)}\right)=\sum_{n=1}^\infty \frac{z^n}{n} \mathbb{P}(S_n>na)\;\;\mbox{ (Sparre Andersen's theorem, Thm 1 in XII.7)}$$
$$\mbox{ and } (1-g_a(z))(1-h_a(z))=1-z\;\;\mbox{ (duality, Thm 4 in XII.7).}$$
Thus $\mathbb{P}(M_n>a)$ resp. $\mathbb{P}(R_n>a)$ can (in principle) be computed from the probabilities $\mathbb{P}(S_k>ka),\ k=1,\ldots,n$, alone.
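To illustrate that this really is an effective recipe, here is a small sketch of the computation (the standard normal step distribution is my own illustrative choice, not part of the answer): it recovers $g_a(z)$ from the probabilities $\mathbb{P}(S_k > ka)$ via Sparre Andersen's theorem and a truncated power-series exponential, then compares $\mathbb{P}(M_n > a)$ with a Monte Carlo estimate.

```python
import numpy as np
from scipy.stats import norm

# Sketch of the "in principle" computation: P(M_n > a) from P(S_k > k*a),
# k = 1..n, via Sparre Andersen's theorem, for i.i.d. standard normal steps
# (an illustrative choice of step distribution).
n, a = 20, 0.3

# A(z) := sum_{k>=1} z^k/k * P(S_k > k*a), truncated at order n; here S_k ~ N(0, k).
A = np.zeros(n + 1)
for k in range(1, n + 1):
    A[k] = norm.sf(a * np.sqrt(k)) / k

# Sparre Andersen: 1 - g_a(z) = exp(-A(z)).  Expand exp(-A(z)) with the usual
# recurrence for the exponential of a power series: c_0 = 1 and
# c_m = (1/m) * sum_{k=1}^{m} k * (-A_k) * c_{m-k}.
c = np.zeros(n + 1)
c[0] = 1.0
for m in range(1, n + 1):
    c[m] = sum(k * (-A[k]) * c[m - k] for k in range(1, m + 1)) / m

# P(M_n > a) = P(T_a <= n) = sum of coefficients 1..n of g_a = -(c_1 + ... + c_n).
p_series = -c[1:].sum()

# Monte Carlo comparison.
rng = np.random.default_rng(1)
S = np.cumsum(rng.normal(size=(200_000, n)), axis=1)
p_mc = (S / np.arange(1, n + 1) > a).any(axis=1).mean()
print(f"series: {p_series:.4f}   Monte Carlo: {p_mc:.4f}")
```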
(2) Now let the $X_i$ be $\exp(1)$-distributed. Then clearly $M_n, R_n\geq 0$, so only $a\geq 0$ needs to be considered. Here it is known that for $a\geq 0$ $$\mathbb{P}(T_a=n) = \frac{n^{n-1}}{n!} a^{n-1}e^{-an} \;\;\;\mbox{ for } n\geq 1, \mbox{ resp. that }$$ $$g_a(z)=z\,e^{-a+T(zae^{-a})} \;\;\;\;(*)$$ where $T(z):=\sum_{n=1}^\infty \frac{n^{n-1}}{n!} z^n$ denotes the "tree function". $(*)$ can e.g. be proved using Sparre Andersen's theorem and Lagrange inversion (the series defining $T$ converges for $|z|\leq e^{-1}$ and represents the inverse of $z\mapsto z e^{-z}$ there). Thus $$\mathbb{P}( M_n>a) = \sum_{k=1}^n \frac{k^{k-1}}{k!} a^{k-1}e^{-ak}\;\;\mbox{ and }$$ $$\mathbb{E} (M_n) = \int_0^\infty \mathbb{P} (M_n>a)\,da=\sum_{k=1}^n \frac{1}{k^2}$$
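A quick Monte Carlo sanity check of $\mathbb{E}(M_n)=\sum_{k\le n} 1/k^2$ (just a numerical illustration, not part of the argument):

```python
import numpy as np

# Monte Carlo check of E[M_n] = sum_{k<=n} 1/k^2 for i.i.d. exp(1) steps.
rng = np.random.default_rng(2)
n, reps = 50, 100_000

X = rng.exponential(1.0, size=(reps, n))
M = (np.cumsum(X, axis=1) / np.arange(1, n + 1)).max(axis=1)

print(f"Monte Carlo: {M.mean():.4f}   sum of 1/k^2: {(1.0 / np.arange(1, n + 1) ** 2).sum():.4f}")
```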
Let $[z^n] F(z)$ denote the $n$-th coefficient of a (formal) power series $F(z)$. We have \begin{align*} \mathbb{P}(R_n >a) &=1-\mathbb{P}(U_a\leq n)\\ &=[z^n]\frac{1-h_a(z)}{1-z}\\ &=[z^n]\frac{1}{1-g_a(z)}\end{align*} where in the last step the duality relation was used.
Expanding $\frac{1}{1-g_a(z)}$ into a geometric series and using Lagrange inversion on the individual terms now gives $$\mathbb{P}( R_n>a) = \left(\sum_{k=0}^n \frac{(n-k)}{n}\frac{(na)^k}{k!}\right)e^{-na}\;.\;\mbox{ Thus }$$ $$\mathbb{E} (R_n) = \int_0^\infty \mathbb{P} (R_n>a)\,da=\frac{1}{n^2}\sum_{k=0}^n (n-k)=\frac{1}{2}+\frac{1}{2n}$$
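Again a quick numerical illustration (not part of the derivation), checking $\mathbb{E}(R_n) = \frac12 + \frac{1}{2n}$ by Monte Carlo:

```python
import numpy as np

# Monte Carlo check of E[R_n] = 1/2 + 1/(2n) for i.i.d. exp(1) steps.
rng = np.random.default_rng(3)
n, reps = 50, 100_000

X = rng.exponential(1.0, size=(reps, n))
R = (np.cumsum(X, axis=1) / np.arange(1, n + 1)).min(axis=1)

print(f"Monte Carlo: {R.mean():.4f}   1/2 + 1/(2n): {0.5 + 0.5 / n:.4f}")
```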
(3) Passing to the limit gives that $\mathbb{P}(M_\infty >a)=e^{-a+T(ae^{-a})}$ (note that $\mathbb{P}(M_\infty > a)=1 \mbox{ for } a\in[0,1]$), and $$\mathbb{E} (M_\infty) = \int_0^\infty e^{-a+T(ae^{-a})}\,da=\zeta(2)=\frac{\pi^2}{6}$$ and that $R_\infty $ is uniformly distributed on $[0,1]$:
we have that $$\mathbb{P}( R_n >a) =\mathbb{P}(\mathrm{Poiss}(na)\leq n) -a\,\mathbb{P}(\mathrm{Poiss}(na)\leq n-1)$$ By the strong law of large numbers for sums of i.i.d. $\mathrm{Poiss}(a)$-variables, both $\mathbb{P}(\mathrm{Poiss}(na)\leq n)$ and $\mathbb{P}(\mathrm{Poiss}(na)\leq n-1)$ tend to $0$ if $a>1$ and to $1$ if $0<a<1$. Further $\mathbb{P}( R_n >1)=\mathbb{P}(\mathrm{Poiss}(n)=n)$ tends to zero (e.g. by Stirling's formula). Thus for $a>0$ $$ \mathbb{P}(R_\infty>a)= 1-\min(a,1)\;\;,$$ that is, $R_\infty$ is uniformly distributed on $(0,1)$.
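The Poisson representation is also convenient numerically; here is a small check (illustrative only) of the identity and of the uniform limit, using `scipy.stats.poisson`:

```python
import numpy as np
from scipy.stats import poisson

# Check P(R_n > a) = P(Poiss(na) <= n) - a * P(Poiss(na) <= n-1) against a
# Monte Carlo estimate, and compare with the limiting value 1 - min(a, 1).
rng = np.random.default_rng(4)
a, n, reps = 0.7, 200, 50_000

formula = poisson.cdf(n, n * a) - a * poisson.cdf(n - 1, n * a)

X = rng.exponential(1.0, size=(reps, n))
R = (np.cumsum(X, axis=1) / np.arange(1, n + 1)).min(axis=1)

print(f"formula: {formula:.4f}   Monte Carlo: {(R > a).mean():.4f}   limit 1 - a = {1 - a:.4f}")
```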
Further $$\mathbb{E} (R_\infty) = \frac{1}{2}$$
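Finally, the integral for $\mathbb{E}(M_\infty)$ can be checked numerically using the standard identity $T(z) = -W_0(-z)$, where $W_0$ is the principal branch of the Lambert W function (an added ingredient for the numerics, not something used in the argument above):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import lambertw

# Numerical check of E[M_infty] = int_0^infty exp(-a + T(a e^{-a})) da = pi^2/6,
# evaluating the tree function via T(z) = -W_0(-z).
def tail(a):
    return np.exp(-a - lambertw(-a * np.exp(-a)).real)

value, _ = quad(tail, 0.0, np.inf, limit=200)
print(f"integral: {value:.6f}   pi^2/6: {np.pi ** 2 / 6:.6f}")
```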