Algebraic independence of shifts of the Riemann zeta function
Hmm, it was more difficult than I expected to leverage universality to establish the claim. But one can proceed by probabilistic reasoning instead, basically exploiting the phase transition in the limiting distribution of the zeta function at the critical line. The proof I found used an unexpectedly high amount of firepower; perhaps there is a more elementary argument.
Assume for contradiction that there is a non-trivial polynomial relation $$ P( \zeta(s+z_1), \dots, \zeta(s+z_n) ) = 0$$ for all $s$ (excluding poles if desired) and some distinct $z_1,\dots,z_n$ (it is slightly more convenient to reverse the sign conventions from the original formulation). We can assume $n$ to be minimal amongst all such relations. By translating we may normalize so that $z_1,\dots,z_m$ lie on the critical line $\{ \mathrm{Re}(s) = 1/2\}$ for some $1 \leq m \leq n$ and $z_{m+1},\dots,z_n$ lie in the half-plane $\{ \mathrm{Re}(s) > 1/2 \}$ to the right of the line.
Let $T$ be a large number, let $t$ be drawn uniformly at random from $[0,T]$, and define the random variables $Z_1,\dots,Z_n$ by $Z_j := \zeta(z_j+it)$. Then we have the identity $$ P( Z_1,\dots,Z_n)=0$$ with probability $1$.
Now we use the following form of Selberg's central limit theorem: the random variables $$ \left(\frac{\log |Z_1|}{\sqrt{\frac{1}{2}\log\log T}}, \dots, \frac{\log |Z_m|}{\sqrt{\frac{1}{2}\log\log T}}\right)$$ and $$ (Z_{m+1},\dots,Z_n)$$ jointly converge to a limiting distribution as $T \to \infty$, with the limiting distribution of the first tuple a standard Gaussian that is independent of the limiting distribution of the second tuple (which will be some moderately complicated but explicit law). (The usual form of Selberg's theorem covers the case $m=n=1$, but the same machinery gives the general case; see, e.g., Laurincikas' book. The intuition here is that the first tuple is largely controlled by the random variables $p^{it}$ for medium-sized primes $1 \lll p \ll T^\varepsilon$, while the second tuple is largely controlled by the random variables $p^{it}$ for small primes $p=O(1)$. The proof of this central limit theorem is unfortunately a bit complicated; the simplest proof I know of is by Radziwill and Soundararajan.)
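As a very rough numerical sanity check of the normalization (no part of the argument), one can sample $t$ and look at the empirical mean and variance of $\log|\zeta(\tfrac12+it)|/\sqrt{\tfrac12\log\log T}$. Here is a sketch assuming mpmath is available; any feasible $T$ is of course far too small for the doubly logarithmic normalization to be accurate, so expect only ballpark agreement.

```python
import math
import random
from mpmath import mp, mpc, zeta

mp.dps = 15
T = 10.0**5
N = 100  # number of samples; convergence in T is extremely slow, so this is crude

norm = math.sqrt(0.5 * math.log(math.log(T)))
samples = []
for _ in range(N):
    t = random.uniform(0.0, T)
    samples.append(float(mp.log(abs(zeta(mpc(0.5, t))))) / norm)

mean = sum(samples) / N
var = sum((x - mean) ** 2 for x in samples) / N
print(f"mean {mean:.2f}, variance {var:.2f}")  # should be in the vicinity of 0 and 1
```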
Now expand $P$ as $\sum_{a_1,\dots,a_m} Z_1^{a_1} \dots Z_m^{a_m} Q_{a_1,\dots,a_m}(Z_{m+1},\dots,Z_n)$ for various polynomials $Q_{a_1,\dots,a_m}$, and extract out a leading term $Z_1^{a_1} \dots Z_m^{a_m} Q_{a_1,\dots,a_m}(Z_{m+1},\dots,Z_n)$ (using, say, lex ordering on $a_1,\dots,a_m$). The Selberg central limit theorem then shows that $Q_{a_1,\dots,a_m}(Z_{m+1},\dots,Z_n)$ must converge in distribution to zero as $T \to \infty$, as otherwise there would be an event of asymptotically positive probability on which this term dominates all the other terms put together. On the other hand, the random variable $Q_{a_1,\dots,a_m}(Z_{m+1},\dots,Z_n)$ is a Dirichlet series $\sum_k \frac{c_k}{k^{it}}$ with square-summable coefficients $c_k$ (indeed the coefficients decay like $O(k^{-\sigma+o(1)})$ for some $\sigma>1/2$ by the divisor bound), so by the $L^2$ mean value theorem for such series the variance of this series is asymptotic to $\sum_k |c_k|^2$ (and one can also check that the fourth moment is bounded, again by the divisor bound). By the Paley-Zygmund inequality we must then have $\sum_k |c_k|^2=0$, and thus by analytic continuation we obtain a non-trivial polynomial relation $Q_{a_1,\dots,a_m}(\zeta(s+z_{m+1}),\dots,\zeta(s+z_n))=0$ with fewer variables than the original relation, contradicting the minimality of $n$.
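The $L^2$ mean value step can be illustrated numerically. The following numpy sketch uses synthetic decaying coefficients (not the actual coefficients of any $Q_{a_1,\dots,a_m}$) and checks that the time average of $|\sum_k c_k k^{-it}|^2$ over $t \in [0,T]$ matches $\sum_k |c_k|^2$, the cross terms averaging out:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 50
c = rng.normal(size=K) / np.arange(1, K + 1)  # decaying (square-summable) coefficients

T = 1e5
t = rng.uniform(0, T, size=20000)  # Monte Carlo approximation to the time average
k = np.arange(1, K + 1)
vals = (c * np.exp(-1j * np.outer(t, np.log(k)))).sum(axis=1)  # sum_k c_k k^{-it}

print("time average of |D(t)|^2:", np.mean(np.abs(vals) ** 2))
print("sum_k |c_k|^2:           ", np.sum(c ** 2))
```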
Alternatively, one can argue via Euler products (reverting now to the original sign convention); note that a polynomial relation among shifts of $\zeta$ is precisely a linear relation among monomials in those shifts. $\zeta(s - z)$ has an Euler product $\prod_p \frac{1}{1 - p^{z-s}}$ (valid for $\mathrm{Re}(s - z) > 1$), and so a monomial $\prod_i \zeta(s - z_i)$ (with the $z_i$ not necessarily distinct) has an Euler product
$$\prod_i \zeta(s - z_i) = \prod_p \prod_i \frac{1}{1 - p^{z_i - s}}.$$
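As a quick numerical confirmation of this identity (assuming mpmath and sympy are available; the shifts $z_1, z_2$ are arbitrary sample values), one can compare a monomial against its truncated Euler product in the region of absolute convergence:

```python
from mpmath import mp, mpc, zeta
from sympy import primerange

mp.dps = 15
s = mpc(4.0, 1.0)                       # well inside the region of absolute convergence
z1, z2 = mpc(0.5, 2.0), mpc(1.0, -1.0)  # arbitrary sample shifts

lhs = zeta(s - z1) * zeta(s - z2)
rhs = mp.mpf(1)
for p in primerange(2, 10**4):          # truncated Euler product
    rhs *= 1 / ((1 - mp.power(p, z1 - s)) * (1 - mp.power(p, z2 - s)))

print(lhs)
print(rhs)  # agrees to many digits; the tail contributes O(sum_{p > 10^4} p^{-3})
```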
We want to show that these monomials are linearly independent. Now here's an observation: it's quite hard for Dirichlet series with Euler products to be linearly dependent. This is because any linear dependence must, by examining only the coefficients of $\frac{1}{p^{ks}}$ for each prime separately, be a linear dependence for every Euler factor separately, but also must be a linear dependence for all of the Euler factors multiplied together, and even for any subset of the Euler factors multiplied together.
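Concretely, the coefficients in question behave multiplicatively: the $n$-th Dirichlet coefficient of, say, $\zeta(s-z_1)\zeta(s-z_2)$ is the Dirichlet convolution $\sum_{d \mid n} d^{z_1} (n/d)^{z_2}$. A quick check at a coprime pair, assuming sympy and with sample shift values:

```python
from math import gcd
from sympy import divisors

z1, z2 = 0.5 + 2j, 1.0 - 1j  # sample shifts

def coeff(n):
    # n-th Dirichlet coefficient of zeta(s - z1) * zeta(s - z2)
    return sum(d ** z1 * (n // d) ** z2 for d in divisors(n))

m, n = 8, 15
assert gcd(m, n) == 1
print(abs(coeff(m * n) - coeff(m) * coeff(n)))  # ~ 0 up to rounding error
```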
In fact we can prove the following, passing from Dirichlet series to coefficients. If $S$ is a set of primes, write $\mathbb{N}_S$ for the set of positive integers divisible only by the primes in $S$, and write $\mathbb{N}_{-S}$ for the set of positive integers divisible only by the primes not in $S$.
Lemma: Let $f_0, \dots, f_k : \mathbb{N} \to \mathbb{C}$ be multiplicative arithmetic functions which are
- essentially nonzero in the sense that for any finite set of primes $S$, $f_i(n) \neq 0$ for some $n \in \mathbb{N}_{-S}$, and
- essentially distinct in the sense that for any finite set of primes $S$, if $f_i(n) = f_j(n)$ for all $n \in \mathbb{N}_{-S}$ then $i = j$.
Then the functions $f_i$ are essentially linearly independent in the sense that for any finite set of primes $S$ they are linearly independent over $\mathbb{C}$ when restricted to $\mathbb{N}_{-S}$.
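Before the proof, here is a toy finite check of the conclusion (assuming sympy): restrict a few standard multiplicative functions to an initial segment of $\mathbb{N}_{-S}$ and verify that the matrix of their values has full column rank, i.e., no linear dependence there. Full rank on an initial segment is of course only consistent with the lemma, not a proof of it.

```python
from sympy import Matrix, divisor_count, divisor_sigma, mobius

S = [2, 3]  # primes to ignore
fs = [divisor_count, divisor_sigma, lambda n: mobius(n) ** 2]  # all multiplicative

# initial segment of N_{-S}: integers up to 500 divisible by no prime in S
support = [n for n in range(1, 501) if all(n % p != 0 for p in S)]
M = Matrix([[int(f(n)) for f in fs] for n in support])
print(M.rank(), "of", len(fs))  # full rank: no linear dependence on this range
```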
Proof. This ends up being a slight variant of the standard proof of linear independence of characters (which would apply directly if "multiplicative" were replaced by "completely multiplicative"). We induct on $k$. When $k = 0$ the result follows from the assumption that the $f_i$ are essentially nonzero. For general $k$, let $S$ be a finite set of primes and suppose for contradiction that we have a nontrivial linear dependence, which WLOG we take to be of the form
$$f_0(n) = \sum_{i=1}^k c_i f_i(n), \qquad n \in \mathbb{N}_{-S}.$$
Since $f_0$ is essentially nonzero this requires that at least one of the $c_i$ also be nonzero. Now, if $m, n \in \mathbb{N}_{-S}$ are positive integers such that $\gcd(m, n) = 1$, then on the one hand
$$f_0(mn) = \sum_{i=1}^k c_i f_i(mn) = \sum_{i=1}^k c_i f_i(m) f_i(n)$$
and on the other hand
$$f_0(mn) = f_0(m) f_0(n) = f_0(m) \sum_{i=1}^k c_i f_i(n) = \sum_{i=1}^k c_i f_0(m) f_i(n).$$
Subtracting gives
$$\sum_{i=1}^k c_i (f_0(m) - f_i(m)) f_i(n) = 0.$$
Now let $T$ be any finite set of primes, let $m$ be any element of $\mathbb{N}_T \cap \mathbb{N}_{-S}$ (i.e., divisible only by primes in $T$ and by none in $S$), and let $n$ range over $\mathbb{N}_{-(S \cup T)}$; such $m$ and $n$ are automatically coprime. By the inductive hypothesis applied to $f_1, \dots, f_k$ on $\mathbb{N}_{-(S \cup T)}$, the above is, for each such $m$, a linear dependence of the $f_i$ which must be trivial; hence the coefficients $c_i (f_0(m) - f_i(m))$ vanish for all $m \in \mathbb{N}_T \cap \mathbb{N}_{-S}$. (This bit of the argument is why we need the freedom to ignore finitely many primes.)
Since one of the $c_i$, say $c_j$, is nonzero, it follows that $f_0(m) = f_j(m)$ for all $m \in \mathbb{N}_T \cap \mathbb{N}_{-S}$; since this is true independently of the choice of $T$, we in fact have $f_0(m) = f_j(m)$ for all $m \in \mathbb{N}_{-S}$, which contradicts essential distinctness. $\Box$
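To see why the "essential" qualifiers (and the freedom to ignore finitely many primes) are the right hypotheses, here is a toy example assuming sympy: a multiplicative function that differs from the divisor function $d(n)$ only at powers of $2$ is globally distinct from it, yet agrees with it on all of $\mathbb{N}_{-\{2\}}$, giving a nontrivial linear dependence there.

```python
from sympy import divisor_count, factorint

def f(n):
    # multiplicative: f(2^k) = 1 instead of k + 1; at odd prime powers, same as d(n)
    out = 1
    for p, k in factorint(n).items():
        out *= 1 if p == 2 else k + 1
    return out

odd_support = [n for n in range(1, 500) if n % 2 == 1]  # initial segment of N_{-{2}}
print(all(f(n) == divisor_count(n) for n in odd_support))  # True: f and d agree here
```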
Now it suffices to check that the monomials $\prod_i \zeta(s - z_i)$ are essentially nonzero and essentially distinct. Essential distinctness is a bit less straightforward than I thought, since deleting finitely many factors from the Euler product of $\zeta(s - z_i)$ produces zeroes on the vertical line through $s = z_i$ (the factor $1 - p^{z_i - s}$ vanishes at $s = z_i + 2\pi i k/\log p$), which may cancel some of the poles from other factors. But this doesn't affect the order of the pole at $s = z_i + 1$, which lies strictly further to the right, so we can still consider the rightmost $z_i$'s and the corresponding poles. We get that if two monomials are essentially equal then the rightmost $z_i$'s which occur in each must match (with matching multiplicities), so we can factor these out and inductively conclude that all of the $z_i$ must match.
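The pole/zero bookkeeping can also be checked numerically (assuming mpmath; the shift $z$ is a sample value): dropping the Euler factor at $p = 2$ introduces a zero at $s = z$ but leaves the pole at $s = z + 1$ intact.

```python
from mpmath import mp, mpc, zeta, power

mp.dps = 15
z = mpc(0.3, 1.0)  # sample shift

def zeta_shift_without_2(s):
    # zeta(s - z) with the Euler factor at p = 2 removed
    return (1 - power(2, z - s)) * zeta(s - z)

for eps in [1e-2, 1e-4]:
    print(abs(zeta_shift_without_2(z + eps)),      # -> 0: a new zero at s = z
          abs(zeta_shift_without_2(z + 1 + eps)))  # -> infinity: the pole survives
```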
We should also get algebraic independence for a broader class of Dirichlet series (anything for which it's clear that we can still show essential distinctness), e.g. shifts of Dirichlet L-functions.