Is the Euler product formula always divergent for 0<Re(s)<1?

Let $$t_P = \sum_{p < P} \log \left| \frac{1}{1-p^{-s}} \right|$$ with $s=\sigma+it$, $\sigma \in (0,1)$ and $t$ a nonzero real. The point of this answer is to show that the partial sums $t_P$ jump around a great deal. Specifically, for any $M$ and $N$, there are $P$ and $Q$ with $N < P < Q$ such that $t_Q - t_P > M$, and other $P'$ and $Q'$ with $N < P' < Q'$ such that $t_{Q'} - t_{P'} < -M$.

Thus $t_P$ cannot approach any finite limit. It could still approach $\pm \infty$; think of $\sum (-1)^n (3+(-1)^n)^n$, whose partial sums have arbitrarily large increases and decreases but still climb to $\infty$. However, this result still means you should be very suspicious of any numerical data which seems to indicate that $t_P$ has a definite trend: there is always enough future oscillation remaining to wipe out any gains you have made towards $\pm \infty$.
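For anyone who wants to look at such numerical data anyway, here is a minimal sketch of how to tabulate the partial sums $t_P$ (the value $s = 0.6 + 14i$ and the cutoffs are arbitrary illustrative choices, and sympy is used only to generate primes); per the remarks above, no apparent trend in such a table should be trusted.

```python
# Minimal sketch (illustration only): tabulate
#   t_P = sum_{p < P} log|1/(1 - p^{-s})|
# for a sample s = sigma + i*t with 0 < sigma < 1.
from math import log
from sympy import primerange  # prime generation

sigma, t = 0.6, 14.0          # arbitrary: any sigma in (0,1), nonzero t
s = complex(sigma, t)

for P in (10**3, 10**4, 10**5, 10**6):
    t_P = sum(-log(abs(1 - p ** (-s))) for p in primerange(2, P))
    print(f"P = {P:>8}   t_P = {t_P:+.4f}")
```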

Obviously, this implies the analogous statements about $\prod \left| \frac{1}{1-p^{-s}} \right|$: It cannot approach a finite limit, and you should not trust numerical evidence that it is going to $0$ or $\infty$. And, of course, life is only more complicated if you keep track of the argument of the Euler product as well as its magnitude.


So, a proof. We will treat $\sigma$ and $t$ as completely fixed, so constants in $O$'s can depend on them.

Choose a small positive real $\delta$. This will be a once-and-for-all choice, but I will record the dependence on it explicitly, because I need to see that I can take a small enough choice to make everything work.

Let $(P,Q)$ be of the form $$(e^{(2 \pi k-\delta)/t}, e^{(2 \pi k+\delta)/t})$$ for some positive integer $k$. By choosing $k$ large, we can arrange that $P$ and $Q$ are larger than any required $N$.

For any prime $p$ in this range, $$|1-p^{-s}| = |1-p^{-\sigma} e^{i \theta}|$$ for some $\theta \in (2 \pi k - \delta, 2 \pi k + \delta)$. Since $\cos \theta = 1 + O(\delta^2)$, this is $$1-p^{-\sigma}(1 + O(\delta^2))$$ and $$ \log \left| \frac{1}{1-p^{-s}} \right| = p^{-\sigma} (1+O(\delta^2))(1+O(p^{-\sigma})).$$ If $(P,Q)$ is large enough, the first error term dominates and $$t_Q - t_P \geq \sum_{e^{(2 \pi k - \delta)/t} < p < e^{(2 \pi k + \delta)/t}} p^{-\sigma}(1+O(\delta^2)) = \# \{p: e^{(2 \pi k - \delta)/t} < p < e^{(2 \pi k + \delta)/t} \} \, e^{-2 \pi k \sigma/t} (1+O(\delta)).$$ (The error term has changed because the new dominant source of error is approximating $e^{\pm \delta \sigma/t}$ by $1+O(\delta)$.)
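To spell out where these per-prime estimates come from (a routine expansion, using only $|p^{-s}| = p^{-\sigma} < 1$ and included just for completeness): $$\log \left| \frac{1}{1-p^{-s}} \right| = \mathrm{Re} \sum_{m \geq 1} \frac{p^{-ms}}{m} = \frac{\cos(t \log p)}{p^{\sigma}} + O(p^{-2\sigma}),$$ and for $p$ in the window above $t \log p$ lies within $\delta$ of $2\pi k$, so $\cos(t \log p) = 1 + O(\delta^2)$, which gives the displayed estimate.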

By the prime number theorem, the number of primes in this range is $$\left( e^{(2 \pi k + \delta)/t} - e^{(2 \pi k - \delta)/t} \right) \frac{1}{2 \pi k/t} (1 + O(1/k)) = \frac{2 \delta e^{2 \pi k/t}}{2 \pi k} (1+O(\delta)+O(1/k)).$$
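As a sanity check on this count (illustration only; the particular $t$, $k$, $\delta$ below are arbitrary), one can compare the exact number of primes in such a window with the prediction just stated:

```python
# Sketch: count primes in (e^{(2*pi*k - d)/t}, e^{(2*pi*k + d)/t}) and compare
# with the prime number theorem prediction 2*d*e^{2*pi*k/t} / (2*pi*k).
# The values of t, k, d are arbitrary illustrative choices.
from math import exp, pi
from sympy import primepi  # exact prime-counting function

t, k, d = 14.0, 30, 0.1
P = exp((2 * pi * k - d) / t)
Q = exp((2 * pi * k + d) / t)

exact = primepi(int(Q)) - primepi(int(P))        # primes in (P, Q]
pnt = 2 * d * exp(2 * pi * k / t) / (2 * pi * k)
print(f"window ({P:.0f}, {Q:.0f}): {exact} primes, PNT estimate {pnt:.1f}")
```

The two numbers should agree up to the $O(\delta) + O(1/k)$ relative error above.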

In short, we have bounded $t_Q - t_P$ below by $$\frac{\delta e^{2 \pi k(1-\sigma)/t}}{\pi k}(1+O(\delta) + O(1/k)).$$ Assuming our initial choice of $\delta$ was small enough, and using $\sigma<1$, this goes to $\infty$ as $k \to \infty$.

Now, repeat the argument with $(P,Q) = (e^{((2k+1)\pi -\delta)/t}, e^{((2k+1)\pi +\delta)/t})$ to show that $t_Q - t_P$ can be arbitrarily negative as well: for primes in this window $\cos(t \log p) = -1 + O(\delta^2)$, so $|1-p^{-s}| = 1 + p^{-\sigma}(1+O(\delta^2))$ and each term $\log \left| \frac{1}{1-p^{-s}} \right|$ is negative, of size roughly $p^{-\sigma}$.
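Here is a small sketch of both kinds of window (parameters again arbitrary); the increment over a window around $2\pi k$ comes out positive and the one around $(2k+1)\pi$ negative, since every term has the corresponding sign:

```python
# Sketch: the increment t_Q - t_P is positive over a window where t*log(p)
# stays within delta of an even multiple of pi, and negative over a window
# around an odd multiple.  The values sigma, t, delta, k are arbitrary.
from math import exp, log, pi
from sympy import primerange

sigma, t, delta, k = 0.6, 14.0, 0.1, 30
s = complex(sigma, t)

def increment(theta0):
    """Sum log|1/(1 - p^{-s})| over primes with t*log(p) in (theta0 - delta, theta0 + delta)."""
    P, Q = exp((theta0 - delta) / t), exp((theta0 + delta) / t)
    return sum(-log(abs(1 - p ** (-s))) for p in primerange(int(P) + 1, int(Q) + 1))

print("window around 2*pi*k:      ", increment(2 * pi * k))        # positive
print("window around (2*k + 1)*pi:", increment((2 * k + 1) * pi))  # negative
```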


I don't have a gut instinct for whether $t_P$ goes to $- \infty$, goes to $\infty$, or oscillates indefinitely. However, it should be clear that the partial Euler products are very far from approximating the $\zeta$ function in this range.


Here is a quick argument that $\prod_p(1-p^{-s})^{-1}$ is divergent for $\frac{1}{2}<\mathrm{Re}(s)<1$. Assume it is convergent (meaning that the partial products tend to a nonzero limit); then $\sum_p -\log(1-p^{-s})$ is also convergent. Using $-\log(1-p^{-s})=p^{-s}+O(p^{-2\mathrm{Re}(s)})$ and $\mathrm{Re}(s)>\frac{1}{2}$, we see that $\sum_p p^{-s}$ is convergent. By a standard result on the half-plane of convergence of Dirichlet series (e.g. Montgomery–Vaughan: Multiplicative Number Theory I, Page 11, Theorem 1.1) this would imply that $\sum_p p^{-\sigma_0}$ is convergent for any real $\sigma_0$ with $\mathrm{Re}(s)<\sigma_0<1$, which is false because $p^{-\sigma_0}\geq p^{-1}$ and $\sum_p p^{-1}$ diverges.
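As a numerical illustration of the last step (not needed for the proof): for any fixed real $\sigma<1$ the partial sums of $\sum_p p^{-\sigma}$ visibly march off to $\infty$; by the prime number theorem they grow like $N^{1-\sigma}/((1-\sigma)\log N)$. The value $\sigma = 0.75$ and the cutoffs below are arbitrary, and sympy is used only to generate primes.

```python
# Illustration: for a fixed real sigma < 1 the series sum_p p^{-sigma}
# diverges; its partial sums over p <= N grow without bound.
from sympy import primerange

sigma = 0.75                      # arbitrary real exponent in (1/2, 1)
for N in (10**3, 10**4, 10**5, 10**6):
    partial = sum(p ** (-sigma) for p in primerange(2, N + 1))
    print(f"N = {N:>8}   partial sum = {partial:.2f}")
```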

EDIT: This is a partial response to David Speyer's comment/question whether $\prod_p(1-p^{-s})^{-1}$ diverges to zero, diverges to $\infty$, or oscillates. On the real axis the product clearly diverges to zero by Mertens' theorem, so David really asks about the behavior of $$ \mathrm{Re}\sum_{p\leq N}\frac{1}{p^{\sigma+it}} = \sum_{p\leq N}\frac{\cos(t\log p)}{p^\sigma} $$ for $\frac{1}{2}<\sigma<1$ and $t\neq 0$: whether it diverges to $\pm\infty$ or oscillates as $N\to\infty$. Let us assume the Riemann Hypothesis; then $$ \psi(x):=\sum_{n\leq x}\Lambda(n)=x+O(x^{1/2}\log^2 x). $$ Up to an $O(1)$ error, the sum in question equals $$ \sum_{n\leq N}\frac{\cos(t\log n)}{n^\sigma\log n}\Lambda(n)=\int_{2-}^N\frac{\cos(t\log x)}{x^\sigma\log x}\,d\psi(x).$$ Using all the hypotheses, it follows by two integrations by parts (in between we approximate $\psi(x)$ by $x$) that $$ \sum_{p\leq N}\frac{\cos(t\log p)}{p^\sigma} = \int_2^N\frac{\cos(t\log x)}{x^\sigma\log x}\,dx + O(1). $$ The right-hand side is purely analytic; it makes no reference to primes. It should be straightforward to prove that the $\limsup$ and $\liminf$ of the right-hand side are $+\infty$ and $-\infty$, respectively, which would show that the Euler product oscillates: the absolute value of the partial products gets arbitrarily close to both $0$ and $\infty$.
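To make the final claim plausible (this is only a heuristic, not the proof alluded to above), substitute $x=e^u$ in the integral on the right-hand side: $$ \int_2^N\frac{\cos(t\log x)}{x^\sigma\log x}\,dx = \int_{\log 2}^{\log N}\frac{e^{(1-\sigma)u}\cos(tu)}{u}\,du. $$ For $\sigma<1$ the envelope $e^{(1-\sigma)u}/u$ grows without bound, so successive half-periods of $\cos(tu)$ contribute to the integral with alternating signs and ever larger absolute values, which is why one expects the $\limsup$ and $\liminf$ to be $+\infty$ and $-\infty$.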


The simplest argument is that the domain of convergence of a generalized Dirichlet series is always a half-plane. Since there is a pole at $s=1$, it can only converge for $\mathrm{Re}(s) >1$.

But the question is still very interesting for two reasons. If you consider instead Dirichlet L-functions based on non-principal characters, then there is no pole at $s=1$ and the above argument does not apply. In recent work with Franca, we argued that the Euler product in fact converges for $\mathrm{Re}(s)> 1/2$.
The other reason is that for the Riemann zeta function and other L-functions of principal characters, a truncated Euler product can still be made sense of for $\mathrm{Re}(s) > 1/2$. Furthermore, the Euler product formula is valid in the limit $\mathrm{Im}(s) \to \infty$ without truncation, in a manner that can be made precise. This work is on math.NT.