The sum of squared logarithms conjecture
Is there anything wrong with the following argument?
First of all, by scaling all $x_i$ and all $y_i$ by a positive constant, we may safely assume that $\prod_i x_i = \prod_i y_i =1$.
The result would now follow from the following more general conjecture.
$\bf Conjecture:$ For ${\bf a}=(a_1,\ldots,a_{n-1})\in (\mathbb R_{>0})^{n-1}$ consider $h_{\bf a}(z) = z^n+a_{n-1}z^{n-1}+\cdots+a_1z+1$. Define the function $f:(\mathbb R_{>0})^{n-1}\to {\mathbb R}$ by $$ f(a_1,\ldots,a_{n-1}) = \sum_{z\vert h_{\bf a}(z)=0} (\log(-z))^2 $$ where the branch of $\log$ is picked as usual on the complement of ${\mathbb R}_{\leq 0}$ in $\mathbb C$. Then all partial derivatives $\frac{\partial f}{\partial a_k}$ are positive.
First, a few comments. The function $f$ is well-defined because none of the roots of $h_{\bf a}(z)$ are positive real numbers. In addition, $f$ is real-valued, because the roots of $h_{\bf a}(z)$ come in complex conjugate pairs, for which the values of $\log(-z)$ are complex conjugates. A consequence of this conjecture is that if ${\bf a}\leq {\bf b}$ coordinate-wise, then $f({\bf a})\leq f({\bf b})$. Applied to the case when the roots of $h_{\bf a}(z)$ and $h_{\bf b}(z)$ are the real numbers $-x_i$ and $-y_i$, this yields the original conjecture.
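Not part of the argument, but the conjecture is easy to probe numerically. Here is a minimal sketch using numpy's root finder (the function name `f` and the sample points below are my own choices, not from the post):

```python
import numpy as np

def f(a):
    """Sum of (log(-z))^2 over the roots z of
    h_a(z) = z^n + a_{n-1} z^{n-1} + ... + a_1 z + 1,
    with the principal branch of log (cut along R_{<=0})."""
    # numpy.roots wants coefficients from highest degree down:
    # [1, a_{n-1}, ..., a_1, 1]; here a = (a_1, ..., a_{n-1})
    coeffs = np.concatenate(([1.0], a[::-1], [1.0]))
    val = np.sum(np.log(-np.roots(coeffs)) ** 2)
    return val.real  # imaginary parts cancel over conjugate pairs
```

Comparing `f` at coordinate-wise comparable points, e.g. `f(np.array([1.0, 2.0, 3.0]))` against `f(np.array([1.25, 2.5, 3.75]))`, is consistent with the claimed monotonicity.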
Now, let me describe what I think is the proof of this new conjecture.
First of all, as is standard, I can write the function $f({\bf a})$ as a contour integral. In a neighborhood of a fixed ${\bf a}$, for any large enough $R$ and small enough $\epsilon>0$ there holds $$ f({\bf a}) = \frac 1{2\pi i}\int_{C} (\log(-z))^2 \frac {h_{\bf a}'(z)}{h_{\bf a}(z)}\,dz $$ over a contour $C$ in the union of $\mathbb C\setminus\mathbb R_{\geq 0}$ with two copies of $\mathbb R_{>0}$ which I will call the "north" and "south" shores (so that $\log(-t)=\log t -\pi i$ on the north shore and $\log(-t)=\log t +\pi i$ on the south shore). The contour $C$ is the union of the following four pieces $C_\epsilon$, $C_R$, $C_+$, $C_-$.
$C_\epsilon$ is a circle of radius $\epsilon$ around $z=0$ traveled from $\epsilon$-south to $\epsilon$-north clockwise.
$C_R$ is a circle of radius $R$ around $z=0$ traveled from $R$-north to $R$-south counterclockwise.
$C_+$ is the line segment $[\epsilon,R]$-north.
$C_-$ is the line segment $[R,\epsilon]$-south.
The derivative $\frac{\partial f({\bf a})}{\partial a_k}$ is the integral of the derivative, so we get: $$ \frac{\partial f({\bf a})}{\partial a_k}= \frac 1{2\pi i}\int_{C} (\log(-z))^2 \frac \partial{\partial a_k}\frac {h_{\bf a}'(z)}{h_{\bf a}(z)}\,dz $$ $$ = \frac 1{2\pi i}\int_{C} (\log(-z))^2 \Big(\frac {z^k} {h_{\bf a}(z)} \Big)'\,dz = -\frac 1{2\pi i}\int_{C} \Big((\log(-z))^2 \Big)'\frac {z^k} {h_{\bf a}(z)} \,dz $$ $$ =-\frac 1{\pi i}\int_{C} \log(-z)\frac {z^{k-1}} {h_{\bf a}(z)}\,dz = -\frac 1\pi {\rm Im}\Big(\int_{C} \log(-z)\frac {z^{k-1}} {h_{\bf a}(z)}\,dz\Big) $$ We can take the limit as $R\to +\infty$ and $\epsilon\to 0$. Since $k\leq n-1$, the integral over $C_R$ goes to zero (the length is $2\pi R$ and the integrand is $O(R^{-2}\log R)$). The integral over $C_\epsilon$ also goes to zero: since $k\geq 1$, the integrand is $O(\log \epsilon)$ while the length is $2\pi\epsilon$.
So we get $$ \frac{\partial f({\bf a})}{\partial a_k} = -\lim_{\epsilon\to 0^+}\frac 1\pi {\rm Im}\Big( \int_{[\epsilon,+\infty]-{\rm north}\cup [+\infty,\epsilon]-{\rm south}} \log(-z)\frac {z^{k-1}} {h_{\bf a}(z)}\,dz\Big) $$ $$ =-\lim_{\epsilon\to 0^+}\frac 1\pi \int_{\epsilon}^{+\infty}{\rm Im}\Big( (\log(t)-\pi i)\frac {t^{k-1}} {h_{\bf a}(t)} -(\log(t)+\pi i)\frac {t^{k-1}} {h_{\bf a}(t)}\Big)\,dt $$ $$ =2\lim_{\epsilon \to 0^+} \int_{\epsilon}^{+\infty} \frac {t^{k-1}}{h_{\bf a}(t)}\,dt >0.$$ This finishes the proof of the new conjecture, and consequently of the old conjecture.
Remark: I am guessing that there is a simpler argument for $$ \frac{\partial f({\bf a})}{\partial a_k} = 2\int_{0}^{+\infty} \frac {t^{k-1}}{h_{\bf a}(t)}\,dt $$ but I am just writing the first thing that came to my mind.
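If correct, this closed-form derivative lends itself to a quick numerical sanity check: compare a finite difference of $f$ against the integral. A sketch (the sample point, the index $k$, the step size, and the use of scipy quadrature are all my choices):

```python
import numpy as np
from scipy.integrate import quad

def f(a):
    """f(a) = sum of (log(-z))^2 over the roots z of
    h_a(z) = z^n + a_{n-1} z^{n-1} + ... + a_1 z + 1."""
    coeffs = np.concatenate(([1.0], a[::-1], [1.0]))
    return np.sum(np.log(-np.roots(coeffs)) ** 2).real

def h(a, t):
    """h_a(t) for real t > 0, with a = (a_1, ..., a_{n-1})."""
    n = len(a) + 1
    return t ** n + sum(c * t ** i for i, c in enumerate(a, start=1)) + 1.0

a = np.array([1.0, 2.0, 3.0])  # arbitrary sample point, n = 4
k = 2                          # differentiate with respect to a_2

# left side: central finite difference of f in a_k
eps = 1e-6
ap, am = a.copy(), a.copy()
ap[k - 1] += eps
am[k - 1] -= eps
finite_diff = (f(ap) - f(am)) / (2 * eps)

# right side: 2 * integral_0^infty t^(k-1) / h_a(t) dt
integral = quad(lambda t: 2 * t ** (k - 1) / h(a, t), 0, np.inf)[0]
```

The two quantities should agree to the accuracy of the finite difference, and the integral is manifestly positive.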
Lev's proof reminded me of two papers, and unless I'm doing something silly, the conjecture follows as a corollary of those papers. The answer below is just meant to supplement Lev's result, and he deserves the credit for coming up with an independent proof.
Consider the function \begin{equation*} D(x) := \sum\nolimits_j (\log x_j)^2. \end{equation*}
Let $e_k$ be the elementary symmetric polynomials in $x_i$, so that \begin{equation*} e_1 = \sum_i x_i,\quad e_k=\sum_{|I|=k}\prod_{i\in I}x_i,\quad e_n=\prod_{i}x_i. \end{equation*} Since $D(x)$ is symmetric, without loss of generality, we assume that the $x_i$ are arranged in increasing order, so that $0 < x_1 < \cdots < x_n$. Also, we assume wlog that $e_n(x)=1$.
Lemma.
\begin{equation*} \frac{\partial D}{\partial e_k} = 2(-1)^{k+1}\sum_{j=1}^n \frac{x_j^{n-k-1}\log x_j}{\prod_{i \neq j}(x_j-x_i)},\quad\text{for}\ 1\le k < n. \end{equation*} Proof: The $x_j$ are roots of the polynomial \begin{equation*} p(x) = x^n - e_1x^{n-1}+e_2x^{n-2}-\cdots+(-1)^ne_n. \end{equation*} Implicitly differentiating the roots $x_j$ as functions of the $e_k$ we have \begin{equation*} \frac{\partial x_j}{\partial e_k} = \frac{(-1)^{k+1}x_j^{n-k}}{\prod_{i\neq j}(x_j-x_i)}\qquad 1\le k < n. \end{equation*} Now use the chain rule \begin{equation*} \frac{\partial D}{\partial e_k} = \sum_{j=1}^n\frac{\partial D}{\partial x_j}\frac{\partial x_j}{\partial e_k}, \end{equation*} together with $\frac{\partial D}{\partial x_j} = \frac{2\log x_j}{x_j}$, to obtain the identity \begin{equation*} \frac{\partial D}{\partial e_k} = 2(-1)^{k+1}\sum_{j=1}^n\frac{x_j^{n-k-1}\log x_j}{\prod_{i\neq j}(x_j-x_i)}. \end{equation*} Writing $q = k+1$, we obtain for $2\le q \le n$, \begin{equation*} \frac{\partial D}{\partial e_{k}} = 2(-1)^{q}\sum_{j=1}^n\frac{x_j^{n-q}\log x_j}{\prod_{i\neq j}(x_j-x_i)}, \end{equation*} which is known to be nonnegative (and $q\ge 2$ is important there)---see Jozsa & Mitchison (2013), or the older paper Mitchison & Jozsa (2003).
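The identity $\frac{\partial D}{\partial e_k} = 2(-1)^{k+1}\sum_j x_j^{n-k-1}\log x_j/\prod_{i\neq j}(x_j-x_i)$ can also be sanity-checked by a finite difference in $e_k$, holding the other $e_i$ (including $e_n$) fixed. The sample point $x$ and index $k$ below are arbitrary choices of mine:

```python
import numpy as np

x = np.array([0.5, 1.2, 2.0, 3.1])  # arbitrary distinct positive values
n = len(x)

# np.poly gives the coefficients of prod(X - x_j):
# [1, -e_1, e_2, ..., (-1)^n e_n], so e_k = (-1)^k * coeffs[k]
coeffs = np.poly(x)
e = np.array([(-1) ** k * coeffs[k] for k in range(1, n + 1)])

def D_of_e(e):
    """D = sum_j (log x_j)^2, with the x_j recovered as roots of p."""
    c = [1.0] + [(-1) ** k * e[k - 1] for k in range(1, n + 1)]
    r = np.roots(c)
    return np.sum(np.log(r.real) ** 2)  # roots stay real under small perturbations

k = 2
eps = 1e-7
ep, em = e.copy(), e.copy()
ep[k - 1] += eps
em[k - 1] -= eps
fd = (D_of_e(ep) - D_of_e(em)) / (2 * eps)  # central finite difference

log_x = np.log(x)
formula = 2 * (-1) ** (k + 1) * sum(
    x[j] ** (n - k - 1) * log_x[j]
    / np.prod([x[j] - x[i] for i in range(n) if i != j])
    for j in range(n)
)
# fd and formula should agree to several digits, and both should be nonnegative
```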
Using the above Lemma, we see that $D$ is monotonic in each $e_k$, which establishes the original conjecture under the hypothesis that $e_k(x) \le e_k(y)$ for all $k$.
Moreover, using the cited 2013 paper we can probably prove much more, including multivariate complete monotonicity of $D$ of the form \begin{equation*} (-1)^{m}\frac{\partial^m D}{\partial e_{i_1}\partial e_{i_2}\ldots\partial e_{i_m}} \ge 0,\qquad m \ge 1. \end{equation*}
Since Lev Borisov has put up a complete solution, I'll record how far I got. In his notation, I was able to show $\frac{\partial f({\bf a})}{\partial a_k} > 0$ when $z^n + a_{n-1} z^{n-1} + \cdots + a_1 z + 1$ has all real roots, but I couldn't figure out how to descend from one real-rooted polynomial to another while keeping the roots real. I'm feeling annoyed with myself for not thinking harder about what would happen if I allowed complex roots. Just for the fun of it, here is how far I had gotten.