A determinant inequality

Edit. I now think that your question is related to Gårding's theory of hyperbolic polynomials.

A homogeneous polynomial $p$ of degree $d$ in $N$ real variables is hyperbolic in the direction $\bf e$ if for every vector $X$, the roots of the polynomial $t\mapsto p(X+t{\bf e})$ are real. We may suppose that $p({\bf e})>0$. The connected component of $\bf e$ in $\{p>0\}$ is the forward cone; it is convex. Actually, $p$ is hyperbolic in the direction of any vector of the forward cone. Let us denote by $\Gamma$ the closure of the forward cone. Gårding proved a reverse Hölder inequality in terms of the polar form $\phi$ associated with $p$: $$p(x_1)^{1/d}\cdots p(x_d)^{1/d}\le\phi(x_1,\ldots,x_d)$$ for every $x_1,\ldots,x_d\in\Gamma$. He also found that $p^{1/d}$ is concave over $\Gamma$. Finally, the derivative of $p$ in a forward direction is again a hyperbolic polynomial, whose forward cone contains (strictly, in general) that of $p$.
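
These statements are easy to sanity-check numerically in the determinant case: for $p=\det$ on $n\times n$ symmetric matrices ($d=n$), the concavity of $p^{1/d}$ over $\Gamma$ amounts to Minkowski's determinant inequality $\det(A+B)^{1/n}\ge\det(A)^{1/n}+\det(B)^{1/n}$. Here is a small NumPy sketch (a numerical check only, with ad hoc helper names, not part of the argument):

```python
# Numerical sanity check only: for p = det on n x n symmetric matrices
# (d = n), Garding's concavity of p^{1/d} over the forward cone amounts to
# Minkowski's inequality det(A+B)^{1/n} >= det(A)^{1/n} + det(B)^{1/n}.
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive definite matrix (ad hoc helper)."""
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

n = 5
for _ in range(1000):
    A, B = random_spd(n), random_spd(n)
    lhs = np.linalg.det(A + B) ** (1 / n)
    rhs = np.linalg.det(A) ** (1 / n) + np.linalg.det(B) ** (1 / n)
    assert lhs >= rhs - 1e-9
print("no violation of Minkowski's determinant inequality found")
```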

How does this apply here? The map $\sigma_d:S\mapsto \det S$ is a hyperbolic polynomial over the symmetric matrices, in the direction of $I_d$; here $N=\frac{d(d+1)}2$. This is just saying that every symmetric matrix has real eigenvalues. Its forward cone is that of positive definite matrices. Differentiating in the direction of $I_d$, one obtains (up to a constant factor) $\sigma_{d-1}$, then $\sigma_{d-2}$, and so on. Their closed forward cones $\Gamma_d$, $\Gamma_{d-1},\ldots$ are larger and larger; in particular, they all contain ${\bf SPD}_d$.
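
As a quick illustration of the differentiation statement (a numerical sketch only; the helper names are mine): expanding $\det(S+tI_n)=\sum_k\sigma_{n-k}(S)\,t^k$ shows that successive derivatives at $t=0$ produce $\sigma_{n-1},\sigma_{n-2},\ldots$ up to factorial constants.

```python
# Numerical illustration only: det(S + t I) = sum_k sigma_{n-k}(S) t^k,
# so differentiating in the direction I_n yields sigma_{n-1}, sigma_{n-2}, ...
# up to constant (factorial) factors.
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(1)

def sigma(k, S):
    """k-th elementary symmetric polynomial of the eigenvalues of S."""
    lam = np.linalg.eigvalsh(S)
    return sum(prod(c) for c in combinations(lam, k))

n = 4
M = rng.standard_normal((n, n))
S = (M + M.T) / 2                      # an arbitrary symmetric matrix

for t in (0.0, 0.7, -1.3, 2.5):
    expansion = sum(sigma(n - k, S) * t ** k for k in range(n + 1))
    assert np.isclose(np.linalg.det(S + t * np.eye(n)), expansion)
print("det(S + tI) agrees with sum_k sigma_{n-k}(S) t^k")
```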

As mentioned below, your inequality amounts to $$(A+D=B+C,\, D\le B,C\le A)\Longrightarrow(\sigma_k(A)\sigma_k(D)\le\sigma_k(B)\sigma_k(C)).$$ I suspect that something stronger holds true: if $p$ is hyperbolic, with closed forward cone $\Gamma$, then for all vectors $A,X,Y\in\Gamma$, the vectors $A$, $B=A+X$, $C=A+Y$ and $D=A+X+Y$ satisfy $p(A)p(D)\le p(B)p(C)$. I point out that if $X,Y$ are collinear (so that $A,B,C,D$ are collinear), then this is true because of the concavity of $p^{1/d}$.
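
A quick random search for counterexamples to this suspected inequality, taking $p=\sigma_k$ with $A$ positive definite and $X,Y$ positive semidefinite, finds none (NumPy, ad hoc names, only a sanity check):

```python
# Random search (no counterexample found) for the suspected inequality
# sigma_k(A) sigma_k(A+X+Y) <= sigma_k(A+X) sigma_k(A+Y),
# with A positive definite and X, Y positive semidefinite.
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(2)

def sigma(k, S):
    lam = np.linalg.eigvalsh(S)
    return sum(prod(c) for c in combinations(lam, k))

def random_psd(n, rank):
    M = rng.standard_normal((n, rank))
    return M @ M.T

n = 5
for _ in range(500):
    A = random_psd(n, n) + 0.1 * np.eye(n)           # positive definite
    X, Y = random_psd(n, 2), random_psd(n, 2)        # positive semidefinite
    for k in range(1, n + 1):
        lhs = sigma(k, A) * sigma(k, A + X + Y)
        rhs = sigma(k, A + X) * sigma(k, A + Y)
        assert lhs <= rhs * (1 + 1e-10)
print("no counterexample found")
```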

I was able to prove the claim when $k=2$, in which case $p$ can be written in the Lorentz form $p(s,x)=s^2-|x|^2$ in appropriate coordinates $X=(s,x)$. The proof is somewhat cumbersome; here are the main arguments. The quantity $$F(s,t,a,x,y)=p(B)p(C)-p(A)p(D),\qquad A=(1,a),\,X=(s,x),\,Y=(t,y)$$ is a concave function of $x$ and $y$. Let us fix $s,t>0$. When minimizing over $|x|\le s$ and $|y|\le t$, concavity forces the minimum to be attained on the boundary, where these constraints are equalities. It remains to minimize with respect to $a$ in the unit ball. If $a$ is on the unit sphere, $F$ is trivially $\ge0$. Otherwise, a minimum is reached at a point where $\nabla_aF=0$. An interesting calculation shows that this minimum is precisely zero. I can write out the details if you wish.
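
For what it is worth, here is a small numerical test of this $k=2$ (Lorentz) case; by homogeneity I sample $A$ anywhere in the closed forward cone rather than normalizing to $A=(1,a)$. Again, this is only a sanity check, not the proof:

```python
# Numerical check of the Lorentz (k = 2) case: p(s, x) = s^2 - |x|^2, and
# F = p(A+X) p(A+Y) - p(A) p(A+X+Y) for A, X, Y in the closed forward cone.
import numpy as np

rng = np.random.default_rng(3)

def lorentz(v):
    return v[0] ** 2 - np.dot(v[1:], v[1:])

def random_forward(dim):
    """Random vector (s, x) with s >= |x| (closed forward cone)."""
    x = rng.standard_normal(dim)
    s = np.linalg.norm(x) * rng.uniform(1.0, 2.0)
    return np.concatenate(([s], x))

dim = 3
for _ in range(5000):
    A, X, Y = (random_forward(dim) for _ in range(3))
    F = lorentz(A + X) * lorentz(A + Y) - lorentz(A) * lorentz(A + X + Y)
    assert F >= -1e-9
print("F >= 0 on all sampled Lorentz-cone triples")
```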


Let me begin with two observations. On the one hand, the quantity $$\sum_{Q\in S(n,k)}\det(A_Q)=:\sigma_k(A)$$ is nothing but the $k$-th elementary symmetric polynomial in the eigenvalues of $A$, whence my notation.
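
This identity (sum of the $k\times k$ principal minors $=$ $e_k$ of the eigenvalues) can also be checked numerically; a small NumPy sketch with ad hoc names:

```python
# Check of the first observation: the sum of the k x k principal minors of a
# symmetric matrix equals the k-th elementary symmetric polynomial of its
# eigenvalues (ad hoc helper names).
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(4)

def sum_principal_minors(k, A):
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(Q, Q)]) for Q in combinations(range(n), k))

def e_k_of_eigenvalues(k, A):
    lam = np.linalg.eigvalsh(A)
    return sum(prod(c) for c in combinations(lam, k))

n = 6
M = rng.standard_normal((n, n))
A = M + M.T                                   # any symmetric matrix works here
for k in range(1, n + 1):
    assert np.isclose(sum_principal_minors(k, A), e_k_of_eigenvalues(k, A))
print("sum of k x k principal minors = e_k(eigenvalues) for k = 1..n")
```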

On the other hand, if the required inequality is true, then a recursive use of it also gives the inequality $$({\bf I}_k)\qquad \sigma_k(A)\sigma_k(D)\le\sigma_k(B)\sigma_k(C)$$ whenever $A,B,C,D$, symmetric positive definite, obey the constraints $$({\bf C})\qquad A+D=B+C,\qquad D\le B,C\le A.$$

I claim that this inequality is true at least for $k=1$ and $k=n$ (I suspect that it remains true for every $k$). When $k=1$, this is because $\sigma_1$ is the trace, and the constraints imply $${\rm Tr}\,A+{\rm Tr}\,D={\rm Tr}\,B+{\rm Tr}\,C,\qquad0<{\rm Tr}\,D\le{\rm Tr}\,B,{\rm Tr}\,C\le{\rm Tr}\,A.$$ And we know that $a+d=b+c$ and $0<d\le b,c\le a$ imply $ad\le bc$: indeed, writing $a=b+c-d$, we get $bc-ad=bc-(b+c-d)d=(b-d)(c-d)\ge0$.

For $k=n$, we must prove $\det A\det D\le \det B\det C$. To proceed, let us define $$X=\frac12(A+D)=\frac12(B+C),\qquad T=X-B,\qquad S=X-D.$$ The constraints are that $X>0$ and $\pm T\le S\le X$. We want to prove $$\det(X+S)\det(X-S)\le\det(X-T)\det(X+T).$$ Multiplying every matrix on the left and on the right by $X^{-1/2}$, and using the multiplicativity of the determinant, we may restrict to the case $X=I_n$. It remains to prove $$(|T|\le S\le I_n)\Longrightarrow(\det(I_n-S^2)\le\det(I_n-T^2)),$$ where $|T|$, the absolute value, is given by the functional calculus. Note that because the right-hand side involves only $T^2$, which equals $|T|^2$, we may also assume that $0_n\le T$. It therefore remains to check that $F:T\mapsto\det(I_n-T^2)$ is non-increasing over $0_n\le T\le I_n$. To this end, we differentiate $$DF(T)\cdot H=-{\rm Tr}\bigl(\widehat{I_n-T^2}\,(HT+TH)\bigr),$$ where $\hat M$ is the adjugate of $M$. Up to a density argument, we may assume that $T<I_n$, so that $I_n-T^2$ is invertible. Since $T$ commutes with $\widehat{I_n-T^2}$, we obtain $DF(T)\cdot H=-{\rm Tr}(HQ)$ where $$Q=2\det(I_n-T^2)\,T(I_n-T^2)^{-1}.$$ Because $Q\ge0_n$, we have $DF(T)\cdot H\le0$ whenever $H\ge0_n$; the monotonicity holds true and the proof is complete.
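
Here is a small numerical check of this last step, comparing a finite difference of $F$ against the formula $DF(T)\cdot H=-{\rm Tr}(HQ)$ above, and testing the resulting monotonicity on random pairs $0_n\le T\le S\le I_n$ (NumPy, ad hoc names, not part of the proof):

```python
# Finite-difference check of DF(T)[H] = -Tr(adj(I - T^2)(HT + TH)) = -Tr(H Q),
# Q = 2 det(I - T^2) T (I - T^2)^{-1}, followed by a random test of the
# monotonicity det(I - S^2) <= det(I - T^2) for 0 <= T <= S <= I.
import numpy as np

rng = np.random.default_rng(5)

def random_psd_bounded(n, hi):
    """Random symmetric matrix with eigenvalues in [0, hi] (ad hoc helper)."""
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q @ np.diag(rng.uniform(0.0, hi, size=n)) @ Q.T

n = 5
F = lambda T: np.linalg.det(np.eye(n) - T @ T)

T = random_psd_bounded(n, 0.9)
M = rng.standard_normal((n, n))
H = M + M.T                                   # arbitrary symmetric direction
eps = 1e-6
fd = (F(T + eps * H) - F(T - eps * H)) / (2 * eps)
Qmat = 2 * F(T) * T @ np.linalg.inv(np.eye(n) - T @ T)
assert np.isclose(fd, -np.trace(H @ Qmat), rtol=1e-4, atol=1e-7)

for _ in range(1000):
    T = random_psd_bounded(n, 0.6)
    S = T + random_psd_bounded(n, 0.3)        # T <= S <= 0.9 I < I
    assert F(S) <= F(T) + 1e-12
print("derivative formula and monotonicity confirmed on random samples")
```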

Edit. I find it embarrassing that the constraints (C) are invariant under congruence $M\mapsto P^TMP$, whereas the inequalities (I$_k$) to be proved are not, except for $k=n$.


As suspected, the desired inequality actually holds for all hyperbolic polynomials; the inequality in the OP follows as a corollary (Corollary 1) of Theorem 2 (which seems to be new).

We will need the following remarkable theorem to obtain our result.

Theorem 1 (Bauschke, Güler, Lewis, Sendov, 2001). Let $p$ be a homogeneous hyperbolic polynomial; let $v$ be a vector in the strict interior of the hyperbolicity cone $\Lambda_{++}$ of $p$. Then, \begin{equation*} g(x) := \frac{p(x)}{Dp(x)[v]} \end{equation*} is concave on $\Lambda_{++}$.

This theorem helps prove the more general inequality (also conjectured by Denis Serre above).

Theorem 2. Let $p$ be a homogeneous hyperbolic polynomial with hyperbolicity cone $\Lambda_{++}$. Let $a, b, c \in \Lambda_{++}$. Then, $p$ satisfies the (conic log-submodularity) inequality: \begin{equation*} \tag{0} p(a)p(a+b+c) \le p(a+b)p(a+c). \end{equation*}

Proof. Let $c \in \Lambda_{++}$ be arbitrary. Consider the function $f(a) := \frac{p(a+c)}{p(a)}$. Inequality (0) says precisely that $f(a+b)\le f(a)$ for all $b\in\Lambda_{++}$, i.e., that $f$ is monotonically decreasing on the cone $\Lambda_{++}$. Equivalently, we consider $\log f$ and show that its directional derivative is nonpositive in every direction $v\in\Lambda_{++}$. That is, we show that \begin{equation} \tag{1} \frac{Dp(a+c)[v]}{p(a+c)} - \frac{Dp(a)[v]}{p(a)} \le 0\quad\Longleftrightarrow\quad \frac{p(a+c)}{Dp(a+c)[v]} \ge \frac{p(a)}{Dp(a)[v]}. \end{equation} But from Theorem 1, we know that $g(x):=\frac{p(x)}{Dp(x)[v]}$ is concave on $\Lambda_{++}$. Moreover, since $p$ is homogeneous of degree $d$ and $Dp(\cdot)[v]$ is homogeneous of degree $d-1$, the function $g$ is positively homogeneous of degree $1$; combined with concavity, this yields superadditivity: \begin{equation*} \frac{p(a+c)}{Dp(a+c)[v]} \ge \frac{p(a)}{Dp(a)[v]} + \frac{p(c)}{Dp(c)[v]}, \end{equation*} which is stronger than the desired monotonicity inequality (1) (since all terms are nonnegative).
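
For $p=\det$ on symmetric positive definite matrices, Jacobi's formula gives $Dp(x)[v]=\det(x)\,{\rm Tr}(x^{-1}v)$, so the superadditivity step reads $1/{\rm Tr}((a+c)^{-1}v)\ge 1/{\rm Tr}(a^{-1}v)+1/{\rm Tr}(c^{-1}v)$. A quick numerical check of this instance (ad hoc names, not part of the proof):

```python
# Check of the superadditivity step for p = det on SPD matrices: by Jacobi's
# formula, Dp(x)[v] = det(x) Tr(x^{-1} v), so p(x)/Dp(x)[v] = 1/Tr(x^{-1} v),
# and the claim is g(a + c) >= g(a) + g(c) for positive definite a, c, v.
import numpy as np

rng = np.random.default_rng(6)

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + 0.1 * np.eye(n)

def g(x, v):
    return 1.0 / np.trace(np.linalg.solve(x, v))   # = 1 / Tr(x^{-1} v)

n = 5
for _ in range(2000):
    a, c, v = random_spd(n), random_spd(n), random_spd(n)
    assert g(a + c, v) >= g(a, v) + g(c, v) - 1e-10
print("superadditivity of p/Dp[v] holds on all samples (p = det)")
```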

Corollary 1. Let $E_k(A) = e_k \circ \lambda(A)$ denote the $k$-th elementary symmetric polynomial of the eigenvalues of a positive definite matrix $A$. Then for any positive definite $A, B, C$ we have \begin{equation*} E_k(A)E_k(A+B+C) \le E_k(A+B)E_k(A+C). \end{equation*} This log-submodularity immediately implies the log-submodularity of the set function $f_k(S) := E_k(A+\sum\nolimits_{i\in S}v_iv_i^T)$.
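
Indeed, applying the corollary with base matrix $A+\sum_{i\in S\cap T}v_iv_i^T$ and increments $\sum_{i\in S\setminus T}v_iv_i^T$ and $\sum_{i\in T\setminus S}v_iv_i^T$ (with a continuity argument for the possibly singular increments) yields $f_k(S\cap T)f_k(S\cup T)\le f_k(S)f_k(T)$. A small numerical illustration (NumPy, ad hoc names):

```python
# Illustration of the set-function consequence: f_k(S) = E_k(A + sum_{i in S} v_i v_i^T)
# is log-submodular, i.e. f_k(S) f_k(T) >= f_k(S | T) f_k(S & T).
import numpy as np
from itertools import combinations
from math import prod

rng = np.random.default_rng(7)

def E(k, M):
    lam = np.linalg.eigvalsh(M)
    return sum(prod(c) for c in combinations(lam, k))

n, m = 4, 6                                   # matrix size, number of vectors
G = rng.standard_normal((n, n))
A = G @ G.T + np.eye(n)                       # positive definite base matrix
V = rng.standard_normal((m, n))

def f(k, S):
    M = A + sum((np.outer(V[i], V[i]) for i in S), np.zeros((n, n)))
    return E(k, M)

for _ in range(200):
    S = {i for i in range(m) if rng.random() < 0.5}
    T = {i for i in range(m) if rng.random() < 0.5}
    for k in range(1, n + 1):
        assert f(k, S) * f(k, T) >= f(k, S | T) * f(k, S & T) * (1 - 1e-10)
print("log-submodularity of f_k confirmed on random subsets")
```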