What is $\mathbb{E} [\max_{\sigma \in \{ \pm 1\}^n} \sigma^T Z \sigma]$ for a random Gaussian matrix $Z$?

This is related to the Sherrington-Kirkpatrick Spin Glass model. In arxiv.org/pdf/1412.0170.pdf, Dmitry Panchenko writes that $$\lim_{N\to\infty} \frac{1}{N} \mathbb{E}\big[\max_\sigma \frac{1}{\sqrt{N}} \sum_{i<j} g_{ij} \sigma_i \sigma_j \big] \approx 0.7633$$ with $g_{ij}$ iid standard normals. This is a consequence of the Parisi formula. Since $\sigma^T Z \sigma = \sum_i Z_{ii} + \sum_{i<j}(Z_{ij}+Z_{ji})\sigma_i\sigma_j$, where each $Z_{ij}+Z_{ji}$ has the law of $\sqrt{2}\,g_{ij}$ and the diagonal contributes only $o(N^{3/2})$, I believe your desired asymptotic should then be $0.7633\sqrt{2}\,N^{3/2}+ o(N^{3/2})$.
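For small $N$ one can sanity-check this constant by brute force. Here is a minimal sketch in Python, assuming NumPy; the size $n=10$ and trial count are arbitrary choices of mine, and finite-$N$ effects are significant at this scale, so don't expect close agreement with the limit:

```python
import itertools
import numpy as np

def max_quadratic_form(Z):
    """Brute-force max of sigma^T Z sigma over sigma in {-1,+1}^n (cost 2^n)."""
    n = Z.shape[0]
    best = -np.inf
    for signs in itertools.product((-1.0, 1.0), repeat=n):
        sigma = np.array(signs)
        best = max(best, sigma @ Z @ sigma)
    return best

rng = np.random.default_rng(0)
n, trials = 10, 100
est = np.mean([max_quadratic_form(rng.standard_normal((n, n))) for _ in range(trials)])
print(est / n**1.5, "vs limit", 0.7633 * np.sqrt(2))  # finite-n estimate vs conjectured limit
```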


This gives a lower bound of $\frac{4}{3\sqrt{\pi}}n^{3/2}+O(\sqrt{n})$ and an upper bound of $2n^{3/2}+O(\sqrt{n})$. Note: I may have messed up a constant somewhere in the arguments below, so I'm not certain these are exactly the constants they give, but the arguments should still work.

To get a lower bound, we can build $\sigma$ greedily. Choose $\sigma_1$ arbitrarily, and choose $\sigma_j$ depending on $\{\sigma_i,Z_{ij},Z_{ji}\mid i<j\}$. When we are choosing $\sigma_j$, we are trying to maximize $$ \sigma_j\sum_{i<j} \sigma_i(Z_{ij}+Z_{ji}). $$ Since all the $Z_{ij}$ and $Z_{ji}$ are iid $\mathcal N(0,1)$ and independent of $\{\sigma_i\mid i<j\}$, we have $\sum_{i<j} \sigma_i(Z_{ij}+Z_{ji})\sim \mathcal N(0,2(j-1))$, so by choosing the sign of $\sigma_j$ to match the sign of this sum we get $$ \mathbb E\left[\sigma_j\sum_{i<j} \sigma_i(Z_{ij}+Z_{ji})\right]=\sqrt{2(j-1)}\sqrt{2/\pi}=\frac{2}{\sqrt{\pi}}\sqrt{j-1}, $$ where $\sqrt{2/\pi}=\mathbb E|x|$ for $x\sim \mathcal N(0,1)$.

The diagonal terms contribute nothing in expectation, since $\sigma_i^2=1$ and $\mathbb E[Z_{ii}]=0$, so for this $\sigma$ we get $$ \mathbb E[ \sigma^TZ\sigma]=\mathbb E\left[\sum_{i=1}^n Z_{ii}\sigma_i^2+\sum_{j=1}^n \sigma_j\sum_{i<j} \sigma_i(Z_{ij}+Z_{ji})\right]=\frac{2}{\sqrt{\pi}}\sum_{j=1}^n \sqrt{j-1}. $$ I'm sure there are more precise ways to evaluate the last sum, but just comparing it to Riemann sums from above and below, we get $$ \sum_{j=1}^n \sqrt{j-1}=n^{3/2}\int_0^1\sqrt{x}\, dx+O(\sqrt{n})=\frac{2}{3}n^{3/2}+O(\sqrt{n}). $$ Putting it all together, the greedy construction yields the lower bound $$ \mathbb E\left[\max_\sigma \sigma^TZ\sigma\right]\ge \frac{4}{3\sqrt{\pi}}n^{3/2}+O(\sqrt{n}) $$ (hopefully I didn't mess up the coefficient in front).
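The greedy construction is easy to simulate. A minimal sketch, assuming NumPy (the size $n=200$ and 100 trials are arbitrary choices of mine); the printed ratio should land near $\frac{4}{3\sqrt{\pi}}\approx 0.752$:

```python
import numpy as np

def greedy_sigma(Z):
    """Greedily pick sigma_j so that sigma_j * sum_{i<j} sigma_i (Z_ij + Z_ji) >= 0."""
    n = Z.shape[0]
    sigma = np.ones(n)
    for j in range(1, n):
        s = sigma[:j] @ (Z[:j, j] + Z[j, :j])
        sigma[j] = 1.0 if s >= 0 else -1.0
    return sigma

rng = np.random.default_rng(0)
n, trials = 200, 100
vals = []
for _ in range(trials):
    Z = rng.standard_normal((n, n))
    sigma = greedy_sigma(Z)
    vals.append(sigma @ Z @ sigma)
print(np.mean(vals) / n**1.5, "vs", 4 / (3 * np.sqrt(np.pi)))  # empirical vs predicted constant
```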

To get the upper bound, we note that for any fixed $\sigma$, we have $\sigma^TZ\sigma\sim \mathcal N(0,n^2)$, since it is a sum of the $n^2$ independent standard Gaussians $\sigma_i\sigma_j Z_{ij}$. So (this is not using optimal Gaussian tail bounds) $$ \mathbb P\left[\sigma^TZ\sigma>Cn^{3/2}\right]<\exp\left(-\frac{C^2}{2}n\right). $$ Since there are only $2^n\le e^{2n}$ choices for $\sigma$, a union bound gives $$ \mathbb P\left[\max_\sigma \sigma^TZ\sigma>Cn^{3/2}\right]<\exp\left(\left(2-\frac{C^2}{2}\right)n\right). $$ Then, using $\mathbb E[X]\le a+\int_a^\infty \mathbb P(X>s)\,ds$ with $a=2n^{3/2}$ and substituting $s=tn^{3/2}$, we get $$ \mathbb E\left[\max_\sigma \sigma^TZ\sigma\right]\le 2n^{3/2}+\int_{2n^{3/2}}^\infty \mathbb P\left(\max_\sigma \sigma^TZ\sigma>s\right)\, ds\le n^{3/2}\left(2+\int_{2}^\infty \exp\left(\left(2-\frac{t^2}{2}\right)n\right)\, dt\right)\le 2n^{3/2}+O(\sqrt{n}), $$ where the last step uses $t^2/2\ge t$ for $t\ge 2$, so the inner integral is at most $\int_2^\infty e^{(2-t)n}\,dt=1/n$.
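As a quick check that the correction term really is $O(\sqrt{n})$, here is a minimal numeric sketch (assuming NumPy; the grid and cutoff at $t=6$ are arbitrary choices of mine) comparing $n^{3/2}\int_2^\infty e^{(2-t^2/2)n}\,dt$ with the bound $n^{3/2}\cdot\frac1n=\sqrt{n}$:

```python
import numpy as np

# Tail correction in the upper bound: n^{3/2} * int_2^inf exp((2 - t^2/2) n) dt.
# Since t^2/2 >= t for t >= 2, this is at most n^{3/2} * int_2^inf exp((2 - t) n) dt = sqrt(n).
for n in (10, 100, 1000):
    t = np.linspace(2.0, 6.0, 400_001)        # integrand is negligible beyond t = 6
    integrand = np.exp((2.0 - t**2 / 2.0) * n)
    tail = np.sum(integrand) * (t[1] - t[0])  # simple Riemann sum
    print(n, n**1.5 * tail, "<=", np.sqrt(n))
```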