Tilting the $d$-cube to vertically separate its vertices

Given a unit vector $u \in \mathbb R^d$, the "heights" of the vertices of the $d$-cube, with $u$ regarded as the vertical direction, are the sums of subsets of the entries of $u$. Thus the minimum separation is the minimum difference between the sums of two distinct subsets of these entries. If you take $$u = \frac{[2^0,2^1,\ldots,2^{d-1}]}{\sqrt{(2^0)^2 + (2^1)^2 + \cdots + (2^{d-1})^2}} = \sqrt{\dfrac{3}{4^d-1}}\,[2^0,2^1,\ldots,2^{d-1}]$$ you get uniform separation of $\sqrt{3/(4^d-1)}$.
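For a quick numerical sanity check of this construction (a minimal sketch, assuming Python with NumPy; the function name is mine), one can enumerate all $2^d$ subset sums of the normalized vector and compare the smallest gap with $\sqrt{3/(4^d-1)}$:

```python
# Minimal sanity check (assumes Python 3 with NumPy): enumerate all 2^d subset
# sums of the normalized vector and compare the smallest gap with sqrt(3/(4^d-1)).
import numpy as np

def min_separation(u):
    """Smallest difference between two distinct subset sums of the entries of u."""
    d = len(u)
    heights = np.sort([sum(u[i] for i in range(d) if (v >> i) & 1)
                       for v in range(2 ** d)])
    return np.min(np.diff(heights))

for d in range(2, 8):
    u = np.array([2.0 ** i for i in range(d)])
    u /= np.linalg.norm(u)
    print(d, min_separation(u), np.sqrt(3.0 / (4.0 ** d - 1)))
```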


Just a small additional observation: it is possible to have nonuniform separations that are locally but not globally optimal. By locally optimal I mean that small perturbations of the vector $u$ do not improve the minimum separation.

Here are some examples in $d=3$ and $d=4$. The leftmost solutions are Robert's uniform-separation solutions. In the others, the vertical order of the corners is different. (Corners are numbered here as bit strings, so in $d=4$ the four neighbors of the origin are 1, 2, 4, 8. In Robert's solution the vertical order is then the same as the numerical order. Edges from the origin to its neighbors are shown in red.)

*(Figure: locally optimal solutions in $d=3$)*

*(Figure: locally optimal solutions in $d=4$)*

This means that one cannot characterize the optimal solution by local conditions on derivatives; the problem seems to have a "combinatorial" nature. If one first fixes the vertical order of all $2^d$ corners, then maximizing the separation can be formulated as a quadratic optimization problem. The remaining question is then: what is the best vertical order? But naively trying all possibilities (e.g. all permutations of the $2^d$ corners) runs into trouble for $d>4$, even if one applies some obvious symmetry reductions (without loss of generality, assume that corner 0 is lowest and that its $d$ neighbors are in increasing numerical order).
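For concreteness, here is one way the fixed-order subproblem could be set up (a sketch only, assuming the cvxpy package; the helper names are illustrative and not from the original post): minimize $\|u\|_2^2$ subject to consecutive heights in the given order differing by at least $1$, then normalize $u$, so the achieved separation is $1/\|u\|_2$. Only consecutive differences need to be constrained, since any two corners in the order are then automatically separated.

```python
# Sketch of the fixed-order subproblem as a convex QP (assumes the cvxpy package;
# helper names are illustrative). For a fixed vertical order of the 2^d corners,
# minimize ||u||^2 subject to consecutive corner heights differing by at least 1;
# after normalizing u, the achieved minimum separation is 1/||u||.
import numpy as np
import cvxpy as cp

def bits(v, d):
    """Corner v of the d-cube as a 0/1 coordinate vector."""
    return np.array([(v >> i) & 1 for i in range(d)], dtype=float)

def best_u_for_order(order, d):
    u = cp.Variable(d)
    constraints = [(bits(b, d) - bits(a, d)) @ u >= 1
                   for a, b in zip(order, order[1:])]
    cp.Problem(cp.Minimize(cp.sum_squares(u)), constraints).solve()
    return None if u.value is None else u.value / np.linalg.norm(u.value)

# The numerical order of the corners (Robert's order) in d = 3
# recovers u proportional to [1, 2, 4], i.e. [1, 2, 4] / sqrt(21).
print(best_u_for_order(list(range(8)), 3))
```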

For $d=4$, with some symmetry reductions, one needs to try only 252 different vertical orders (I can provide more details if needed), and the corresponding QP programs find three essentially different solutions (each optimal for its particular vertical order). These three solutions are shown above; the best is Robert's solution, so for $d=4$ we know it is the best possible.

Perhaps there is an obvious geometric reason why the optimal separation has to be uniform?


Robert Israel's answer is best possible.

As pointed out there, the problem is equivalent to finding a vector in $\mathbb{R}^d$ whose subset sums have minimum separation $1$, and which has the least possible length ($\ell_2$ norm). The vector Robert constructed, after rescaling to make the separation $1$, is $v_d=[2^0,2^1,\dots,2^{d-1}]$. Clearly this has the smallest possible $\ell_1$ norm.

It turns out that it also has the smallest possible $\ell_2$ norm, but for $d>3$ not the smallest $\ell_\infty$ norm.

Proof. It is a straightforward consequence of the following two facts:

  1. Lemma. The following equality holds in $\mathbb{Z}[x_1,\dots x_d]$:

$$\sum_{S,T\subseteq I_d}(x_S-x_T)^2=C_d \sum_{i=1}^d x_i^2$$

where $I_d=\{1,2,\dots, d\}$, $\displaystyle x_S=\sum_{i\in S} x_i$, and the positive integer $C_d$ depends only on $d$.

  2. If $x=v_d$ then $\{x_S\}=\{0,1,2,\dots, 2^d-1\}$ is clearly the tightest possible packing of values among vectors with separation $1$ (after sorting, the $k$-th and $l$-th smallest subset sums differ by at least $|k-l|$), unique up to permutations, and therefore it minimizes the left-hand side in the lemma.

Proof of Lemma. Fix $i\neq j$. There are 16 equally frequent cases, according to whether each of $i$ and $j$ belongs to $S$ and to $T$, for the occurrence of $x_ix_j$ in $(x_S-x_T)^2$:

$$\begin{array}{@{}l|l|l|c|c@{}} \mbox{case} & \in S & \in T & x_S-x_T & x_ix_j \text{ coef in }\\ & & & & (x_S-x_T)^2\\ \hline \mbox{1} & \_ & \_ & \cdots & 0 \\ \mbox{2} & i & \_ & x_i\cdots & 0 \\ \mbox{3} & j & \_ & x_j\cdots & 0 \\ \mbox{4} & i,j & \_ & x_i+x_j\cdots & 2 \\ \mbox{5} & \_ & i & -x_i\cdots & 0 \\ \mbox{6} & i & i & \cdots & 0 \\ \mbox{7} & j & i & x_j-x_i\cdots & -2 \\ \mbox{8} & i,j & i & x_j\cdots & 0 \\ \mbox{9} & \_ & j & -x_j\cdots & 0 \\ \mbox{10} & i & j & x_i-x_j\cdots & -2 \\ \mbox{11} & j & j & \cdots & 0 \\ \mbox{12} & i,j & j & x_i\cdots & 0 \\ \mbox{13} & \_ & i,j & -x_i-x_j\cdots & 2 \\ \mbox{14} & i & i,j & -x_j\cdots & 0 \\ \mbox{15} & j & i,j & -x_i\cdots & 0 \\ \mbox{16} & i,j & i,j & \cdots & 0 \end{array}$$

The total coefficient of $x_ix_j$ over all the cases is $0$. Therefore $\sum_{S,T\subseteq I_d}(x_S-x_T)^2$ contains only square terms, and by symmetry it must then be a multiple of $\sum_{i=1}^d x_i^2$. $\quad\blacksquare$
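As a side note and an independent check (my own computation, not part of the argument above): counting the pairs $(S,T)$ in which exactly one of the two sets contains a given index gives $C_d = 2\cdot 2^{d-1}\cdot 2^{d-1} = 2^{2d-1}$. A short symbolic verification of the identity, assuming sympy is available, could look like this:

```python
# Independent symbolic check of the lemma (assumes sympy): expand the left-hand
# side and divide by x_1^2 + ... + x_d^2; the quotient should be the constant
# C_d, which comes out as 2^(2d-1).
from itertools import chain, combinations
import sympy as sp

def C_of_lemma(d):
    x = sp.symbols(f'x1:{d + 1}')                       # x1, ..., xd
    subsets = list(chain.from_iterable(combinations(range(d), r)
                                       for r in range(d + 1)))
    lhs = sp.expand(sum((sum(x[i] for i in S) - sum(x[i] for i in T)) ** 2
                        for S in subsets for T in subsets))
    return sp.cancel(lhs / sum(xi ** 2 for xi in x))    # constant iff lemma holds

for d in range(1, 5):
    print(d, C_of_lemma(d), 2 ** (2 * d - 1))
```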

Regarding the fact that $v_d$ above is not minimal $\ell_\infty$-wise, counterexamples are provided via the Atkinson-Negro-Santoro sequence:

$d=4: [3,5,6,7]$

$d=5: [6,9,11,12,13]$

$d=6: [11,17,20,22,23,24]$

etc.
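A quick check (a sketch in Python; the vectors are the ones listed above) that these sets have all subset sums distinct, a smaller largest entry than $v_d$, yet, consistently with the $\ell_2$ result above, a larger $\ell_2$ norm:

```python
# Check that the vectors listed above have pairwise distinct subset sums (hence
# separation at least 1, since the entries are integers), a smaller l_infinity
# norm than v_d = [2^0, ..., 2^(d-1)], but a larger l_2 norm.
def subset_sums(v):
    sums = [0]
    for a in v:
        sums += [s + a for s in sums]
    return sums

examples = {4: [3, 5, 6, 7], 5: [6, 9, 11, 12, 13], 6: [11, 17, 20, 22, 23, 24]}
for d, v in examples.items():
    v_d = [2 ** i for i in range(d)]
    sums = subset_sums(v)
    assert len(set(sums)) == len(sums)                 # all 2^d sums distinct
    print(d, max(v), '<', max(v_d), 'and',
          sum(a * a for a in v), '>', sum(a * a for a in v_d))
```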