Matrix Linear Least Squares Problem with Diagonal Matrix Constraint

The Problem

Stating the problem in a more general form:

$$ \arg \min_{S} f \left( S \right) = \arg \min_{S} \frac{1}{2} \left\| A S {B}^{T} - C \right\|_{F}^{2} $$

The derivative is given by:

$$ \frac{d}{d S} \frac{1}{2} \left\| A S {B}^{T} - C \right\|_{F}^{2} = A^{T} \left( A S {B}^{T} - C \right) B $$

Solution to General Form

The derivative vanishes at:

$$ \hat{S} = \left( {A}^{T} A \right)^{-1} {A}^{T} C B \left( {B}^{T} B \right)^{-1} $$
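As a quick sanity check (a minimal sketch; the random matrices below are only placeholders, not part of the original code), the gradient should vanish at $ \hat{S} $:

mA = randn(10, 4);
mB = randn(12, 4);
mC = randn(10, 12);

mS = (mA.' * mA) \ (mA.' * mC * mB) / (mB.' * mB); %<! The Least Squares solution above
mG = mA.' * ((mA * mS * mB.') - mC) * mB;          %<! The gradient evaluated at the solution
disp(norm(mG, 'fro'));                             %<! Should be ~0 up to numerical precision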

Solution with Diagonal Matrix

The set of diagonal matrices $ \mathcal{D} = \left\{ D \in \mathbb{R}^{m \times n} \mid D = \operatorname{diag} \left( D \right) \right\} $ is a convex set (easy to prove from the definition, since any linear combination of diagonal matrices is diagonal).

Moreover, the projection of a given matrix $ Y \in \mathbb{R}^{m \times n} $ onto $ \mathcal{D} $ is simple:

$$ X = \operatorname{Proj}_{\mathcal{D}} \left( Y \right) = \operatorname{diag} \left( Y \right) $$

Namely, just zeroing all off-diagonal elements of $ Y $.
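In MATLAB (using a placeholder matrix mY) this projection is a one-liner; note that diag() applied to a matrix extracts its diagonal as a vector, hence it is applied twice:

mX = diag(diag(mY)); %<! Zero all off-diagonal elements of mY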

Hence one can solve the above problem by Projected Gradient Descent, projecting the iterate onto the set of diagonal matrices after each gradient step.

The algorithm is:

$$ \begin{align*} \tilde{S}^{k + 1} & = {S}^{k} - \alpha {A}^{T} \left( A {S}^{k} {B}^{T} - C \right) B \\ {S}^{k + 1} & = \operatorname{Proj}_{\mathcal{D}} \left( \tilde{S}^{k + 1} \right) \\ \end{align*} $$

The code:

mAA     = mA.' * mA;            %<! A^T A
mBB     = mB.' * mB;            %<! B^T B
mAyb    = mA.' * mC * mB;       %<! A^T C B

mS          = mAA \ mAyb / mBB; %<! Initialization by the Least Squares Solution
vS          = diag(mS);         %<! Projection onto the set of diagonal matrices
mS          = diag(vS);
vObjVal(1)  = hObjFun(vS);

for ii = 2:numIterations

    % Gradient Step: A^T (A S B^T - C) B
    mG = (mAA * mS * mBB) - mAyb;
    mS = mS - (stepSize * mG);

    % Projection Step (zero the off-diagonal elements)
    vS          = diag(mS);
    mS          = diag(vS);

    vObjVal(ii) = hObjFun(vS);
end

(Plot of the objective value per iteration.)

Solution with Diagonal Structure

The problem can be written as:

$$ \arg \min_{s} f \left( s \right) = \arg \min_{s} \frac{1}{2} \left\| A \operatorname{diag} \left( s \right) {B}^{T} - C \right\|_{F}^{2} = \arg \min_{s} \frac{1}{2} \left\| \sum_{i} {s}_{i} {a}_{i} {b}_{i}^{T} - C \right\|_{F}^{2} $$

where $ {a}_{i} $ and $ {b}_{i} $ are the $ i $-th columns of $ A $ and $ B $, respectively, and $ {s}_{i} $ is the $ i $-th element of the vector $ s $.

The derivative is given by:

$$ \frac{d}{d {s}_{j}} f \left( s \right) = {a}_{j}^{T} \left( \sum_{i} {s}_{i} {a}_{i} {b}_{i}^{T} - C \right) {b}_{j} $$

Note to readers: if you know how to vectorize this structure, namely how to write the derivative such that the output is a vector of the same size as $ s $, please add it (a candidate form is sketched below).

By setting the derivative to zero, or by using Gradient Descent, one can find the optimal solution.
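A candidate vectorized form (a sketch; it follows from the component expression above, since the $ j $-th entry of the gradient is the $ \left( j, j \right) $ element of $ {A}^{T} \left( \cdot \right) B $) is:

$$ \frac{d}{d s} f \left( s \right) = \operatorname{diag} \left( {A}^{T} \left( A \operatorname{diag} \left( s \right) {B}^{T} - C \right) B \right) $$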

The code:

mS          = mAA \ mAyb / mBB; %<! Initialization by the Least Squares Solution
vS          = diag(mS);         %<! Only the diagonal elements are optimized
vObjVal(1)  = hObjFun(vS);

vG = zeros([numColsA, 1]);      %<! Gradient vector (numColsA - number of columns of A / B)

for ii = 2:numIterations

    % Gradient per element: d f / d s_j = a_j^T (A diag(s) B^T - C) b_j
    for jj = 1:numColsA
        vG(jj) = mA(:, jj).' * ((mA * diag(vS) * mB.') - mC) * mB(:, jj);
    end

    vS = vS - (stepSize * vG); %<! Gradient Step

    vObjVal(ii) = hObjFun(vS);
end
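As a side remark (a sketch using the same variables as above), the inner jj loop can be replaced by a single vectorized expression based on the formula noted earlier:

mR = (mA * diag(vS) * mB.') - mC; %<! Residual matrix A diag(s) B^T - C
vG = diag(mA.' * mR * mB);        %<! Equivalent to the jj loop above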

(Plot of the objective value per iteration.)

Remark
Each element has a closed-form expression given the others, so the problem can also be solved by iterating the following coordinate-wise update:

$$ {s}_{j} = \frac{ {a}_{j}^{T} C {b}_{j} - {a}_{j}^{T} \left( \sum_{i \neq j} {s}_{i} {a}_{i} {b}_{i}^{T} \right) {b}_{j} }{ { \left\| {a}_{j} \right\| }_{2}^{2} { \left\| {b}_{j} \right\| }_{2}^{2} } $$
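A minimal sketch of one sweep of this coordinate-wise update (using the same variable names as in the code above):

for jj = 1:numColsA
    mRj    = (mA * diag(vS) * mB.') - (vS(jj) * mA(:, jj) * mB(:, jj).'); %<! Sum over i ~= jj
    vS(jj) = (mA(:, jj).' * (mC - mRj) * mB(:, jj)) / ...
             ((mA(:, jj).' * mA(:, jj)) * (mB(:, jj).' * mB(:, jj)));     %<! Closed form update of s_j
end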

Summary

Both methods work and converge to the optimal value (validated against CVX), since the problem above is convex.

The full MATLAB code with CVX validation is available in my StackExchange Mathematics Q2421545 GitHub Repository.


Closed Form Solution

Writing the data term as $ X U S V^{T} \approx Y $ (which corresponds to $ A = X U $, $ B = V $, $ C = Y $ in the notation above), the closed-form solution to this problem is

$$S = {\rm Diag}\bigg(\Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big)\bigg)$$

which is derived as follows.

For typing convenience, define the matrices

$$\eqalign{ S &= {\rm Diag}(s) \\ A &= XUSV^T-Y \\ }$$

(here $ A $ denotes the residual matrix, not the matrix from the previous sections).

Write the problem in terms of these new variables, then calculate its gradient:

$$\eqalign{ \phi &= \|A\|^2_F = A:A \\ d\phi &= 2A:dA \\ &= 2A:XU\,dS\,V^T \\ &= 2U^TX^TAV:dS \\ &= 2U^TX^TAV:{\rm Diag}(ds) \\ &= 2\,{\rm diag}\Big(U^TX^TAV\Big):ds \\ &= 2\,{\rm diag}\Big(U^TX^T(XUSV^T-Y)V\Big):ds \\ \frac{\partial\phi}{\partial s} &= 2\,{\rm diag}\Big(U^TX^T(XUSV^T-Y)V\Big) \\ }$$

Set the gradient to zero and solve for the optimal vector (using $ V^{T} V = I $ in the third line):

$$\eqalign{ {\rm diag}\Big(U^TX^TYV\Big) &= {\rm diag}\Big(U^TX^TXU\;{\rm Diag}(s)\;V^TV\Big) \\ &= \Big(V^TV\odot U^TX^TXU\Big)\,s \\ &= \Big(I\odot U^TX^TXU\Big)\,s \\ s &= \Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big) \\ S &= {\rm Diag}\bigg(\Big(I\odot U^TX^TXU\Big)^{-1}{\rm diag}\Big(U^TX^TYV\Big)\bigg) \\ }$$

In the steps above, the symbol $(\odot)$ denotes the elementwise/Hadamard product and $(:)$ denotes the trace/Frobenius product, i.e.

$$\eqalign{ A:B = {\rm Tr}(A^TB)}$$

Finally, the ${\rm diag}()$ function returns the main diagonal of a matrix as a column vector, while the ${\rm Diag}()$ function creates a diagonal matrix from its vector argument.
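A minimal MATLAB sketch of this closed form (the variable names mX, mU, mV, mY are placeholders, and it assumes $ V^{T} V = I $ as in the derivation above):

mT = mU.' * (mX.' * mX) * mU;                 %<! U^T X^T X U
vS = diag(mU.' * mX.' * mY * mV) ./ diag(mT); %<! (I o U^T X^T X U)^{-1} diag(U^T X^T Y V)
mS = diag(vS);                                %<! S = Diag(s)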