Proximal Operator and the Derivative of the Matrix Nuclear Norm
As I said in my comment, in a convex optimization setting, one would normally not use the derivative/subgradient of the nuclear norm function. It is, after all, nondifferentiable, and as such cannot be used in standard descent approaches (though I suspect some people have probably applied semismooth methods to it).
Here are two alternate approaches for "handling" the nuclear norm.
Semidefinite programming. We can use the following identity: the nuclear norm inequality $\|X\|_*\leq y$ is satisfied if and only if there exist symmetric matrices $W_1$, $W_2$ satisfying
$$\begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0, ~ \mathop{\textrm{Tr}}W_1 + \mathop{\textrm{Tr}}W_2 \leq 2 y$$
Here, $\succeq 0$ should be interpreted to mean that the $2\times 2$ block matrix is positive semidefinite. Because of this transformation, you can handle nuclear norm minimization or upper bounds on the nuclear norm in any semidefinite programming setting.
For instance, given some equality constraints $\mathcal{A}(X)=b$ where $\mathcal{A}$ is a linear operator, you could do this:
$$\begin{array}{ll}
\text{minimize} & \|X\|_* \\
\text{subject to} & \mathcal{A}(X)=b \end{array}
\quad\Longleftrightarrow\quad
\begin{array}{ll}
\text{minimize} & \tfrac{1}{2}\left( \mathop{\textrm{Tr}}W_1 + \mathop{\textrm{Tr}}W_2 \right) \\
\text{subject to} & \begin{bmatrix} W_1 & X \\ X^T & W_2 \end{bmatrix} \succeq 0 \\ & \mathcal{A}(X)=b \end{array}
$$
My software CVX uses this transformation to implement the function norm_nuc, but any semidefinite programming software can handle this. One downside to this method is that semidefinite programming can be expensive; and if $m\ll n$ or $n\ll m$, that expense is exacerbated, since the size of the linear matrix inequality is $(m+n)\times (m+n)$.
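For concreteness, here is a minimal sketch of this lifted formulation written with CVXPY (this is not the CVX norm_nuc implementation; the constraints $\mathcal{A}(X)=b$ are modeled, purely for illustration, as $Xc=b$ for a fixed vector $c$):

```python
import cvxpy as cp
import numpy as np

m, n = 5, 4
c = np.random.randn(n)      # illustrative linear operator: A(X) = X c
b = np.random.randn(m)

# One PSD variable holds the whole block matrix [[W1, X], [X^T, W2]].
Z = cp.Variable((m + n, m + n), PSD=True)
W1, X, W2 = Z[:m, :m], Z[:m, m:], Z[m:, m:]

prob = cp.Problem(
    cp.Minimize(0.5 * (cp.trace(W1) + cp.trace(W2))),
    [X @ c == b],           # stand-in for the equality constraints A(X) = b
)
prob.solve()
print(prob.value, np.linalg.norm(X.value, "nuc"))  # the two values should agree
```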
Projected/proximal gradients. Consider the following related problems: $$\begin{array}{ll} \text{minimize} & \|\mathcal{A}(X)-b\|_2^2 \\ \text{subject to} & \|X\|_*\leq \delta \end{array} \quad $$ $$\text{minimize} ~~ \|\mathcal{A}(X)-b\|_2^2+\lambda\|X\|_*$$ Both of these problems trace out tradeoff curves: as $\delta$ or $\lambda$ is varied, you generate a tradeoff between $\|\mathcal{A}(X)-b\|$ and $\|X\|_*$. In a very real sense, these problems are equivalent: for a fixed value of $\delta$, there is going to be a corresponding value of $\lambda$ that yields the exact same value of $X$ (at least on the interior of the tradeoff curve). So it is worth considering these problems together.
The first of these problems can be solved using a projected gradient approach. This approach alternates between gradient steps on the smooth objective and projections back onto the feasible set $\|X\|_*\leq \delta$. The projection step requires being able to compute $$\mathop{\textrm{Proj}}(Y) = \mathop{\textrm{arg min}}_{\{X\,|\,\|X\|_*\leq\delta\}} \| X - Y \|$$ which can be done at about the cost of a single SVD plus some $O(n)$ operations.
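Here is a hedged NumPy sketch of that projection (not the TFOCS proj_nuclear routine; the helper names are mine), using the standard route of projecting the vector of singular values onto the $\ell_1$-ball:

```python
import numpy as np

def project_l1_ball(s, delta):
    """Project a nonnegative vector s onto {v : v >= 0, sum(v) <= delta}."""
    if s.sum() <= delta:
        return s
    u = np.sort(s)[::-1]                         # sort descending
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u - (css - delta) / k > 0)[0][-1]
    theta = (css[rho] - delta) / (rho + 1)       # threshold so the sum equals delta
    return np.maximum(s - theta, 0.0)

def project_nuclear_ball(Y, delta):
    """Frobenius-norm projection of Y onto {X : ||X||_* <= delta}."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, delta)) @ Vt
```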
The second model can be solved using a proximal gradient approach, which is very closely related to projected gradients. In this case, you alternate between taking gradient steps on the smooth portion, followed by an evaluation of the proximal function $$\mathop{\textrm{Prox}}(Y) = \mathop{\textrm{arg min}}_X \|X\|_* + \tfrac{1}{2}t^{-1}\|X-Y\|^2$$ where $t$ is a step size. This function can also be computed with a single SVD and some thresholding. It's actually easier to implement than the projection. For that reason, the proximal model is preferable to the projection model. When you have the choice, solve the easier model!
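And a similar sketch of the prox (again, not the TFOCS prox_nuclear routine): it is just soft-thresholding of the singular values by $t$.

```python
import numpy as np

def svd_soft_threshold(Y, t):
    """argmin_X ||X||_* + (1/(2t)) ||X - Y||_F^2: shrink each singular value by t."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

With step size $t$ and weight $\lambda$, one proximal gradient iteration on the penalized problem would then read X = svd_soft_threshold(X - t * grad_smooth(X), t * lam), where grad_smooth is a placeholder for the gradient of $\|\mathcal{A}(X)-b\|_2^2$.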
I would encourage you to do a literature search on proximal gradient methods, and nuclear norm problems in particular. There is actually quite a bit of work out there on this. For example, these lecture notes by Laurent El Ghaoui at Berkeley talk about the proximal gradient method and introduce the prox function for nuclear norms. My software TFOCS includes both the nuclear norm projection and the prox function. You do not have to use this software, but you could look at the implementations of prox_nuclear and proj_nuclear for some hints.
Start with the SVD of $x$:
$$x=U\Sigma V^T$$
Then $$\|x\|_*=tr(\sqrt{x^Tx})=tr(\sqrt{(U\Sigma V^T)^T(U\Sigma V^T)})$$
$$\Rightarrow \|x\|_*=tr(\sqrt{V\Sigma U^T U\Sigma V^T})=tr(\sqrt{V\Sigma^2V^T})$$
Since $V$ is orthogonal and the diagonal entries of $\Sigma$ are non-negative, $\sqrt{V\Sigma^2V^T}=V\Sigma V^T$; then, by the circularity of the trace:
$$\Rightarrow \|x\|_*=tr(V\Sigma V^T)=tr(\Sigma V^TV)=tr(\Sigma)$$
Therefore the nuclear norm can also be defined as the sum of the singular values of the input matrix.
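A quick numerical check of this identity, using scipy.linalg.sqrtm for the matrix square root:

```python
import numpy as np
from scipy.linalg import sqrtm

x = np.random.randn(6, 4)
s = np.linalg.svd(x, compute_uv=False)

print(np.real(np.trace(sqrtm(x.T @ x))))  # tr(sqrt(x^T x))
print(s.sum())                            # sum of the singular values
print(np.linalg.norm(x, "nuc"))           # NumPy's nuclear norm; all three agree
```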
Now, note that, like the absolute value function, the nuclear norm is not differentiable at every point in its domain, but you can find a subgradient.
$$\frac{\partial \|x\|_*}{\partial x}=\frac{\partial tr(\Sigma)}{\partial x}=\frac{ tr(\partial\Sigma)}{\partial x}$$
You should find $\partial\Sigma$. Since $\Sigma$ is diagonal and (assuming $x$ has full rank) invertible, we can write $\partial\Sigma=\Sigma\Sigma^{-1}\partial\Sigma$. Now we have:
$$\frac{\partial \|x\|_*}{\partial x}=\frac{ tr(\Sigma\Sigma^{-1}\partial\Sigma)}{\partial x}\tag{I}$$
So we should find $\partial\Sigma$.
$x=U\Sigma V^T$, therefore: $$\partial x=\partial U\Sigma V^T+U\partial\Sigma V^T+U\Sigma\partial V^T$$
Therefore:
$$U\partial\Sigma V^T=\partial x-\partial U\Sigma V^T-U\Sigma\partial V^T$$
$$\Rightarrow U^TU\partial\Sigma V^TV=U^T\partial xV-U^T\partial U\Sigma V^TV-U^TU\Sigma\partial V^TV$$
$$\Rightarrow \partial\Sigma =U^T\partial xV-U^T\partial U\Sigma - \Sigma\partial V^TV$$
\begin{align} \Rightarrow tr(\partial\Sigma) &= tr(U^T\partial xV-U^T\partial U\Sigma - \Sigma\partial V^TV)\\ &= tr(U^T\partial xV)+tr(-U^T\partial U\Sigma - \Sigma\partial V^TV) \end{align}
You can show that $tr(-U^T\partial U\Sigma - \Sigma\partial V^TV)=0$ (hint: diagonal and antisymmetric matrices; a short argument is given below), therefore:
$$tr(\partial\Sigma) = tr(U^T\partial xV)$$
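To fill in that hint: differentiating $U^TU=I$ gives $\partial U^TU+U^T\partial U=0$, so $U^T\partial U$ is antisymmetric, and likewise $\partial V^TV$ is antisymmetric. For any antisymmetric $A$ and diagonal $D$,
$$tr(AD)=\sum_i A_{ii}D_{ii}=0$$
because the diagonal of an antisymmetric matrix is zero; taking $D=\Sigma$ gives $tr(U^T\partial U\Sigma)=tr(\Sigma\partial V^TV)=0$.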
By substitution into (I):
$$\frac{\partial \|x\|_*}{\partial x}= \frac{ tr(\partial\Sigma)}{\partial x} =\frac{ tr(U^T\partial xV)}{\partial x}=\frac{ tr(VU^T\partial x)}{\partial x}=(VU^T)^T$$
Therefore you can use $U V^T$ as the subgradient.
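A quick numerical sanity check of this subgradient at a generic (full-rank, square) $x$, where the nuclear norm is actually differentiable:

```python
import numpy as np

x = np.random.randn(5, 5)
U, s, Vt = np.linalg.svd(x)
G = U @ Vt                                    # candidate (sub)gradient U V^T

H = 1e-6 * np.random.randn(5, 5)              # small perturbation
lhs = np.linalg.norm(x + H, "nuc") - np.linalg.norm(x, "nuc")
rhs = np.trace(G.T @ H)                       # first-order prediction tr(G^T H)
print(lhs, rhs)                               # these should nearly agree
```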
Of course, $n:x\in M_{n,p}\rightarrow tr(\sqrt{x^Tx})$ is differentiable at any $x$ such that $x^Tx$ is invertible, that is, in the generic case when $n\geq p$ (if $n\leq p$, then consider $tr(\sqrt{xx^T})$ instead). The result of greg is correct; however, his proof is unclear, so I rewrite it here for convenience.
If $A$ is symmetric $>0$, then $f:A\rightarrow \sqrt{A}$ is a matrix function (cf. Higham's book on the subject); if $g$ is a matrix function and $\phi:A\rightarrow tr(g(A))$, then its derivative is $D\phi_A:K\rightarrow tr(g'(A)K)$. Let $A=x^Tx$. Thus $Dn_x:H\rightarrow tr(f'(A)(H^Tx+x^TH))=tr((f'(A)^Tx^T+f'(A)x^T)H)$. Then the gradient of $n$ is $\nabla(n)(x)=x(f'(A)+f'(A)^T)=2xf'(A)=x(x^Tx)^{-1/2}$, since $f'(A)$ is symmetric and $f'(A)=\tfrac{1}{2}A^{-1/2}$.
As Alt did, we can use the SVD and find $\nabla(n)(x)=U\Sigma (\Sigma^T\Sigma)^{-1/2}V^T$ (which equals $UV^T$ when $n=p$). I remind Alt that the diagonal entries of $\Sigma$ are $\geq 0$.
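As a sketch of a numerical check (using scipy.linalg.fractional_matrix_power for $(x^Tx)^{-1/2}$), the formula $x(x^Tx)^{-1/2}$ coincides with $UV^T$ from the thin SVD for a generic tall full-rank $x$:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

x = np.random.randn(6, 4)                         # n >= p, full rank almost surely
U, s, Vt = np.linalg.svd(x, full_matrices=False)  # thin SVD

G1 = x @ np.real(fractional_matrix_power(x.T @ x, -0.5))  # x (x^T x)^{-1/2}
G2 = U @ Vt                                               # U V^T from the thin SVD
print(np.allclose(G1, G2))                                # True
```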