How is the null space related to singular value decomposition?
Summary
Computing the full form of the singular value decomposition (SVD) generates orthonormal bases for the null spaces $\color{red}{\mathcal{N} \left( \mathbf{A} \right)}$ and $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}$.
Fundamental Theorem of Linear Algebra
A matrix $\mathbf{A} \in \mathbb{C}^{m\times n}_{\rho}$, where the subscript $\rho$ denotes the rank, induces four fundamental subspaces: the range and null space of $\mathbf{A}$, and the range and null space of $\mathbf{A}^{*}$. They decompose the domain and codomain as $$ \begin{align} \mathbb{C}^{n} &= \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A} \right)} \\ \mathbb{C}^{m} &= \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} \oplus \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} \end{align} $$
The singular value decomposition provides an orthonormal basis for the four fundamental subspaces.
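As a quick numerical illustration of these direct sums, here is a minimal numpy sketch, using the $4 \times 2$ matrix from the example later on this page: it splits an arbitrary vector of $\mathbb{R}^{4}$ into its $\color{blue}{\mathcal{R} \left( \mathbf{A} \right)}$ and $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}$ components.
import numpy as np
# Sketch: verify C^m = R(A) (+) N(A*) numerically for the example matrix.
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
U, s, Vh = np.linalg.svd(A, full_matrices=True)
r = int(np.sum(s > 1e-12))                 # numerical rank
Ur, Un = U[:, :r], U[:, r:]                # bases for R(A) and N(A*)
x = np.random.default_rng(0).standard_normal(4)
x_range = Ur @ (Ur.T @ x)                  # component in R(A)
x_null = Un @ (Un.T @ x)                   # component in N(A*)
print(np.allclose(x, x_range + x_null))    # True: the components sum to x
print(np.isclose(x_range @ x_null, 0.0))   # True: the components are orthogonal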
Singular Value Decomposition
Every nonzero matrix can be expressed as the matrix product $$ \begin{align} \mathbf{A} &= \mathbf{U} \, \Sigma \, \mathbf{V}^{*} \\ &= \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] \left[ \begin{array}{ccc|ccc} \sigma_{1} & & & & & \\ & \ddots & & & & \\ & & \sigma_{\rho} & & & \\\hline & & & 0 & & \\ & & & & \ddots & \\ & & & & & 0 \\ \end{array} \right] \left[ \begin{array}{c} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ \color{red}{\mathbf{V}_{\mathcal{N}}}^{*} \end{array} \right] \\ &= \left[ \begin{array}{cccccc} \color{blue}{u_{1}} & \dots & \color{blue}{u_{\rho}} & \color{red}{u_{\rho+1}} & \dots & \color{red}{u_{m}} \end{array} \right] \left[ \begin{array}{cc} \mathbf{S}_{\rho\times \rho} & \mathbf{0} \\ \mathbf{0} & \mathbf{0} \end{array} \right] \left[ \begin{array}{c} \color{blue}{v_{1}^{*}} \\ \vdots \\ \color{blue}{v_{\rho}^{*}} \\ \color{red}{v_{\rho+1}^{*}} \\ \vdots \\ \color{red}{v_{n}^{*}} \end{array} \right] \end{align} $$
The column vectors of $\mathbf{U}$ form an orthonormal basis for $\mathbb{C}^{m}$ (the column-space side), while the column vectors of $\mathbf{V}$ form an orthonormal basis for $\mathbb{C}^{n}$ (the row-space side).
The $\rho$ singular values are real and ordered (descending): $$ \sigma_{1} \ge \sigma_{2} \ge \dots \ge \sigma_{\rho}>0. $$ These singular values form the diagonal matrix of singular values $$ \mathbf{S} = \text{diagonal} \left( \sigma_{1},\sigma_{2},\dots,\sigma_{\rho} \right) \in\mathbb{R}^{\rho\times\rho}. $$ The $\mathbf{S}$ matrix is embedded in the sabot matrix $\Sigma\in\mathbb{R}^{m\times n}$, whose shape ensures conformability.
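If you want to build $\Sigma$ programmatically, here is a small numpy sketch; the helper name `sabot` is hypothetical, chosen only to echo the terminology above.
import numpy as np
def sabot(singular_values, m, n):
    # Embed diag(singular_values) in the upper-left block of an m x n zero matrix.
    Sigma = np.zeros((m, n))
    k = len(singular_values)
    Sigma[:k, :k] = np.diag(singular_values)
    return Sigma
# A 4 x 2 sabot matrix holding two singular values, as in the example below.
print(sabot([4.2674, 1.9465], 4, 2))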
Note that the singular values correspond only to the $\color{blue}{\text{range}}$ space vectors.
The column vectors provide orthonormal bases for the four subspaces: $$ \begin{align} % R A \color{blue}{\mathcal{R} \left( \mathbf{A} \right)} &= \text{span} \left\{ \color{blue}{u_{1}}, \dots , \color{blue}{u_{\rho}} \right\} \\ % R A* \color{blue}{\mathcal{R} \left( \mathbf{A}^{*} \right)} &= \text{span} \left\{ \color{blue}{v_{1}}, \dots , \color{blue}{v_{\rho}} \right\} \\ % N A* \color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)} &= \text{span} \left\{ \color{red}{u_{\rho+1}}, \dots , \color{red}{u_{m}} \right\} \\ % N A \color{red}{\mathcal{N} \left( \mathbf{A} \right)} &= \text{span} \left\{ \color{red}{v_{\rho+1}}, \dots , \color{red}{v_{n}} \right\} \\ % \end{align} $$
The conclusion is that the full SVD provides orthonormal bases not only for the two null spaces, but also for both range spaces.
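In code, these four spans are just column slices of $\mathbf{U}$ and $\mathbf{V}$ at the rank index. A minimal numpy sketch, assuming a real matrix so that $\mathbf{A}^{*} = \mathbf{A}^{T}$:
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
U, s, Vh = np.linalg.svd(A, full_matrices=True)   # numpy returns V^H, not V
V = Vh.T
r = int(np.sum(s > 1e-12))
range_A = U[:, :r]        # orthonormal basis for R(A)
null_A_star = U[:, r:]    # orthonormal basis for N(A*)
range_A_star = V[:, :r]   # orthonormal basis for R(A*)
null_A = V[:, r:]         # orthonormal basis for N(A); empty here since rank = n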
Example
Since there is some misunderstanding in the original question, let's sketch the construction of the SVD.
From your data $$ \mathbf{A} = \left[ \begin{array}{rr} 1 & 3 \\ 1 & 2 \\ 1 & -1 \\ 2 & 1 \end{array} \right], $$ we have $2$ nonzero singular values. Therefore the rank $\rho = 2 = n$. From this, we know the form of the SVD: $$ \mathbf{A} = % U \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] % Sigma \left[ \begin{array}{c} \mathbf{S} \\ \mathbf{0} \\ \end{array} \right] % V \left[ \begin{array}{c} \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \end{array} \right] $$ That is, because $\rho = n$, the null space $\color{red}{\mathcal{N} \left( \mathbf{A} \right)}$ is trivial.
Construct the matrix $\Sigma$:
Form the product matrix, and compute the eigenvalue spectrum $$ \lambda \left( \mathbf{A}^{*} \mathbf{A} \right) = \lambda \left( \left[ \begin{array}{cc} 7 & 6 \\ 6 & 15 \\ \end{array} \right] \right) = \left\{ 11 + 2 \sqrt{13},11-2 \sqrt{13} \right\} $$ The singular values are the square roots of the ordered eigenvalues: $$ \sigma_{k} = \sqrt{\lambda_{k}},\qquad k = 1, \dots, \rho $$ Construct the diagonal matrix of singular values $\mathbf{S}$ and embed this into the sabot matrix $\Sigma$: $$ \mathbf{S} = \left[ \begin{array}{cc} \sqrt{11 + 2 \sqrt{13}} & 0 \\ 0 & \sqrt{11-2 \sqrt{13}} \end{array} \right], \qquad % \Sigma = \left[ \begin{array}{c} \mathbf{S} \\ \mathbf{0} \end{array} \right] = \left[ \begin{array}{cc} \sqrt{11+2 \sqrt{13}} & 0 \\ 0 & \sqrt{11-2 \sqrt{13}} \\\hline 0 & 0 \\ 0 & 0 \\ \end{array} \right] % $$
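As a numerical cross-check of this step, a sketch using numpy's symmetric eigensolver on the product matrix:
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
evals = np.linalg.eigvalsh(A.T @ A)[::-1]   # eigenvalues of A*A, descending
print(np.sqrt(evals))                       # ~ [4.2674, 1.9465]
print(np.linalg.svd(A, compute_uv=False))   # the same values from the SVD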
Construct the matrix $\mathbf{V}$:
Solve for the eigenvectors of the product matrix $\mathbf{A}^{*} \mathbf{A}$. They are $$ v_{1} = \color{blue}{\left[ \begin{array}{c} \frac{1}{3} \left(-2+\sqrt{13} \right) \\ 1 \end{array} \right]}, \qquad v_{2}= \color{blue}{\left[ \begin{array}{c} \frac{1}{3} \left(-2-\sqrt{13} \right) \\ 1 \end{array} \right]} $$
The normalized forms of these vectors become the columns of $\color{blue}{\mathbf{V}_{\mathcal{R}}}$:
$$ \color{blue}{\mathbf{V}_{\mathcal{R}}} = \left[ \begin{array}{cc} % v1 \frac{3}{\sqrt{26-4 \sqrt{13}}} \color{blue}{\left[ \begin{array}{c} \frac{1}{3} \left(-2+\sqrt{13} \right) \\ 1 \end{array} \right]} & % v2 \frac{3}{\sqrt{26+4 \sqrt{13}}} \color{blue}{\left[ \begin{array}{c} \frac{1}{3} \left(-2-\sqrt{13} \right) \\ 1 \end{array} \right]} \end{array} % \right] $$ Because the null space $\color{red}{\mathcal{N} \left( \mathbf{A} \right)}$ is trivial, $$ \mathbf{V} = \color{blue}{\mathbf{V}_{\mathcal{R}}} $$
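A quick floating-point check, sketched in numpy, that the normalization above indeed yields orthonormal columns:
import numpy as np
v1 = np.array([(-2 + np.sqrt(13)) / 3, 1.0])   # eigenvector for the larger eigenvalue
v2 = np.array([(-2 - np.sqrt(13)) / 3, 1.0])   # eigenvector for the smaller eigenvalue
V_R = np.column_stack([v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)])
print(np.allclose(V_R.T @ V_R, np.eye(2)))     # True: the columns are orthonormal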
Construct the matrix $\mathbf{U}$:
The thin SVD is $$ \begin{align} \mathbf{A} &= % U \color{blue}{\mathbf{U}_{\mathcal{R}}} % Sigma \mathbf{S} \, % V \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \end{align} $$ which can be solved as $$ \begin{align} \color{blue}{\mathbf{U}_{\mathcal{R}}} &= \mathbf{A} \color{blue}{\mathbf{V}_{\mathcal{R}}} \mathbf{S}^{-1} \\ %% &= \left[ \begin{array}{cc} \frac{1}{\sqrt{182 + 8\sqrt{13}}} \color{blue}{\left[ \begin{array}{r} 7 + \sqrt{13} \\ 4 + \sqrt{13} \\ -5 + \sqrt{13} \\ -1 + 2 \sqrt{13} \\ \end{array} \right] } & % \frac{1}{\sqrt{182 - 8\sqrt{13}}} \color{blue}{\left[ \begin{array}{r} 7 - \sqrt{13} \\ 4 - \sqrt{13} \\ -5 - \sqrt{13} \\ -1 - 2 \sqrt{13} \\ \end{array} \right] } \end{array} \right] %% \end{align} $$
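The same recipe, sketched in numpy: diagonalize $\mathbf{A}^{*}\mathbf{A}$, form $\mathbf{S}$ and $\mathbf{V}_{\mathcal{R}}$, then recover $\mathbf{U}_{\mathcal{R}}$ and confirm the thin SVD reconstructs $\mathbf{A}$ (column signs may differ from those of `np.linalg.svd`):
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
evals, evecs = np.linalg.eigh(A.T @ A)   # eigenpairs, ascending eigenvalues
order = np.argsort(evals)[::-1]          # reorder descending
S = np.diag(np.sqrt(evals[order]))
V_R = evecs[:, order]
U_R = A @ V_R @ np.linalg.inv(S)         # U_R = A V_R S^{-1}
print(np.allclose(U_R @ S @ V_R.T, A))   # True: the thin SVD reconstructs A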
The thin SVD is now complete. If you insist upon the full form of the SVD, we can compute the two missing null space vectors in $\mathbf{U}$ using the Gram-Schmidt process: start from any spanning set of $\color{red}{\mathcal{N} \left( \mathbf{A}^{*} \right)}$, say $(3,-4,1,0)^{T}$ and $(3,-5,0,1)^{T}$, and orthonormalize. One such result is $$ \mathbf{U} = \left[ \begin{array}{cccc} \frac{1}{\sqrt{182 + 8\sqrt{13}}} \color{blue}{\left[ \begin{array}{r} 7 + \sqrt{13} \\ 4 + \sqrt{13} \\ -5 + \sqrt{13} \\ -1 + 2 \sqrt{13} \\ \end{array} \right] } & % \frac{1}{\sqrt{182 - 8\sqrt{13}}} \color{blue}{\left[ \begin{array}{r} 7 - \sqrt{13} \\ 4 - \sqrt{13} \\ -5 - \sqrt{13} \\ -1 - 2 \sqrt{13} \\ \end{array} \right] } & \frac{1}{\sqrt{26}} \color{red}{\left[ \begin{array}{r} 3 \\ -4 \\ 1 \\ 0 \\ \end{array} \right] } & % \frac{1}{\sqrt{1794}} \color{red}{\left[ \begin{array}{r} -9 \\ -14 \\ -29 \\ 26 \\ \end{array} \right] } % \end{array} \right] $$
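One way to carry out that completion in code: append extra vectors to $\mathbf{U}_{\mathcal{R}}$ and orthonormalize with a QR factorization, which amounts to the Gram-Schmidt step. A sketch:
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
U_R, s, Vh = np.linalg.svd(A, full_matrices=False)      # thin SVD: U_R is 4 x 2
Q, _ = np.linalg.qr(np.column_stack([U_R, np.eye(4)]))  # orthonormalize [U_R | I]
print(np.allclose(Q.T @ Q, np.eye(4)))    # True: Q is a full orthonormal basis
print(np.allclose(A.T @ Q[:, 2:], 0.0))   # True: last two columns lie in N(A*)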
Conclusion
The singular values interact only with the first two columns of $\mathbf{U}$ and $\mathbf{V}$: the range-space vectors.
$$ \begin{align} \mathbf{A} &= % U \left[ \begin{array}{cc} \color{blue}{\mathbf{U}_{\mathcal{R}}} & \color{red}{\mathbf{U}_{\mathcal{N}}} \end{array} \right] % Sigma \left[ \begin{array}{c} \mathbf{S} \\ \mathbf{0} \\ \end{array} \right] % V \color{blue}{\mathbf{V}_{\mathcal{R}}}^{*} \\ &= % U \left[ \begin{array}{cccc} \color{blue}{\star} & \color{blue}{\star} & \color{red}{\star} & \color{red}{\star} \\ \color{blue}{\star} & \color{blue}{\star} & \color{red}{\star} & \color{red}{\star} \\ \color{blue}{\star} & \color{blue}{\star} & \color{red}{\star} & \color{red}{\star} \\ \color{blue}{\star} & \color{blue}{\star} & \color{red}{\star} & \color{red}{\star} \\ \end{array} \right] % S \left[ \begin{array}{cc} \sqrt{11+2 \sqrt{13}} & 0 \\ 0 & \sqrt{11-2 \sqrt{13}} \\\hline 0 & 0 \\ 0 & 0 \\ \end{array} \right] % V \left[ \begin{array}{cc} \color{blue}{\star} & \color{blue}{\star} \\ \color{blue}{\star} & \color{blue}{\star} \\ \end{array} \right] \end{align} $$
For an $m \times n$ matrix, where $m \ge n$, the "full" SVD is given by $$ A = U\Sigma V^{T} $$ where $U$ is an $m \times m$ matrix, $\Sigma$ is an $m \times n$ matrix and $V$ is an $n \times n$ matrix. You have calculated the "economical" version of the SVD, where $U$ is $m \times n$ and $\Sigma$ is $n \times n$. Thus, you have missed the information about the left null space carried by the "full" matrix $U$. The full SVD is given by $$ U = \left[ \begin{array}{cccc} -0.7304 & -0.2743 & -0.1764 & -0.6001 \\ -0.5238 & -0.0319 & 0.7303 & 0.4373\\ 0.0960 & 0.6954 & 0.4363 & -0.5629 \\ -0.4277 & 0.6635 & -0.4951 & 0.3629 \end{array} \right], $$
$$ \Sigma = \left[ \begin{array}{cc} 4.2674 & 0 \\ 0 & 1.9465 \\ 0 & 0 \\ 0 & 0 \end{array} \right], $$
$$ V = \left[ \begin{array}{cc} -0.4719 & 0.8817 \\ -0.8817 & -0.4719 \end{array} \right]. $$ If you need the null spaces then you should use the "full" SVD. However, most problems do not require the "full" SVD.
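The distinction is visible directly in the shapes. A numpy sketch (in MATLAB the analogous calls are `svd(A)` and `svd(A, 'econ')`):
import numpy as np
A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]], dtype=float)
U_full, s, Vh = np.linalg.svd(A, full_matrices=True)    # "full" SVD
U_econ, s, Vh = np.linalg.svd(A, full_matrices=False)   # "economical" SVD
print(U_full.shape, U_econ.shape)   # (4, 4) (4, 2): only the full U carries N(A*)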
[The OP is answered, but I'd like to include a quick, to-the-point reminder of sorts, or an observation on symmetries.]
1. Left null space:
$A= \begin{bmatrix} 1&3\\ 1&2\\ 1&-1\\ 2&1\\ \end{bmatrix}$
In the R statistical language:
A = matrix(c(1,1,1,2,3,2,-1,1), ncol = 2)
r = qr(A)$rank # Rank 2
SVD.A = svd(A, nu = nrow(A))
SVD.A$u # Extracting the matrix U from it...
$U=\begin{bmatrix}-0.73038560& -0.27428549& \color{red}{-0.1764270}& \color{red}{-0.6001482}\\ -0.52378089& -0.03187309& \color{red}{0.7303387}& \color{red}{0.4373135}\\ 0.09603322& 0.69536411& \color{red}{0.4362937}& \color{red}{-0.5629335}\\ -0.42774767& 0.66349102& \color{red}{-0.4951027}& \color{red}{0.3628841} \end{bmatrix}$
t.U.A = t(SVD.A$u)
(left_null = t.U.A[(r + 1):nrow(t.U.A),])
[,1] [,2] [,3] [,4]
[1,] -0.1764270 0.7303387 0.4362937 -0.4951027
[2,] -0.6001482 0.4373135 -0.5629335 0.3628841
colSums(left_null) %*% A # (row1 + row2) %*% A: numerically zero
Therefore,
$\left[\alpha\begin{bmatrix}-0.1764270\\ 0.7303387\\ 0.4362937\\ -0.4951027\end{bmatrix}^\top +\beta\begin{bmatrix}-0.6001482\\ 0.4373135\\ -0.5629335\\ 0.3628841\end{bmatrix}^\top\right]\; \begin{bmatrix} 1&3\\ 1&2\\ 1&-1\\ 2&1\\ \end{bmatrix} = \mathbf 0$
with $\alpha$ and $\beta$ being scalars.
2. Right null space:
Defining matrix $B$ as the transpose of $A$,
$B= \begin{bmatrix}1&1&1&2\\3&2&-1&1\end{bmatrix}$
B = t(A)
r = qr(B)$rank # Naturally it will also have rank 2.
SVD.B = svd(B, nv = ncol(B))
SVD.B$v # Extracting the matrix V from it...
$V = \begin{bmatrix}-0.73038560& -0.27428549& \color{red}{-0.1764270}& \color{red}{-0.6001482}\\ -0.52378089& -0.03187309& \color{red}{0.7303387}& \color{red}{0.4373135}\\ 0.09603322& 0.69536411& \color{red}{0.4362937}& \color{red}{-0.5629335}\\ -0.42774767& 0.66349102& \color{red}{-0.4951027}& \color{red}{0.3628841} \end{bmatrix}$
(right_null = SVD.B$v[ ,(r + 1):ncol(B)])
[,1] [,2]
[1,] -0.1764270 -0.6001482
[2,] 0.7303387 0.4373135
[3,] 0.4362937 -0.5629335
[4,] -0.4951027 0.3628841
B %*% rowSums(right_null) # B %*% (col1 + col2): numerically zero
Therefore,
$\begin{bmatrix}1&1&1&2\\3&2&-1&1\end{bmatrix}\;\left[\alpha\begin{bmatrix}-0.1764270\\ 0.7303387\\ 0.4362937\\ -0.4951027\end{bmatrix} +\beta\begin{bmatrix}-0.6001482\\ 0.4373135\\ -0.5629335\\ 0.3628841\end{bmatrix}\right] = \mathbf 0$ with $\alpha$ and $\beta$ being scalars.
In Matlab:
% Left null:
A = [1 3; 1 2; 1 -1; 2 1];
r = rank(A);                                % rank 2
[U, S, V] = svd(A);                         % svd returns the full U by default
left_null_A = transpose(U);
rows = (r + 1):size(left_null_A, 1);
left_null_A = left_null_A(rows, :)
(left_null_A(1,:) + left_null_A(2,:)) * A   % numerically zero
% Right null:
B = transpose(A);
r = rank(B);                                % also rank 2
[U, S, V] = svd(B);
right_null_B = transpose(V);
rows = (r + 1):size(right_null_B, 1);
right_null_B = transpose(right_null_B(rows, :))
B * (right_null_B(:,1) + right_null_B(:,2)) % numerically zero
---
In Python:
# Left null:
import numpy as np

A = np.array([[1, 3], [1, 2], [1, -1], [2, 1]])
rank = np.linalg.matrix_rank(A)                    # rank 2
U, s, Vh = np.linalg.svd(A, full_matrices=True)    # NumPy returns V^H, not V
t_U_A = U.T
left_null_A = t_U_A[rank:, :]                      # remaining rows span the left null space
left_null_A
np.dot(left_null_A[0, :] + left_null_A[1, :], A)   # numerically zero

# Right null:
B = A.T
rank = np.linalg.matrix_rank(B)
U, s, Vh = np.linalg.svd(B, full_matrices=True)
t_V_B = Vh.T                                       # columns are the right singular vectors
right_null_B = t_V_B[:, rank:]                     # remaining columns span the null space
right_null_B
np.dot(B, right_null_B[:, 0] + right_null_B[:, 1]) # numerically zero