Rank of Gram Matrices

I think there are two good ways to see this, so I will give two proofs.

1) If we are given two matrices $A$ and $B$, then the columns of $AB$ are linear combinations of the columns of $A$, and the rows of $AB$ are linear combinations of the rows of $B$. This follows immediately from block multiplication $$AB = \begin{pmatrix} A\mathbf{b_1} & \cdots & A\mathbf{b_n}\end{pmatrix} = \begin{pmatrix} \mathbf{a_1}^\mathrm{T}B \\ \vdots \\ \mathbf{a_m}^\mathrm{T}B\end{pmatrix}$$ where $\mathbf{a_i}$ and $\mathbf{b_i}$ denote the row vectors of $A$ and the column vectors of $B$ respectively. From this, we can see that the columns of $A^\mathrm{T}A$ are linear combinations of the columns of $A^\mathrm{T}$, i.e. the rows of $A$. Likewise, the columns of $AA^\mathrm{T}$ are linear combinations of the columns of $A$. Since the row rank and column rank of a matrix coincide, this immediately gives $\mathrm{rank}(A^\mathrm{T}A) \le \mathrm{rank}(A)$ and $\mathrm{rank}(AA^\mathrm{T}) \le \mathrm{rank}(A)$; the reverse inequalities, and hence the equality of all three ranks, follow from the kernel lemma in the second proof.
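As a quick numerical sanity check (a NumPy sketch, not part of the proof), one can verify that all three ranks coincide for a random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # a random 5x3 matrix, rank 3 almost surely

r = np.linalg.matrix_rank
# rank(A), rank(A^T A), and rank(A A^T) all agree,
# even though A^T A is 3x3 and A A^T is 5x5
print(r(A), r(A.T @ A), r(A @ A.T))  # 3 3 3
```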

2) For the second solution, we use a very general and useful trick for showing that a vector is $\mathbf{0}$. Notice that $\mathbf{x} = \mathbf{0} \iff \|\mathbf{x}\| = 0$ and that $\mathbf{x}^\mathrm{T}\mathbf{x} = \|\mathbf{x}\|^2$. We exploit these two facts.

Lemma: $\ker(A) = \ker(A^\mathrm{T}A)$

Proof: The forward inclusion is easy. If we have $$A\mathbf{x} = \mathbf{0}$$ then we immediately have $$A^\mathrm{T}A\mathbf{x} = \mathbf{0}$$ so that $\ker(A) \subseteq \ker(A^\mathrm{T}A)$. For the backwards inclusion, suppose that we have $$A^\mathrm{T}A\mathbf{x} = \mathbf{0}$$ We pre-multiply by $\mathbf{x}^\mathrm{T}$ to get $$\mathbf{x}^\mathrm{T}A^\mathrm{T}A\mathbf{x} = (A\mathbf{x})^\mathrm{T}(A\mathbf{x}) = \|A\mathbf{x}\|^2 = 0$$ Since only the zero vector has zero norm, it must follow that $A\mathbf{x} = \mathbf{0}$. Therefore we also have $\ker(A^\mathrm{T}A) \subseteq \ker(A)$, and the lemma follows. $\square$

From this, we have $\mathrm{nullity}(A) = \mathrm{nullity}(A^\mathrm{T}A)$. Since $A$ and $A^\mathrm{T}A$ have the same number of columns, the rank-nullity theorem then gives $\mathrm{rank}(A) = \mathrm{rank}(A^\mathrm{T}A)$. Applying the same result to $A^\mathrm{T}$ gives $\mathrm{rank}(A^\mathrm{T}) = \mathrm{rank}(AA^\mathrm{T})$. The final result follows by noting $\mathrm{rank}(A) = \mathrm{rank}(A^\mathrm{T})$.
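The nullity equality is easy to check numerically. The sketch below (using NumPy, with a hypothetical `nullity` helper that counts small singular values) builds a deliberately rank-deficient matrix and confirms that $A$ and $A^\mathrm{T}A$ have the same nullity:

```python
import numpy as np

rng = np.random.default_rng(1)
# A 6x4 matrix of rank 2 (product of 6x2 and 2x4 random factors)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

def nullity(M, tol=1e-10):
    """dim ker(M): number of columns minus the number of significant singular values."""
    s = np.linalg.svd(M, compute_uv=False)
    return M.shape[1] - int(np.sum(s > tol))

# Both nullities are 2, so by rank-nullity both ranks are 4 - 2 = 2.
print(nullity(A), nullity(A.T @ A))  # 2 2
```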

If anything is unclear, or if you would like to know the logic behind any of the steps, please do not hesitate to ask me.


Excellent answers from @Euyu! I would like to add another proof. Assume $A = U\Sigma V^{T}$ is the singular value decomposition (SVD) of $A$. Then $A^{T}A = V\Sigma^{T}\Sigma V^{T}$ and $AA^{T} = U\Sigma\Sigma^{T}U^{T}$ (substitute the SVD and use $U^{T}U = I$ and $V^{T}V = I$). So clearly, the nonzero eigenvalues of the symmetric matrices $AA^{T}$ and $A^{T}A$ are the squares of the nonzero singular values of $A$. Hence they have the same number of nonzero eigenvalues, and since the rank of a symmetric matrix equals its number of nonzero eigenvalues, their ranks must be the same.
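A small NumPy sketch of this observation, comparing the squared singular values of a random $A$ with the eigenvalues of $A^{T}A$ (sorted into the same descending order):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

s = np.linalg.svd(A, compute_uv=False)    # singular values of A, descending
eig = np.linalg.eigvalsh(A.T @ A)[::-1]   # eigenvalues of A^T A, made descending

# The eigenvalues of A^T A are exactly the squared singular values of A.
print(np.allclose(s**2, eig))  # True
```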