Looking for an intuitive explanation of why the row rank is equal to the column rank of a matrix

You can apply elementary row operations and elementary column operations to bring an $n\times m$ matrix $A$ to a matrix that is in both row reduced echelon form and column reduced echelon form. In other words, there exist invertible matrices $P$ and $Q$ (which are products of elementary matrices) such that $$PAQ=E:=\begin{pmatrix}I_k&0\\0&0\end{pmatrix}.$$ As $P$ and $Q$ are invertible, the maximum number of linearly independent rows in $A$ is equal to the maximum number of linearly independent rows in $E$. That is, the row rank of $A$ is equal to the row rank of $E$. Similarly for the column ranks. Now it is evident that the row rank and the column rank of $E$ are both equal to $k$. Hence the same holds for $A$.
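As a quick numerical sanity check (a NumPy sketch; $P$, $Q$, and the dimensions are made up, and I use hand-picked invertible matrices rather than explicit products of elementary ones), one can build $A$ from such a normal form $E$ and verify that both ranks equal $k$:

```python
import numpy as np

# Normal form E: an identity block of size k = 2 inside a 3x4 zero matrix
k, n, m = 2, 3, 4
E = np.zeros((n, m))
E[:k, :k] = np.eye(k)

# Hand-picked invertible matrices (unit triangular, so det = 1)
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
Q = np.array([[1., 0., 0., 0.],
              [2., 1., 0., 0.],
              [0., 3., 1., 0.],
              [1., 0., 1., 1.]])

# Recover A from PAQ = E, i.e. A = P^{-1} E Q^{-1}
A = np.linalg.inv(P) @ E @ np.linalg.inv(Q)

# Row rank and column rank of A both equal k = 2
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # 2 2
```

Multiplying by the invertible $P$ and $Q$ cannot change either rank, which is exactly what the printout confirms.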


This post is quite old, so my answer might come a bit late. If you are looking for an intuition (you want to "get it") rather than a demonstration (of which there are several), then here is my 5c.

If you think of a matrix $A$ in the context of solving a system of simultaneous equations, then the row rank of the matrix is the number of independent equations, and the column rank is the number of independent parameters that you can estimate from the equations. That, I think, makes it a bit easier to see why they should be equal.
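As a toy illustration of this intuition (a NumPy sketch; the system is made up), take three equations of which the second is just twice the first: only two equations are independent, and correspondingly only two independent parameter combinations can be estimated:

```python
import numpy as np

# Three equations in three unknowns, but the second row is twice the
# first, so only two equations are independent (row rank 2).
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 1., 1.]])

print(np.linalg.matrix_rank(A))    # independent equations: 2
print(np.linalg.matrix_rank(A.T))  # estimable parameter combinations: 2
```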

Saad.


One way to view the rank $r$ of an $n\times m$ matrix $A$ with entries in a field $K$ is that it is the smallest number such that one can factor the linear map $f_A:K^m\to K^n$ corresponding to $A$ through an intermediate space of dimension $r$, in other words, as a composition $K^m\to K^r\to K^n$ (taking $C$ and $B$ as the matrices corresponding to the two steps, this means that one has a decomposition $A=BC$ of $A$ as the product of an $n\times r$ and an $r\times m$ matrix). Now one can always factor $f_A$ through the image $\operatorname{Im}f_A\subseteq K^n$, as $K^m\to\operatorname{Im}f_A\hookrightarrow K^n$; on the other hand, the image can never have dimension larger than that of a space through which $f_A$ factors. Therefore the rank is equal to $\dim\operatorname{Im}f_A$. But that dimension is equal to the maximal number of independent columns of $A$, its column rank.
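This factorization through the image can be made concrete (a NumPy sketch with a made-up matrix): take $B$ to consist of a maximal independent set of columns of $A$, so that $A=BC$ passes through $K^r$ with $r=\dim\operatorname{Im}f_A$:

```python
import numpy as np

# A 3x3 matrix whose third column is the sum of the first two: rank 2
A = np.array([[1., 0., 1.],
              [2., 1., 3.],
              [3., 1., 4.]])

# B: a basis of the column space (the first two columns are independent)
B = A[:, :2]                               # 3x2, the map K^2 -> K^3
# C: coordinates of every column of A in that basis (least squares is
# exact here, since each column of A lies in the column space of B)
C, *_ = np.linalg.lstsq(B, A, rcond=None)  # 2x3, the map K^3 -> K^2

# A factors as B @ C through a space of dimension r = 2
print(np.allclose(B @ C, A))  # True
```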

On the other hand, one can view the rows of $A$ as linear functions on $K^m$ that describe the coordinates of $f_A(x)$ as functions of $x$, and the row rank $s$ is the maximum number of independent such functions. Once such a set of $s$ independent rows is chosen, the remaining coordinates of $f_A(x)$ can each be described by a fixed linear combination of the chosen coordinates (because their rows are such linear combinations of the chosen rows). But this means that one can factor $f_A$ through $K^s$, with the map $K^s\to K^n$ reconstructing the dependent coordinates. The chosen coordinates are independent, so there is no nontrivial relation between them, and the map $K^m\to K^s$ is therefore surjective. This means that $f_A$ cannot factor through a space of dimension smaller than $s$, so the row rank $s$ is also equal to the rank of $A$.
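The same row-based factorization can be written out numerically (a NumPy sketch; the row dependencies are chosen by hand). Here the third row is $r_1+2r_2$, so $f_A$ factors as the $s\times m$ matrix of chosen rows followed by an $n\times s$ matrix reconstructing the dependent coordinate:

```python
import numpy as np

# Rows: r3 = r1 + 2*r2, so the row rank is s = 2
A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 2., 3.]])

R = A[:2, :]                 # 2x3: the chosen independent rows, K^3 -> K^2
L = np.array([[1., 0.],      # 3x2: each row of A written as a
              [0., 1.],      # combination of the chosen rows,
              [1., 2.]])     # the reconstruction map K^2 -> K^3

print(np.allclose(L @ R, A))  # True: f_A factors through K^2
```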

Instead of that separate argument involving the row rank, you can also interpret the row rank of $A$ as the column rank of the transpose matrix $A^t$. Now one can factor $A=BC$ if and only if one can factor $A^t=C^tB^t$; hence the minimal $r$ such that one can write $A=BC$ with $B\in M_{n,r}$ and $C\in M_{r,m}$ (the column rank of $A$) obviously equals the minimal $r$ such that one can write $A^t=C^tB^t$ with $C^t\in M_{m,r}$ and $B^t\in M_{r,n}$ (the column rank of $A^t$, i.e. the row rank of $A$).
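Numerically (a NumPy sketch with a made-up factorization), transposing a factorization $A=BC$ indeed yields one for $A^t$ through the same intermediate space, and the two ranks agree:

```python
import numpy as np

# A rank-2 factorization A = B C (made-up matrices)
B = np.array([[1., 0.],
              [2., 1.],
              [3., 1.]])        # 3x2
C = np.array([[1., 0., 1.],
              [0., 1., 1.]])    # 2x3
A = B @ C

# Transposing gives a factorization of A^t through the same K^2
print(np.allclose(A.T, C.T @ B.T))                              # True
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T))   # True
```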