Why do elementary row operations preserve linear dependence between matrix columns?

Let's start out from the standard basis $e_1,\dots,e_n$. Let $a_1,\dots,a_k$ be the column vectors of $A$.

Check that the row operation $r_i':=r_i+\lambda\,r_j$ corresponds to the basis transformation $e_j':=e_j-\lambda\,e_i$ (with all other basis vectors kept unchanged), that is, for a vector $v$ we have $$v=\sum_k\alpha_ke_k=\sum_k\alpha_k'e_k'$$ where the row operation is applied to the coordinate vector: $\begin{pmatrix}\alpha_1\\ \alpha_2\\ \vdots\end{pmatrix} \leadsto \begin{pmatrix}\alpha_1'\\ \alpha_2'\\ \vdots\end{pmatrix}$.
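Carrying out the check: the row operation on coordinates means $\alpha_i'=\alpha_i+\lambda\,\alpha_j$ and $\alpha_k'=\alpha_k$ for every $k\ne i$, and indeed
$$\sum_k\alpha_k'e_k'=(\alpha_i+\lambda\alpha_j)\,e_i+\alpha_j\,(e_j-\lambda\,e_i)+\sum_{k\ne i,j}\alpha_ke_k=\sum_k\alpha_ke_k=v\,.$$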

So, in this interpretation the column vectors all "stay" where they are in the $n$-dimensional space, but we keep on changing the basis. Of course, the vectors then stay (in-)dependent. The crucial point is that each elementary row operation is invertible, so applying (the inverse of) each step of the row transformation again yields a basis $e_1',\dots,e_n'$.
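To see the same thing concretely: performing a row operation on $A$ is left multiplication by an invertible matrix $E$, and a coefficient vector $c$ satisfies $Ac=0$ exactly when $EAc=0$, so the columns of $A$ and of $EA$ satisfy the same dependence relations. A minimal numerical sketch in Python/numpy (the particular matrix, the coefficients and the factor $5$ are just illustrative choices, not part of the argument above):

```python
import numpy as np

# Example matrix: the third column equals 2*(first column) + (second column),
# so the columns satisfy A @ c = 0 with c = (2, 1, -1).
A = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [3., 2., 8.]])
c = np.array([2., 1., -1.])
print(A @ c)          # [0. 0. 0.]  -- a dependence relation among the columns

# The row operation r_1 := r_1 + 5*r_3 (rows numbered from 1, as above)
# is left multiplication by the invertible elementary matrix E.
E = np.eye(3)
E[0, 2] = 5.0         # numpy indices are 0-based

# The same coefficients c still give a dependence relation for E @ A,
# since (E @ A) @ c = E @ (A @ c) = E @ 0 = 0.
print((E @ A) @ c)    # [0. 0. 0.]
```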


Here is a geometric way of looking at it (it's not a rigorous proof, but it gives a nice way of understanding, non-algebraically, why it's true): The geometric meaning of the determinant of a matrix is that it is the algebraic volume of the parallelepiped spanned by the columns (or rows) of the matrix (algebraic volume means that the volume can be negative, depending on the orientation of the parallelepiped). By volume here we mean $n$-dimensional volume. Thus, the determinant is $0$ iff the volume is $0$ iff the columns span a degenerate parallelepiped iff the columns (rows) are dependent.
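As a quick sanity check of "determinant $=0$ iff the columns are dependent", here is a small numerical illustration in Python/numpy (the two matrices are arbitrary examples of mine):

```python
import numpy as np

# Dependent columns: the third column is the sum of the first two.
dependent = np.array([[1., 2., 3.],
                      [4., 5., 9.],
                      [7., 8., 15.]])
# Changing a single entry breaks the dependence.
independent = np.array([[1., 2., 3.],
                        [4., 5., 9.],
                        [7., 8., 16.]])

print(np.isclose(np.linalg.det(dependent), 0.0))    # True  (the volume collapses)
print(np.isclose(np.linalg.det(independent), 0.0))  # False (det = -3)
```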

Now, row operations have clear effects on the parallelepiped. Interchanging two rows just reverses the orientation. Multiplying a row by a non-zero scalar lengthens or shortens the parallelepiped in one direction. Adding a multiple of one row to another is a shear: one edge slides parallel to another within a two-dimensional section of the entire space, so the volume is unchanged. Intuitively, none of these operations can turn a degenerate parallelepiped into a non-degenerate one, or vice versa; thus the vanishing (or non-vanishing) of the determinant is preserved.
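For completeness, a short numerical check of these three effects in Python/numpy (the matrix and the factors $7$ and $5$ are arbitrary illustrative choices; comments use 1-based row names, numpy indices are 0-based):

```python
import numpy as np

# An arbitrary example matrix with non-zero determinant.
A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
d = np.linalg.det(A)   # 18.0

# Interchange rows 1 and 2: the orientation flips, |volume| is unchanged.
swapped = A[[1, 0, 2], :]
print(np.isclose(np.linalg.det(swapped), -d))        # True

# Multiply row 2 by 7: the volume is scaled by the same factor.
scaled = A.copy(); scaled[1] *= 7.0
print(np.isclose(np.linalg.det(scaled), 7.0 * d))    # True

# Add 5 * row 3 to row 1: a shear, the volume is unchanged.
sheared = A.copy(); sheared[0] += 5.0 * A[2]
print(np.isclose(np.linalg.det(sheared), d))         # True
```

In particular, a zero determinant stays zero and a non-zero determinant stays non-zero under each of these operations.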