Are column operations legal in matrices also?

Yes, we can, but the question is what properties of the matrix are preserved when we do so. The row operations (which correspond to multiplication on the left by an invertible matrix) preserve the row space (the linear span of the rows) and the null space. The column operations (multiplication on the right) preserve the column space (the linear span of the columns) and the null space of the transpose, but they do not preserve the row space or the null space.
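To see why, here is a quick sketch of the standard argument (my own summary, with an arbitrary invertible $E$). If $B = EA$ with $E$ invertible, then

$$B\mathbf{x} = \mathbf{0} \iff EA\mathbf{x} = \mathbf{0} \iff A\mathbf{x} = \mathbf{0},$$

so $B$ and $A$ have the same null space; and every row of $B$ is a linear combination of the rows of $A$ (and conversely, using $A = E^{-1}B$), so they have the same row space. Dually, if $C = AE$, then every column of $C$ is a linear combination of the columns of $A$ (and conversely), so the column space is preserved, while nothing forces the row space or the null space to survive.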


Let's say we have some relation, like

$\left( \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right) \left( \begin{array}{c} x \\ y \\ z\end{array}\right) = \left( \begin{array}{c} 10 \\ 11 \\ 12\end{array} \right)$

There is a lot of information packed in here. For instance, it says that $x + 2y + 3z = 10$. In fact, one can work out exactly which triples $(x, y, z)$ satisfy this relationship. But suppose we multiply the first column by 3:

$\left( \begin{array}{ccc} 3 & 2 & 3 \\ 12 & 5 & 6 \\ 21 & 8 & 9 \end{array} \right) \left( \begin{array}{c} x \\ y \\ z\end{array}\right) = \left( \begin{array}{c} 10 \\ 11 \\ 12\end{array} \right)$

Both of these cannot be correct at the same time (you can solve each system out fully if you want; in fact, I encourage it). For instance, if $x + 2y + 3z = 10$ and $3x + 2y + 3z = 10$ both held, subtracting the first from the second would force $x = 0$. That's not so good: nothing in the original system forces $x = 0$. Further, with $x$ pinned to $0$ we would be left with three equations in only the two unknowns $y$ and $z$, which in general has no solution.
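To make precise what the column operation did (a short addition in my own notation): multiplying the first column of $A$ by $3$ is the same as multiplying $A$ on the right by $D = \operatorname{diag}(3,1,1)$, and

$$(AD)\mathbf{v} = \mathbf{b} \iff A(D\mathbf{v}) = \mathbf{b},$$

so $\mathbf{v}$ solves the new system exactly when $D\mathbf{v}$ solves the old one. A solution $(x, y, z)$ of the original system therefore reappears as $(x/3,\ y,\ z)$ in the new one, not as $(x, y, z)$ itself; the column operation silently relabelled the variable $x$.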

The main idea is that matrices encode linear equations, and the way in which they encode them is governed by the rules of matrix multiplication. Row operations are safe because a row operation just combines whole equations with one another, producing equations that are consequences of the originals, so the solution set is unchanged. Column operations, on the other hand, mix coefficients that belong to different variables, so they do change the system. If we wanted to manipulate columns, we could still use matrices... we would just multiply on the right instead of the left. Again, it's an artifact of the way in which matrices multiply.
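In symbols (a sketch with an arbitrary invertible elementary matrix $E$, not tied to the example above):

$$A\mathbf{x} = \mathbf{b} \iff (EA)\mathbf{x} = E\mathbf{b}, \qquad (AE)\mathbf{y} = \mathbf{b} \iff A(E\mathbf{y}) = \mathbf{b}.$$

A row operation keeps exactly the same solutions $\mathbf{x}$, provided we also apply it to the right-hand side $\mathbf{b}$; a column operation instead solves for the relabelled unknown $\mathbf{y} = E^{-1}\mathbf{x}$.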


We can perform elementary column operations: multiplying a matrix on the right by an elementary matrix performs an "elementary column operation".
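For instance, with a generic $3 \times 3$ matrix $A$ and the elementary matrix $E$ that encodes "add twice row 1 to row 2" (just one illustrative choice):

$$E = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \qquad EA = \left( \begin{array}{ccc} a & b & c \\ d+2a & e+2b & f+2c \\ g & h & i \end{array} \right), \qquad AE = \left( \begin{array}{ccc} a+2b & b & c \\ d+2e & e & f \\ g+2h & h & i \end{array} \right),$$

where $A$ has rows $(a,b,c)$, $(d,e,f)$, $(g,h,i)$. On the left, $E$ adds twice row 1 to row 2; on the right, the very same $E$ adds twice column 2 to column 1.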

However, elementary row operations are more useful when dealing with things like systems of linear equations, or finding inverses of matrices. And anything you want to do with columns you can do with rows... in the transpose.
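One way to make "in the transpose" concrete (my phrasing): a column operation on $A$ is literally a row operation on $A^T$, since

$$(AE)^{T} = E^{T}A^{T}$$

and $E^{T}$ is again an elementary matrix, now acting on the left, i.e. on the rows of $A^T$.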

You can develop the entire theory with elementary column operations instead; you just have to set up everything "the other way around". You would look at systems of linear equations as systems of the form $$(b_1,b_2,\ldots,b_m) = (x_1,\ldots,x_m)A$$ (that is, $\mathbf{b} = \mathbf{x}A$), with row vectors; the coefficients of the $m$th equation would become the $m$th column of the coefficient matrix $A$. And you would perform elementary column operations instead of elementary row operations.
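For a small $2 \times 2$ instance of this setup (my notation):

$$(x_1, x_2)\left( \begin{array}{cc} a_{11} & a_{12} \\ a_{21} & a_{22} \end{array} \right) = (b_1, b_2) \quad\text{means}\quad a_{11}x_1 + a_{21}x_2 = b_1, \qquad a_{12}x_1 + a_{22}x_2 = b_2,$$

so the coefficients of the first equation sit in the first column of $A$, and an elementary column operation now combines whole equations, just as an elementary row operation does in the $A\mathbf{x} = \mathbf{b}$ picture.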

When you perform elementary row operations on systems of the form $A\mathbf{x}=\mathbf{b}$, your operations do not respect the columnspace, but they do respect the rowspace and the nullspace. If you perform elementary column operations on these systems, you respect the columnspace, but you do not respect the rowspace or the nullspace.
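A quick sanity check (a made-up $2 \times 2$ example, not from the discussion above): swapping the two rows of

$$A = \left( \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right) \quad\longrightarrow\quad \left( \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right)$$

leaves the rowspace ($\operatorname{span}\{(1,0)\}$) and the nullspace ($\operatorname{span}\{(0,1)^T\}$) unchanged, but moves the columnspace from $\operatorname{span}\{(1,0)^T\}$ to $\operatorname{span}\{(0,1)^T\}$.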

That said, matrices are not "really just a grouping of column vectors". They are a lot more.