Can you use row and column operations interchangeably?

What "reducing a matrix" means

One issue here is what is meant by "reducing a matrix". The only purpose I can see for reducing a matrix is to decide whether the matrix has some specific property. Whatever property you are interested in has to remain unchanged under the operations you perform, or you have not succeeded in your purpose.

If you investigate Serge Lang's book closely, you will find that he had a specific purpose for reducing the matrix (to find the rank, discussed below), which allows him to use column operations to achieve his aim.

It is important to note that for most people, the phrase "reducing a matrix" refers specifically to finding the Reduced Row Echelon Form (RREF). As the name implies, RREF is defined in terms of the rows of the matrix:

  1. The leftmost nonzero entry in any row is a 1 (called a "leading 1").
  2. Leading 1's in lower rows are further to the right than leading 1's in higher rows.
  3. All other entries in a column containing a leading 1 are zero.
  4. All rows consisting entirely of zeros are at the bottom of the matrix.

Since this form is defined in terms of rows, you must use row operations to achieve it. In fact, if you do use row operations, then a given starting matrix (which may or may not be square, by the way) reduces to precisely one RREF. If you do column operations you will not arrive at this same matrix.
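If you want to experiment with this, here is a minimal sketch using SymPy's `rref()` method on an arbitrary example matrix (not one from the question); whichever legal row operations you use by hand, you should land on this same matrix and these same pivot columns.

```python
# A minimal check of the uniqueness of the RREF, using SymPy on an
# arbitrary (non-square) example matrix.
from sympy import Matrix

A = Matrix([[1, 2, 1, 4],
            [2, 4, 0, 2],
            [3, 6, 1, 6]])

rref_A, pivot_cols = A.rref()   # returns (RREF, indices of the leading-1 columns)
print(rref_A)                   # Matrix([[1, 2, 0, 1], [0, 0, 1, 3], [0, 0, 0, 0]])
print(pivot_cols)               # (0, 2)
```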

Relationships to solutions of equations

The point others have made about finding solutions is valid. Every matrix can be used to represent the coefficients of a system of linear equations, whether you intend it to or not. Many of the important properties of matrices are strongly related to this system of linear equations, so it is a good idea to have methods that preserve these properties.

For example, the set of solutions to the matrix equation $A\mathbf{x} = \mathbf{0}$ is the same as the set of solutions to the corresponding (homogeneous) system of linear equations, and the nonzero rows of the RREF form a basis for the space perpendicular to this solution space. Neither of these properties is preserved by column operations, but both are preserved by row operations.

Most importantly, just to reiterate: the solution set of your homogeneous linear equations is definitely not preserved by column operations, and this is an important consideration whether or not you want your matrix to represent linear equations.
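Here is a small sketch of that point with SymPy and an arbitrary example matrix: a row operation (applied as a left multiplication; see the elementary matrices below) leaves the solution set of $A\mathbf{x} = \mathbf{0}$ alone, while a column operation changes it.

```python
# Sketch: a row operation preserves the null space (the solutions of Ax = 0),
# while a column operation generally does not. The matrix A is arbitrary.
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [2, 4, 6]])           # rank 1, so a 2-dimensional solution space

E = eye(2); E[1, 0] = -2          # row op (left multiply):     new R2 = R2 - 2 R1
F = eye(3); F[0, 1] = 1           # column op (right multiply): new C2 = C2 + C1

print(A.nullspace() == (E * A).nullspace())   # True  -- same solutions
print(A.nullspace() == (A * F).nullspace())   # False -- solutions have changed
```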

Note that there are things that column operations do preserve, such as the range of the linear transformation defined by your matrix. However, a mixture of row and column operations preserves neither this nor most of the other properties you care about.
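The same kind of quick check works here (same arbitrary example as above): the basis SymPy reports for the column space, i.e. the range, is untouched by a column operation but changed by a row operation.

```python
# Sketch: a column operation preserves the column space (the range), while a
# row operation generally does not.
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

E = eye(2); E[1, 0] = -2          # row op:    new R2 = R2 - 2 R1
F = eye(3); F[0, 1] = 1           # column op: new C2 = C2 + C1

# columnspace() returns a basis; in this example the bases coincide exactly
# precisely when the range is unchanged.
print(A.columnspace() == (A * F).columnspace())   # True  -- range unchanged
print(A.columnspace() == (E * A).columnspace())   # False -- range has changed
```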

The inverse of a square matrix

One property that a mixture of row and column operations does preserve is invertibility. You can see this from the idea of elementary matrices.

Doing elementary row operations corresponds to multiplying on the left by an elementary matrix. For example, the row operation of "new R2 = R2 - 3R1" is produced on a 3 by n matrix when you multiply on the left by $\begin{pmatrix} 1 & 0 & 0 \\ -3 & 1 & 0\\ 0 & 0 & 1 \end{pmatrix}$. Column operations, on the other hand, are produced when you multiply by a matrix on the right hand side. The column operation of "new C2 = C2 - 3C1" is produced on an m by 3 matrix when you multiply on the right by $\begin{pmatrix} 1 & -3 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{pmatrix}$.
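If it helps, here is a sketch of exactly those two elementary matrices in action on an arbitrary $3 \times 3$ example, using SymPy.

```python
# Sketch: left multiplication performs the row operation, right multiplication
# performs the column operation. The matrix A is an arbitrary example.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [4, 5, 6],
            [7, 8, 9]])

E_row = Matrix([[ 1, 0, 0],
                [-3, 1, 0],
                [ 0, 0, 1]])      # "new R2 = R2 - 3 R1"
E_col = Matrix([[1, -3, 0],
                [0,  1, 0],
                [0,  0, 1]])      # "new C2 = C2 - 3 C1"

print(E_row * A)   # row 2 becomes [1, -1, -3]
print(A * E_col)   # column 2 becomes [-1, -7, -13]
```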

The process of finding the inverse of a square matrix by augmenting an identity and doing matching row operations on both works because of these elementary matrices. If you perform row operations on $A$ to get $I$ and the elementary matrices corresponding to these operations are $E_1$, ..., $E_k$, then we have $E_k ... E_1 A = I$ and so the inverse of $A$ must be $E_k ... E_1$. But this is equal to $E_k ... E_1 I$, which is what you get when you do all those row operations on the identity!
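Here is a minimal sketch of the process just described, assuming SymPy and an arbitrary invertible $2 \times 2$ matrix: row-reduce the augmented matrix $[A \mid I]$ and read off the inverse from the right-hand half.

```python
# Sketch of the augment-with-the-identity method using rref (row ops only).
from sympy import Matrix, eye

A = Matrix([[2, 1],
            [5, 3]])              # arbitrary invertible example

aug = Matrix.hstack(A, eye(2))    # form [A | I]
rref_aug, _ = aug.rref()          # apply row operations to both halves at once
A_inv = rref_aug[:, 2:]           # the right half is now A^{-1}

print(A_inv)                      # Matrix([[3, -1], [-5, 2]])
print(A * A_inv == eye(2))        # True
```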

This would work perfectly well if you did only column operations; however, a mixture of the two is a bit harder to understand. Suppose you did row operations with matrices $E_1$, ..., $E_k$ and column operations with matrices $F_1$, ..., $F_h$, and you produced the identity as a result. Then $$ \begin{align} E_k \dots E_1 A F_1 \dots F_h &= I\\ A &= (E_1)^{-1} \dots (E_k)^{-1} I (F_h)^{-1} \dots (F_1)^{-1}\\ A^{-1} &= F_1 \dots F_h I E_k \dots E_1 \end{align} $$

This is not the result of doing the matching column operations and row operations on $I$ because the matrices are on the wrong side!

So if you can row-and-or-column reduce a matrix to the identity, then this proves that the matrix is invertible, but it is tricky to tell what the inverse actually is unless you do only row or only column operations.
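To see the "wrong side" issue concretely, here is a sketch with one row operation $E$ and one column operation $F$, built artificially so that $EAF = I$; the inverse is then $FE$, not the product you would get by naively applying the matching operations to $I$ in the order you performed them.

```python
# Sketch: when E A F = I (row op on the left, column op on the right),
# the inverse of A is F E -- the order and side matter.
from sympy import Matrix, eye

E = Matrix([[ 1, 0],
            [-2, 1]])             # row op:    new R2 = R2 - 2 R1
F = Matrix([[1, -3],
            [0,  1]])             # column op: new C2 = C2 - 3 C1

A = E.inv() * F.inv()             # constructed so that E * A * F = I
print(E * A * F == eye(2))        # True: the mixed reduction reaches I
print(F * E == A.inv())           # True: A^{-1} = F E (column-op matrices first)
print(E * F == A.inv())           # False: same matrices, wrong arrangement
```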

Finally, note that using row and column operations together allows you to calculate the determinant of a square matrix easily, so there is a certain advantage to doing this. In particular, a row/column operation of the type "new Ri = Ri + k Rj" or "new Ci = Ci + k Cj" does not change the determinant, so if you restrict yourself to those operations, you can reach a form where the determinant is obvious more quickly than if you use only rows or only columns. However, it is worth stressing that this relates to my original comment about the purpose you are trying to achieve: this is only really an advantage when you are computing determinants.
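As a quick illustration (arbitrary $3 \times 3$ example, checked with SymPy), operations of that type applied on either side leave the determinant alone:

```python
# Sketch: "add a multiple of another row/column" operations do not change
# the determinant, whether applied as row ops (left) or column ops (right).
from sympy import Matrix, eye

A = Matrix([[2, 1, 3],
            [4, 1, 7],
            [2, 5, 9]])           # arbitrary example

E = eye(3); E[1, 0] = -2          # row op:    new R2 = R2 - 2 R1
F = eye(3); F[0, 2] = -1          # column op: new C3 = C3 - C1

print(A.det(), (E * A).det(), (A * F).det())   # all three agree (-20 here)
```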

The rank of a matrix

This row-and-or-column reduction also preserves the rank of your matrix. If you carry out this kind of reduction on any matrix, you are guaranteed eventually to produce a matrix with some 1's down its main diagonal (though not necessarily all the way down) and zeros everywhere else. Something like these: $$ \begin{pmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0& 0\end{pmatrix} \quad \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0& 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix} $$

The number of 1's on that diagonal is the rank of your matrix. That is, it is the dimension of the column space and of the row space, the dimension of the range of the linear transformation defined by your matrix, and $n$ (the number of columns) minus the dimension of the solution space of your homogeneous system. So the two matrices above have rank 3 and 2 respectively.
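Here is a sketch of that reduction for an arbitrary rank-2 example, done with SymPy by taking the RREF first (row operations) and then clearing the remaining non-pivot column with two column operations:

```python
# Sketch: mixed reduction of an arbitrary rank-2 matrix to the
# "1's down the diagonal, zeros elsewhere" form.
from sympy import Matrix, eye

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])                 # rank 2

R, pivots = A.rref()                    # row ops give [[1, 0, -1], [0, 1, 2], [0, 0, 0]]

F1 = eye(3); F1[0, 2] = 1               # column op: new C3 = C3 + C1
F2 = eye(3); F2[1, 2] = -2              # column op: new C3 = C3 - 2 C2
D = R * F1 * F2

print(D)          # Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 0]])
print(A.rank())   # 2 -- the number of 1's on that diagonal
```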

However, there is nothing in the final reduced form that tells you what the actual solutions to the equations are or what the bases for these spaces are; they can all change in ways that are not easy to follow when you do both row and column operations. If you did just column operations, the leading-1 columns would form a basis for the range of the linear transformation, but after both row and column operations these columns are all standard basis vectors and will not necessarily be correct. If you did just row operations, the non-leading-1 columns would give you the first several coordinates of the vectors in a basis for the null space of the matrix, but after both row and column operations these columns are all zero.

Summary

  1. Most people mean Reduced Row Echelon Form when they say "reduced matrix", which is defined by doing only row operations.
  2. Row operations preserve many useful properties of matrices and tell you certain information about matrices.
  3. Column operations preserve some useful properties of matrices and tell you certain information about matrices.
  4. Mixtures of row and column operations preserve only a small number of properties. In particular, a mixture will preserve the rank and the invertibility of a matrix (but will not provide bases for associated subspaces or the actual inverse itself).

Say you have a system of equations:

$$\begin{cases} a_{11} x_1+\ldots +a_{1m}x_m =b_1\\ \vdots \\ a_{n1}x_1+\ldots + a_{nm}x_m = b_n \end{cases}$$

Then the corresponding augmented matrix is

$$\begin{pmatrix} a_{11} & \ldots & a_{1m} & | & b_1 \\ \vdots & \ddots & \vdots & | & \vdots \\ a_{n1} & \ldots & a_{nm} & | & b_n\end{pmatrix}$$

Row operations correspond to adding, scaling, and reordering equations, which, by ordinary algebra, does not change the overall system. Column operations, however, DO change the system: if we scale, say, the last column with all the $b$'s, we change what we're trying to solve for.

In particular: say your first set of equations is just:

$$\begin{cases} x + y = 1 \\ x-y = 0\end{cases}$$

then the matrix is

$$\begin{pmatrix} 1 & 1 & | & 1 \\ 1 & -1 & | & 0\end{pmatrix}$$

It's easy to see that the solution is $x=y=1/2$. However, if we are allowed to scale the columns as well, we get $x=y=1$ just by scaling the last column by a factor of $2$; and if we could add columns, we would get the matrix:

$$\begin{pmatrix} 1 & 2 & | & 1 \\ 1 & -1 & | & 0\end{pmatrix}$$

by adding the last column to the second one. This gives us the system:

$$\begin{cases} x + 2y = 1 \\ x-y=0\end{cases}$$

where we see that $x=y=1/3$, a different solution.
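For what it's worth, you can confirm both versions of the example numerically with NumPy's solver:

```python
# Quick numerical check of the example above.
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, -1.0]])
b = np.array([1.0, 0.0])
print(np.linalg.solve(A, b))            # [0.5 0.5]

A2 = np.array([[1.0, 2.0],
               [1.0, -1.0]])            # second column replaced by C2 + (the b column)
print(np.linalg.solve(A2, b))           # approximately [0.3333 0.3333]
```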


When you perform column operations rather than row operations, you change the solution rather than the right hand side. For brevity, let's say we are solving $Ax=b$; denote the new solution after a certain column operation by $x'$.

  • If you multiply column 1 by 2, $x'_1 = x_1/2$.
  • If you interchange columns 1 and 2, $x'_1=x_2$, $x'_2=x_1$.
  • If you add column 1 to column 2, $x'_1 = x_1 - x_2$. (Check this, I only tried it on a $2 \times 2$ example; a quick numerical check is sketched below.)
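The sketch below tests all three rules with NumPy; the invertible $3 \times 3$ matrix and right-hand side are made up purely for the check.

```python
# Numerical check of the three column-operation rules above
# (arbitrary invertible 3 x 3 example).
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 0.0]])
b = np.array([4.0, 5.0, 6.0])
x = np.linalg.solve(A, b)

A1 = A.copy(); A1[:, 0] *= 2            # multiply column 1 by 2
A2 = A[:, [1, 0, 2]]                    # interchange columns 1 and 2
A3 = A.copy(); A3[:, 1] += A3[:, 0]     # add column 1 to column 2

print(np.isclose(np.linalg.solve(A1, b)[0], x[0] / 2))       # True: x'_1 = x_1 / 2
print(np.allclose(np.linalg.solve(A2, b)[:2], x[[1, 0]]))    # True: x'_1 = x_2, x'_2 = x_1
print(np.isclose(np.linalg.solve(A3, b)[0], x[0] - x[1]))    # True: x'_1 = x_1 - x_2
```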

These problems aside, yes, you can use both column operations and row operations in a Gaussian elimination procedure. There is fairly little practical use for doing so, however. About the only use is performing pivoting using both row and column exchanges (so-called "total pivoting", rather than "partial pivoting"), but one finds that in practice row exchanges alone are sufficient.

By the way, total pivoting is quite slow, because it requires scanning the entire remaining submatrix, rather than just one column, when finding each pivot. This increases the running time by a constant factor, whereas partial pivoting adds only quadratic work to a cubic procedure.
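To make the cost difference concrete, here is a sketch (my own illustration, not code from any particular library) of just the pivot-search step at elimination step $k$: partial pivoting scans one column of the remaining rows, while total pivoting scans the whole trailing submatrix.

```python
# Sketch of the pivot search at elimination step k.
import numpy as np

def partial_pivot(A, k):
    """Row index of the largest-magnitude entry in column k, rows k..n-1 (O(n) scan)."""
    return k + int(np.argmax(np.abs(A[k:, k])))

def total_pivot(A, k):
    """(row, col) of the largest-magnitude entry in the trailing submatrix (O(n^2) scan)."""
    sub = np.abs(A[k:, k:])
    i, j = np.unravel_index(np.argmax(sub), sub.shape)
    return k + int(i), k + int(j)

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 9.0],
              [7.0, 8.0, 6.0]])
print(partial_pivot(A, 0))    # 2      (the 7 in column 0)
print(total_pivot(A, 0))      # (1, 2) (the 9 anywhere in the matrix)
```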