How do I prove that $\det A= \det A^t$?

While your proof is basically correct, I would not consider it my favourite proof of this fact, for the following reasons:

  • It uses the multiplicativity of determinants, which is a much less elementary property than invariance under transposition.
  • It works only for matrices over a field, while the definition of the determinant, and the invariance under transposition, require no more than a commutative ring. There is a general principle that identities of this kind, if they hold over fields, must in fact be valid over commutative rings as well, but understanding that kind of argument requires an additional level of mathematical maturity; it is in most cases (like this one) much easier to just give a proof that is itself valid in the more general setting.
  • It needs to single out the non-invertible case. If you allowed a triangular matrix (for which the result is obvious; see the illustration after this list) at the end of the product of elementary operations, you would not need to single out this case (and as a bonus you would in most cases need far fewer elementary operations). See this answer to a similar question where the use of row operations was imposed (but the answer does not say what kind of matrix one is reducing to).
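
To spell out why the triangular case is obvious (a minimal illustration, with a generic $3\times3$ upper triangular matrix): a triangular matrix and its transpose have the same diagonal, and the determinant of either is the product of the diagonal entries, so
$$ \det\begin{pmatrix}a&b&c\\0&d&e\\0&0&f\end{pmatrix} = adf = \det\begin{pmatrix}a&0&0\\b&d&0\\c&e&f\end{pmatrix}. $$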

How I would prove this depends on which definition of the determinant has been given. If it is defined using the Leibniz formula (which in my opinion is the right definition to give, although a different motivation should first be given), then the proof just amounts to showing that every permutation has the same sign as its inverse, which is quite elementary (and can, by the way, be done in a way similar to your proof, but using a decomposition into transpositions).
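
To make that route concrete (a short derivation, assuming the Leibniz formula $\det A=\sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{i\,\sigma(i)}$ as the definition):
$$ \det A^t \;=\; \sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{i=1}^n a_{\sigma(i)\,i} \;=\; \sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\prod_{j=1}^n a_{j\,\sigma^{-1}(j)} \;=\; \sum_{\tau\in S_n}\operatorname{sgn}(\tau)\prod_{j=1}^n a_{j\,\tau(j)} \;=\; \det A, $$
where the second equality reindexes the product by $j=\sigma(i)$, and the third substitutes $\tau=\sigma^{-1}$ (a bijection of $S_n$) and uses the elementary fact just mentioned, namely $\operatorname{sgn}(\sigma^{-1})=\operatorname{sgn}(\sigma)$.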

If on the other hand one has defined the determinant as the unique $n$-linear alternating function on an $n$-dimensional vector space taking the value $1$ on a given ordered basis (which still requires using Leibniz or a substitute for proving existence), then invariance under transposition is harder to see. Here one needs to pass to the dual vector space, but it seems necessary to use the fact that if $v_1,\ldots,v_n$ are vectors and $\alpha_1,\ldots,\alpha_n$ are linear forms, then the determinant $$ \begin{vmatrix}\alpha_1(v_1)&\ldots&\alpha_1(v_n)\\ \vdots&\ddots&\vdots\\\alpha_n(v_1)&\ldots&\alpha_n(v_n)\end{vmatrix} $$ is not only $n$-linear and alternating in $v_1,\ldots,v_n$ for fixed $\alpha_1,\ldots,\alpha_n$ (as is clear from the definition) but also $n$-linear and alternating in $\alpha_1,\ldots,\alpha_n$ (for fixed $v_1,\ldots,v_n$); I can see no easy argument for this other than using the Leibniz formula. Once this is established, it is easy to show that a linear operator $f$ induces the same scalar factor on $n$-linear forms when applied to $n$-tuples of vectors as it does when applied (on the right) to $n$-tuples of linear forms; in other words, its determinant is the same as that of its transpose (in essence, both cases correspond to inserting an $f$ between each $\alpha_i$ and $v_j$ in the above determinant).
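
To spell out that last step (a sketch; the shorthand $D(\alpha_1,\ldots,\alpha_n;v_1,\ldots,v_n)$ for the determinant displayed above is introduced here): applying $f$ to the vectors scales $D$ by $\det f$, since $D$ is $n$-linear and alternating in the $v_j$, while applying $f^t$ to the forms scales it by $\det f^t$, by the harder fact just discussed. But $f^t(\alpha_i)=\alpha_i\circ f$, so both operations produce the same matrix of entries $\alpha_i(f(v_j))$, whence
$$ (\det f)\,D(\alpha_1,\ldots,\alpha_n;v_1,\ldots,v_n) = D\big(\alpha_1,\ldots,\alpha_n;f(v_1),\ldots,f(v_n)\big) = D\big(f^t(\alpha_1),\ldots,f^t(\alpha_n);v_1,\ldots,v_n\big) = (\det f^t)\,D(\alpha_1,\ldots,\alpha_n;v_1,\ldots,v_n). $$
Taking $v_1,\ldots,v_n$ to be a basis and $\alpha_1,\ldots,\alpha_n$ its dual basis makes $D=1$, and $\det f=\det f^t$ follows.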

This still does not provide a context in which using row operations to prove this fact would be a natural choice. It is not clear to me from the question whether Artin actually suggests this, and if so, why. The only reason I can think of for using such an argument is when somebody commands that thou shalt not use the Leibniz formula.


I believe your proof is correct.

Note that the best way of proving that $\det(A)=\det(A^t)$ depends very much on the definition of the determinant you are using. My personal favorite way of proving it is by giving a definition of the determinant such that $\det(A)=\det(A^t)$ is obviously true. One possible definition of the determinant of an $(n\times n)$-matrix is $$ \sum_{\text{pick $n$ different numbers in the} \atop \text{matrix, no two in the same row or column}} \text{sign}(\text{your choice of numbers})\cdot(\text{product of the numbers}) .$$ In this formula the tiny text under the summation symbol reads "pick $n$ different numbers in the matrix, no two in the same row or column". The sign of such a choice of numbers is calculated by drawing a line segment between each pair of chosen numbers and then calculating $$ (-1)^{\text{number of line segments with positive slope}} .$$ If you follow this definition, it is clear that the determinant does not change upon transposing the matrix (the only thing you have to observe is that a line segment with positive slope is transposed into one with positive slope, and a line segment with negative slope is transposed into one with negative slope).
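
As a small worked example (a generic $2\times2$ matrix): there are exactly two admissible choices, $\{a,d\}$ and $\{b,c\}$. The segment joining $a$ and $d$ has negative slope, so that choice gets sign $(-1)^0=+1$; the segment joining $b$ and $c$ has positive slope, so that choice gets sign $(-1)^1=-1$. The definition thus gives
$$ \det\begin{pmatrix}a&b\\c&d\end{pmatrix} \;=\; ad-bc, $$
and transposing merely swaps the positions of $b$ and $c$, changing neither product nor the slope type of either segment.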

Of course, using this definition has disadvantages when it comes to proving other properties of the determinant (well, I suppose so).


Your proof seems fine. I know that Artin tells you to prove this with row operations, which you have done, but it could be instructive to see more reasons why this result is true. Try to fill in the details of this proof:

Induct on $n$, where $A$ is an $n\times n$ matrix. In calculating $\det A$, expand along the first column, and calculate $\det A^T$ by expanding along the first row.
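
A sketch of how that induction goes (with $M_{ij}(A)$ as ad hoc notation for the submatrix of $A$ obtained by deleting row $i$ and column $j$, and assuming the cofactor expansion formulas are available): the two expansions read
$$ \det A=\sum_{i=1}^n(-1)^{i+1}a_{i1}\det M_{i1}(A), \qquad \det A^T=\sum_{j=1}^n(-1)^{1+j}(A^T)_{1j}\det M_{1j}(A^T). $$
Since $(A^T)_{1j}=a_{j1}$ and $M_{1j}(A^T)=M_{j1}(A)^T$, the inductive hypothesis applied to the $(n-1)\times(n-1)$ matrices $M_{j1}(A)$ makes the two sums agree term by term; the base case $n=1$ is immediate.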