Alternative definition of the determinant of a square matrix and its advantages?

Most properties of determinants follow more or less immediately when you use the following definition.

If $f:V\to V$ is an endomorphism of a vector space of finite dimension $n$ over a field $k$, and $\Lambda^nV$ is the $n$th exterior power of $V$, then there is an induced morphism $\Lambda^n(f):\Lambda^nV\to\Lambda^nV$. Now $\Lambda^nV$ is a one-dimensional vector space, so there is a canonical isomorphism of $k$-algebras $\operatorname{End}(\Lambda^nV)\cong k$. The image of $\Lambda^n(f)\in\operatorname{End}(\Lambda^nV)$ in $k$ under this isomorphism is the determinant of $f$.
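Spelled out in the smallest nontrivial case (a sketch, taking $V=k^2$ with standard basis $e_1,e_2$ and $f(e_1)=ae_1+ce_2$, $f(e_2)=be_1+de_2$), the definition reproduces the familiar formula:

```latex
\begin{aligned}
\Lambda^2(f)(e_1\wedge e_2) &= f(e_1)\wedge f(e_2)\\
&= (ae_1+ce_2)\wedge(be_1+de_2)\\
&= ad\,(e_1\wedge e_2) + bc\,(e_2\wedge e_1)\\
&= (ad-bc)\,(e_1\wedge e_2),
\end{aligned}
```

using $e_i\wedge e_i=0$ and $e_2\wedge e_1=-e_1\wedge e_2$; hence $\det f=ad-bc$.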

If one wants to define the determinant of a matrix $A\in M_n(k)$, then one considers the corresponding map $k^n\to k^n$ determined by $A$, and proceeds as above.

Of course, in this approach one has to prove the properties of exterior powers and maps induced on them—but this is neither conceptually nor practically complicated.
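For instance, multiplicativity of the determinant, often a fiddly computation with the Leibniz formula, is immediate from functoriality of $\Lambda^n$:

```latex
\Lambda^n(f\circ g)=\Lambda^n(f)\circ\Lambda^n(g)
\quad\Longrightarrow\quad
\det(f\circ g)=\det(f)\,\det(g),
```

since under $\operatorname{End}(\Lambda^nV)\cong k$, composition of endomorphisms of the one-dimensional space $\Lambda^nV$ becomes multiplication in $k$.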


You can define the determinant to be the signed sum of weights of non-intersecting paths in a graph between "source" vertices $a_1,\ldots,a_n$ and "sink" vertices $b_1,\ldots,b_n$. See Qiaochu's blog post. The $(i,j)$ entry of the matrix is the sum of the weights of all paths from $a_i$ to $b_j$.
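For monotone lattice paths with unit right/up steps (and all weights equal to $1$) this is the Lindström–Gessel–Viennot lemma, and it can be checked by brute force on a small instance. The sources, sinks, and grid below are my own illustrative choices, not from the post:

```python
from itertools import combinations

def paths(src, dst):
    """All monotone lattice paths (unit right/up steps) from src to dst,
    each returned as the list of lattice points it visits."""
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    if dx < 0 or dy < 0:
        return []
    result = []
    n = dx + dy
    for ups in combinations(range(n), dy):  # choose which steps go up
        x, y = src
        verts = [(x, y)]
        for i in range(n):
            if i in ups:
                y += 1
            else:
                x += 1
            verts.append((x, y))
        result.append(verts)
    return result

a = [(0, 1), (0, 0)]  # sources a_1, a_2
b = [(3, 3), (3, 2)]  # sinks b_1, b_2

# Path-count matrix: entry (i, j) = number of paths a_i -> b_j
m = [[len(paths(a[i], b[j])) for j in range(2)] for i in range(2)]
det = m[0][0] * m[1][1] - m[0][1] * m[1][0]

# Brute-force count of vertex-disjoint pairs (p1: a_1->b_1, p2: a_2->b_2)
disjoint = sum(1 for p1 in paths(a[0], b[0])
                 for p2 in paths(a[1], b[1])
                 if not set(p1) & set(p2))

print(det, disjoint)
```

In this configuration the only permutation admitting a non-intersecting path system is the identity (a path $a_2\to b_1$ must cross, and hence share a vertex with, a path $a_1\to b_2$), so the determinant of the path-count matrix equals the number of vertex-disjoint pairs.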

This definition is nice because it's entirely combinatorial, and it makes clear why determinants should be relevant to discrete mathematics and to physics. And it genuinely provides new insights relative to the "algebraic" definitions, I think. It also provides a nice avenue to prove (and to properly understand) various standard results such as the Cauchy–Binet formula, the Plücker relations, Laplace's expansion, and Dodgson's condensation formula. See this paper by Fulmek for details.


I would classify your first one as merely a method of calculating the determinant, and I would call the second (Strang's) merely a property, albeit a useful one, of determinants. But I think there is one sort of 'lower-level definition' that is better than the others: the determinant is the unique function on square matrices satisfying certain properties (a note on this).

In particular, if a function is linear in each row of the matrix, equals $0$ whenever two rows are equal, and equals $1$ on the identity matrix, then it is the determinant function.
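These three axioms are easy to test numerically. A minimal sketch (my own code, not from the note linked above), using exact rational arithmetic and a cofactor-expansion `det`, checks them on a random $3\times 3$ matrix:

```python
from fractions import Fraction
import random

def det(m):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

random.seed(0)
n = 3
A = [[Fraction(random.randint(-5, 5)) for _ in range(n)] for _ in range(n)]
r = [Fraction(random.randint(-5, 5)) for _ in range(n)]
c = Fraction(3)

# 1. Linearity in each row (checked here in row 0):
#    det(A with row 0 replaced by A[0] + c*r) == det(A) + c * det(A with row 0 = r)
B = [list(r)] + [row[:] for row in A[1:]]
S = [[A[0][j] + c * r[j] for j in range(n)]] + [row[:] for row in A[1:]]
assert det(S) == det(A) + c * det(B)

# 2. Two equal rows give 0
E = [A[0][:], A[0][:], A[2][:]]
assert det(E) == 0

# 3. The identity matrix gives 1
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
assert det(I) == 1
```

The uniqueness direction (that these axioms pin down a single function) is the substance of the note; the check above only shows the cofactor-expansion determinant satisfies them.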

NOTE - Mariano has just posted an even better answer, but it's more "basic" in the It Takes Much More Math way rather than the elementary way, so I keep my answer as is.