The Importance of Minors
Several determinantal identities involve the minors of a matrix.
For a first introduction, see Chapter 6 (the chapter on determinants) in my Notes on the combinatorial fundamentals of algebra, specifically Sections 6.12 (Laplace expansion), 6.15 (the adjugate matrix $\operatorname{adj}A$, whose entries are more or less minors of $A$), 6.19 (Cramer's rule, which involves minors of a rectangular matrix), 6.20 (the Desnanot-Jacobi identity, often ascribed to Lewis Carroll, who made it into a technique for computing determinants), 6.21 (the Plücker relation, in one of its forms), 6.22 (Laplace expansion in several rows/columns), 6.23 (the formula for $\det\left(A+B\right)$ as an alternating sum of minors of $A$ times complementary minors of $B$) and 6.25 (which includes the Jacobi complementary minor theorem). (Numbering of sections may change.)
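For instance, the Desnanot-Jacobi identity reads as follows (in my notation here, where $A^{j,\ldots}_{i,\ldots}$ denotes $A$ with rows $i,\ldots$ and columns $j,\ldots$ removed): for any $n\times n$ matrix $A$ with $n \geq 2$,
$$\det A \cdot \det A^{1,n}_{1,n} = \det A^{1}_{1} \cdot \det A^{n}_{n} - \det A^{n}_{1} \cdot \det A^{1}_{n}.$$
This is the engine behind Dodgson condensation: it computes an $n\times n$ determinant from $(n-1)\times(n-1)$ and $(n-2)\times(n-2)$ ones.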
My notes just scratch the surface; many deeper determinantal identities have been known since the 1800s. A particularly significant one is Sylvester's identity, which involves a determinant whose entries are themselves minors of a matrix. See, for example, Anna Karapiperi, Michela Redivo-Zaglia, Maria Rosaria Russo, Generalizations of Sylvester's determinantal identity, arXiv:1503.00519v1, and also Adam Berliner and Richard A. Brualdi, A combinatorial proof of the Dodgson/Muir determinantal identity.
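Concretely, one common form of Sylvester's identity is the following (notation mine): let $A$ be an $n\times n$ matrix, fix $0 \leq k < n$, and let $A_0$ be the leading $k\times k$ submatrix of $A$. For $k < i, j \leq n$, let $b_{i,j}$ be the $(k+1)\times(k+1)$ minor of $A$ obtained by bordering $A_0$ with the relevant entries of row $i$ and column $j$. Then
$$\det\left(b_{i,j}\right)_{i,j=k+1}^{n} = \left(\det A_0\right)^{n-k-1} \det A.$$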
Richard Swan's expository paper On the straightening law for minors of a matrix (arXiv:1605.06696) gives another nice identity between minors of a matrix (Theorem 2.6) and uses it to prove the so-called straightening law for letterplace algebras (these are coordinate rings of matrix spaces, i.e., polynomial rings in $mn$ indeterminates $x_{i,j}$ for $i \leq m$ and $j \leq n$). This straightening law is one of the pillars of characteristic-free invariant theory (i.e., invariant theory of classical groups over arbitrary commutative base rings), as exposed, e.g., in Chapter 13 of Claudio Procesi's Lie Groups.
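To get a feel for the straightening law, here is its smallest instance (a standard example, not specific to Swan's paper): for a generic $2\times 4$ matrix, write $[ab]$ for its $2\times 2$ minor using columns $a$ and $b$. The product $[14][23]$ is non-standard (the bitableau with rows $1,4$ and $2,3$ has the decreasing column $4,3$), and the three-term Plücker relation straightens it into a combination of standard products:
$$[14][23] = [13][24] - [12][34].$$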
Various authors have tried to find "the most general determinantal identity"; the answer, of course, depends heavily on how one formalizes the question. Shreeram S. Abhyankar's Enumerative Combinatorics of Young tableaux (Dekker, 1988) is, I believe, one attempt at such an identity. For what I think is meant to be an expository introduction, see Sudhir R. Ghorpade, Abhyankar's work on Young tableaux; I admit I have read neither Abhyankar's book nor this introduction.
The irreducible representations of a symmetric group $S_n$ (over, say, $\mathbb{C}$) are the so-called Specht modules. Nowadays, they are usually defined using Young tableaux, but when they were first defined by Specht in 1935, they were constructed as spans of products of certain minors of a generic matrix (in a letterplace algebra, if you wish). See Remark 2.9 in Mark Wildon, Representation theory of the symmetric group. In a sense, this is not surprising: the column antisymmetrizer in the definition of a Young symmetrizer corresponds to the alternating sum over permutations in the definition of a determinant. This allows for translating various results from the language of Young symmetrizers into the language of identities between matrix minors and back. (For example, the Garnir relations in the former language correspond to the Plücker relations in the latter.) I think this is an aspect of Schur-Weyl duality.
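To make the minors concrete (my notation; conventions differ between sources): to a tableau $T$ of shape $\lambda \vdash n$ one assigns the product, over all columns $\{a_1 < \cdots < a_r\}$ of $T$, of the minors $\det\left(x_{i,a_j}\right)_{1\leq i,j\leq r}$ of a generic matrix $\left(x_{i,j}\right)$. For $\lambda = (2,1)$ and the tableau with first row $1,2$ and second row $3$, this product is $\left(x_{1,1}x_{2,3} - x_{1,3}x_{2,1}\right) x_{1,2}$; the span of these products, over all tableaux of shape $\lambda$, realizes the Specht module $S^\lambda$.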
An interesting application that has had a great deal of historical importance in mathematical economics:
Theorem (Gale-Nikaido): If $F \colon \mathbb{R}^n \to \mathbb{R}^n$ is differentiable and all the principal minors of its Jacobian are positive at every point, then $F$ is injective.
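Here is a small sketch (using SymPy; the map $F$ below is my own toy example, not from the cited source) that forms the Jacobian symbolically and lists all its principal minors, so that the Gale-Nikaido hypothesis can be checked:

```python
import itertools
import sympy as sp

x, y = sp.symbols('x y', real=True)

# Toy example: F(x, y) = (x + sin(y)/2, y + sin(x)/2).
F = sp.Matrix([x + sp.sin(y) / 2,
               y + sp.sin(x) / 2])
J = F.jacobian([x, y])  # the Jacobian matrix of F

def principal_minors(M):
    """Yield every principal minor det(M[S, S]), S a nonempty index set."""
    n = M.rows
    for r in range(1, n + 1):
        for S in itertools.combinations(range(n), r):
            yield M[list(S), list(S)].det()

# Here the 1x1 principal minors are both 1, and the 2x2 minor is
# 1 - cos(x)*cos(y)/4 >= 3/4 > 0, so F is injective by Gale-Nikaido.
for m in principal_minors(J):
    print(sp.simplify(m))
```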
The following come to mind:
- There are the applications listed on Wikipedia, for starters.
- You can study the so-called cofactor matrix of a square matrix, which gives you a way to express the inverse of an invertible matrix using only the determinant and the cofactors (i.e., the signed $(n-1)\times(n-1)$ minors of an $n\times n$ matrix); see the formulas after this list.
- Sylvester's criterion checks whether a Hermitian matrix is positive definite by testing the positivity of its leading principal minors (for positive semidefiniteness, one instead tests all principal minors for nonnegativity).
- The coefficient of $t^{n-r}$ in the characteristic polynomial $\det\left(tI_n - A\right)$ of an $n\times n$ matrix $A$ is, up to sign, the sum of all $r\times r$ principal minors of $A$ (see the formulas after this list).
- You can use the $m\times m$ minors of a generic $m\times n$ matrix as coordinates (the so-called Plücker coordinates) for the Grassmannian of $k^n$: the space of all $m$-dimensional linear subspaces of $k^n$, which only really makes sense for $m\le n$. It is a very interesting variety.
- If you start with an endomorphism of $k^n$, then it induces a map on the exterior power $\bigwedge^r k^n$ for every $r$. The matrix of this induced linear operator consists of the $r\times r$ minors of the matrix of the original endomorphism; see this great answer to a related question, and the sketch after this list.
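For reference, here are the standard formulas behind the cofactor and characteristic-polynomial bullets above (stated in my notation): for an invertible $n\times n$ matrix $A$,
$$A^{-1} = \frac{1}{\det A}\operatorname{adj}A, \qquad \left(\operatorname{adj}A\right)_{i,j} = (-1)^{i+j} \det\left(A \text{ with row } j \text{ and column } i \text{ deleted}\right),$$
and for any $n\times n$ matrix $A$,
$$\det\left(tI_n - A\right) = \sum_{r=0}^{n} (-1)^r \left(\sum_{\substack{S \subseteq \{1,\ldots,n\} \\ |S| = r}} \det A_{S,S}\right) t^{n-r},$$
where $A_{S,S}$ is the principal submatrix of $A$ with rows and columns indexed by $S$.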
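And a short sketch (using NumPy; the helper name `compound` is mine) for the last two bullets: the matrix of all $r\times r$ minors, often called the $r$-th compound matrix, simultaneously gives the Plücker coordinates of a row space and the matrix of the induced map on $\bigwedge^r k^n$; multiplicativity of compounds is exactly the Cauchy-Binet formula.

```python
import itertools
import numpy as np

def compound(A, r):
    """r-th compound matrix of A: its entries are the r x r minors of A,
    with rows/columns indexed by r-element subsets in lexicographic order."""
    m, n = A.shape
    rows = list(itertools.combinations(range(m), r))
    cols = list(itertools.combinations(range(n), r))
    return np.array([[np.linalg.det(A[np.ix_(R, C)]) for C in cols]
                     for R in rows])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Cauchy-Binet: C_r(AB) = C_r(A) C_r(B), i.e. A |-> wedge^r A is functorial.
r = 2
assert np.allclose(compound(A @ B, r), compound(A, r) @ compound(B, r))

# Plucker coordinates of the row space of a 2 x 4 matrix:
# its six 2 x 2 maximal minors.
M = rng.standard_normal((2, 4))
print(compound(M, 2))
```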