Jacobi's equality between complementary minors of inverse matrices

The key word under which you will find this result in modern books is "Schur complement". Here is a self-contained proof. Assume without loss of generality that $I=J=\left\{1,2,\ldots,k\right\}$ for some $k$ (you may reorder rows and columns to achieve this). Let the matrix be $$ M=\begin{bmatrix}A & B\\ C & D\end{bmatrix}, $$ where the blocks $A$ and $D$ are square. Assume for now that $A$ is invertible --- the general case can be handled with a continuity argument. Let $S=D-CA^{-1}B$ be the so-called Schur complement of $A$ in $M$.

You may verify the following identity ("magic wand Schur complement formula") $$ \begin{bmatrix}A & B\\ C & D\end{bmatrix} = \begin{bmatrix}I & 0\\ CA^{-1} & I\end{bmatrix} \begin{bmatrix}A & 0\\ 0 & S\end{bmatrix} \begin{bmatrix}I & A^{-1}B\\ 0 & I\end{bmatrix}. \tag{1} $$ By taking determinants, $$\det M=\det A \det S. \tag{2}$$ Moreover, if you invert formula (1) factor by factor, you can see that the $(2,2)$ block of $M^{-1}$ is $S^{-1}$. So the claim to be proved is now exactly (2).

Note that the "magic formula" (1) can be derived via block Gaussian elimination and is much less magic than it looks at first sight.
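
If you want to sanity-check (1), (2) and the statement about the $(2,2)$ block numerically, here is a minimal Python sketch using numpy (the block sizes and the random test matrix are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 3, 4                               # sizes of the blocks A (k x k) and D (m x m)
M = rng.standard_normal((k + m, k + m))
A, B = M[:k, :k], M[:k, k:]
C, D = M[k:, :k], M[k:, k:]

S = D - C @ np.linalg.inv(A) @ B          # Schur complement of A in M

# (1): the "magic wand" factorization M = L * diag(A, S) * U
L = np.block([[np.eye(k), np.zeros((k, m))],
              [C @ np.linalg.inv(A), np.eye(m)]])
U = np.block([[np.eye(k), np.linalg.inv(A) @ B],
              [np.zeros((m, k)), np.eye(m)]])
middle = np.block([[A, np.zeros((k, m))],
                   [np.zeros((m, k)), S]])
assert np.allclose(L @ middle @ U, M)

# (2): det M = det A * det S
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))

# the (2,2) block of M^{-1} is S^{-1}
assert np.allclose(np.linalg.inv(M)[k:, k:], np.linalg.inv(S))
```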


Not everything worth saying about this question has been said yet -- at the very least, someone should write down the version without the absolute values; but more importantly, there are several other equally good proofs.

Notations and statement

Let me first state the result with proper signs and no absolute values.

Standing assumptions. The following notations will be used throughout this post:

  • Let $\mathbb{K}$ be a commutative ring. All matrices that appear in the following are matrices over $\mathbb{K}$.

  • Let $\mathbb{N}=\left\{ 0,1,2,\ldots\right\} $.

  • For every $n\in\mathbb{N}$, we let $\left[ n\right] $ denote the set $\left\{ 1,2,\ldots,n\right\} $.

  • Fix $n\in\mathbb{N}$.

  • Let $S_n$ denote the $n$-th symmetric group (i.e., the group of permutations of $\left[ n\right] $).

  • If $A\in\mathbb{K}^{n\times m}$ is an $n\times m$-matrix, if $I$ is a subset of $\left[ n \right]$, and if $J$ is a subset of $\left[ m \right]$, then $A_J^I$ is the $\left| I\right| \times\left| J\right| $-matrix defined as follows: Write $A$ in the form $A=\left( a_{i,j}\right) _{1\leq i\leq n,\ 1\leq j\leq m}$; write the set $I$ in the form $I = \left\{ i_1 < i_2 < \cdots < i_u \right\}$; write the set $J$ in the form $J = \left\{ j_1 < j_2 < \cdots < j_v \right\}$. Then, set $A_J^I = \left( a_{i_x, j_y} \right) _{1\leq x\leq u,\ 1\leq y\leq v}$. (Thus, roughly speaking, $A_J^I$ is the $\left| I\right| \times\left| J\right| $-matrix obtained from $A$ by removing all rows whose indices do not belong to $I$, and removing all columns whose indices do not belong to $J$.)

If $K$ is a subset of $\left[ n\right] $, then:

  • we use $\widetilde{K}$ to denote the complement $\left[ n\right] \setminus K$ of this subset in $\left[ n\right] $.

  • we use $\sum K$ to denote the sum of the elements of $K$.
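
To pin down the $A_J^I$, $\widetilde{K}$ and $\sum K$ notations, here is a small Python sketch (the helper names submatrix and complement are mine, not standard):

```python
import numpy as np

def submatrix(A, I, J):
    """Return A_J^I: keep only the rows indexed by I and the columns
    indexed by J (both given as sets of 1-based indices)."""
    rows = [i - 1 for i in sorted(I)]  # convert to 0-based indices
    cols = [j - 1 for j in sorted(J)]
    return A[np.ix_(rows, cols)]

def complement(K, n):
    """Return the complement [n] \\ K, written K-tilde in the text."""
    return set(range(1, n + 1)) - set(K)

# Python's built-in sum(K) plays the role of the notation "sum K".
```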

Now, we claim the following:

Theorem 1 (Jacobi's complementary minor formula). Let $A\in\mathbb{K} ^{n\times n}$ be an invertible $n\times n$-matrix. Let $I$ and $J$ be two subsets of $\left[ n\right] $ such that $\left| I\right| =\left| J\right| $. Then, \begin{align} \det\left( A_{J}^{I}\right) =\left( -1\right) ^{\sum I+\sum J}\det A\cdot\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J} }\right) . \end{align}
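
Before moving on to proofs, here is a quick numerical sanity check of Theorem 1 for $\mathbb{K}=\mathbb{R}$ (a sketch in the spirit of the helpers above; the specific $n$, $I$, $J$ and the random matrix are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))            # invertible with probability 1

I, J = {1, 4}, {2, 3}                      # any subsets with |I| = |J|
Ic = set(range(1, n + 1)) - I              # I-tilde
Jc = set(range(1, n + 1)) - J              # J-tilde

def sub(M, rows, cols):
    """M_cols^rows in the notation of the text (1-based index sets)."""
    return M[np.ix_([r - 1 for r in sorted(rows)],
                    [c - 1 for c in sorted(cols)])]

lhs = np.linalg.det(sub(A, I, J))                          # det(A_J^I)
rhs = ((-1) ** (sum(I) + sum(J)) * np.linalg.det(A)
       * np.linalg.det(sub(np.linalg.inv(A), Jc, Ic)))     # rows J~, cols I~
assert np.isclose(lhs, rhs)
```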

Three references

Here are three references to proofs of Theorem 1:

  • Theorem 1 is Lemma A.1 (e) in Sergio Caracciolo, Alan D. Sokal, Andrea Sportiello, Algebraic/combinatorial proofs of Cayley-type identities for derivatives of determinants and pfaffians, arXiv:1105.6270v2 (published in: Advances in Applied Mathematics 50, 474--594 (2013)). In the paragraph following Theorem A.16, a proof is given using what the authors call "Grassmann-Berezin integration" (despite its name, a purely algebraic mock-calculus on the exterior algebra of a vector space).

  • Theorem 1 is (1) in Pierre Lalonde, A non-commutative version of Jacobi's equality on the cofactors of a matrix, Discrete Mathematics 158 (1996), pp. 161--172. The goal of the paper is to generalize it to a (mildly) noncommutative setting.

  • Theorem 1 is Exercise 6.56 in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. The first proof I give is fairly classical and similar to the ones given by Federico Poloni and Denis Serre, but requires no WLOG assumptions (instead of using the Schur complement, I use a cheap generalization of it, which is Exercise 6.38 in the notes). The formal bookkeeping that leads to the sign $\left( -1\right) ^{\sum I+\sum J}$ takes up a lot of space, even though it is so easy to convince yourself of it by handwaving that you might not notice it requires any proof at all. The second proof is an expansion of an argument briefly outlined in D. Laksov, A. Lascoux, P. Pragacz, and A. Thorup, The LLPT Notes, old (2001) version (Chapter SCHUR, proof of (1.9)).

Note that every source uses different notations. What I call $A_J^I$ above is called $A_{IJ}$ in the paper by Caracciolo, Sokal and Sportiello, is called $A\left[ I,J\right] $ in Lalonde's paper, and is called $\operatorname*{sub}\nolimits_{w\left( I\right) }^{w\left( J\right) }A$ in my notes. Also, the $I$ and $J$ in the paper by Caracciolo, Sokal and Sportiello correspond to the $\widetilde{I}$ and $\widetilde{J}$ in Theorem 1 above.

A fourth proof

Let me now give a fourth proof, using exterior algebra. The proof is probably not new (the method is definitely not), but I find it instructive.

This proof would be a lot shorter if I did not care about the signs and only proved the weaker claim that $\det\left( A_J^I \right) = \pm \det A\cdot \det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J} }\right) $ for some choice of sign $\pm$. But this weaker claim is not as useful as Theorem 1 in its full version (in particular, it would not suffice to fill the gap in Macdonald's book that motivated this question).

The permutation $w\left( K\right) $

Let us first introduce some more notations:

If $K$ is a subset of $\left[ n\right] $, and if $k = \left|K\right|$, then we let $w\left( K\right) $ be the (unique) permutation $\sigma\in S_n$ whose first $k$ values $\sigma\left( 1\right) ,\sigma\left( 2\right) ,\ldots,\sigma\left( k\right) $ are the elements of $K$ in increasing order, and whose next $n-k$ values $\sigma\left( k+1\right) ,\sigma\left( k+2\right) ,\ldots ,\sigma\left( n\right) $ are the elements of $\widetilde{K}$ in increasing order.
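
For example, if $n=5$ and $K=\left\{ 2,5\right\} $, then $\widetilde{K}=\left\{ 1,3,4\right\} $, and $w\left( K\right) $ is the permutation with one-line notation $\left( 2,5,1,3,4\right) $: its first two values list $K$ in increasing order, and its remaining three values list $\widetilde{K}$ in increasing order.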

The first important property of $w\left( K\right) $ is the following fact:

Lemma 2. Let $K$ be a subset of $\left[ n\right] $. Then, $\left( -1\right) ^{w\left( K\right) }=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }$.

You don't need to prove Lemma 2 if you only care about the weaker version of Theorem 1 with the $\pm$ sign.
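
(As a quick sanity check: for $n=5$ and $K=\left\{ 2,5\right\} $ as in the example above, the permutation $w\left( K\right) =\left( 2,5,1,3,4\right) $ has exactly $4$ inversions, so $\left( -1\right) ^{w\left( K\right) }=\left( -1\right) ^{4}=1$; and indeed $\sum K-\left( 1+2\right) =7-3=4$ is even, in agreement with Lemma 2.)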

Proof of Lemma 2. Let $k=\left| K\right| $. Let $a_{1},a_{2} ,\ldots,a_{k}$ be the $k$ elements of $K$ in increasing order (with no repetitions). Let $b_{1},b_{2},\ldots,b_{n-k}$ be the $n-k$ elements of $\widetilde{K}$ in increasing order (with no repetitions). Let $\gamma =w\left( K\right) $. Then, the definition of $w\left( K\right) $ shows that the first $k$ values $\gamma\left( 1\right) ,\gamma\left( 2\right) ,\ldots,\gamma\left( k\right) $ of $\gamma$ are the elements of $K$ in increasing order (that is, $a_{1},a_{2},\ldots,a_{k}$), and the next $n-k$ values $\gamma\left( k+1\right) ,\gamma\left( k+2\right) ,\ldots ,\gamma\left( n\right) $ of $\gamma$ are the elements of $\widetilde{K}$ in increasing order (that is, $b_{1},b_{2},\ldots,b_{n-k}$). In other words, \begin{align} \left( \gamma\left( 1\right) ,\gamma\left( 2\right) ,\ldots ,\gamma\left( n\right) \right) =\left( a_{1},a_{2},\ldots,a_{k} ,b_{1},b_{2},\ldots,b_{n-k}\right) . \end{align}

Now, you can obtain the list $\left( \gamma\left( 1\right) ,\gamma\left( 2\right) ,\ldots,\gamma\left( n\right) \right) $ from the list $\left( 1,2,\ldots,n\right) $ by successively switching adjacent entries, as follows:

  • First, move the element $a_{1}$ to the front of the list, by successively switching it with each of the $a_{1}-1$ entries smaller than it.

  • Then, move the element $a_{2}$ to the second position, by successively switching it with each of the $a_{2}-2$ entries (other than $a_{1}$) smaller than it.

  • Then, move the element $a_{3}$ to the third position, by successively switching it with each of the $a_{3}-3$ entries (other than $a_{1}$ and $a_{2}$) smaller than it.

  • And so on, until you finally move the element $a_{k}$ to the $k$-th position.

More formally, you are iterating over all $i\in\left\{ 1,2,\ldots,k\right\} $ (in increasing order), each time moving the element $a_{i}$ to the $i$-th position in the list, by successively switching it with each of the $a_{i}-i$ entries (other than $a_{1},a_{2},\ldots,a_{i-1}$) smaller than it.

At the end, the first $k$ positions of the list are filled with $a_{1} ,a_{2},\ldots,a_{k}$ (in this order), whereas the remaining $n-k$ positions are filled with the remaining entries $b_{1},b_{2},\ldots,b_{n-k}$ (again, in this order, because the switches have never disrupted their strictly-increasing relative order). Thus, at the end, your list is precisely $\left( a_{1},a_{2},\ldots,a_{k},b_{1},b_{2},\ldots,b_{n-k}\right) =\left( \gamma\left( 1\right) ,\gamma\left( 2\right) ,\ldots,\gamma\left( n\right) \right) $. You have used a total of \begin{align} & \left( a_{1}-1\right) +\left( a_{2}-2\right) +\cdots+\left( a_{k}-k\right) \\ & = \underbrace{\left( a_{1}+a_{2}+\cdots+a_{k}\right) }_{\substack{=\sum K\\\text{(by the definition of }a_{1},a_{2},\ldots,a_{k}\text{)} }}-\underbrace{\left( 1+2+\cdots+k\right) }_{\substack{=1+2+\cdots +\left| K\right| \\\text{(since }k=\left| K\right| \text{)}}} \\ & =\sum K-\left( 1+2+\cdots+\left| K\right| \right) \end{align} switches. Thus, you have obtained the list $\left( \gamma\left( 1\right) ,\gamma\left( 2\right) ,\ldots,\gamma\left( n\right) \right) $ from the list $\left( 1,2,\ldots,n\right) $ by $\sum K-\left( 1+2+\cdots+\left| K\right| \right) $ switches of adjacent entries. In other words, the permutation $\gamma$ is a composition of $\sum K-\left( 1+2+\cdots+\left| K\right| \right) $ simple transpositions (where a "simple transposition" means a transposition switching $u$ with $u+1$ for some $u$). Hence, it has sign $\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }$. This proves Lemma 2. $\blacksquare$
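
For the skeptical, here is a little Python sketch that verifies Lemma 2 for every subset $K$ of $\left[ n\right] $ for a small $n$, computing the sign independently by counting inversions:

```python
from itertools import combinations

def sign(perm):
    """Sign of a permutation given in one-line notation."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

n = 6
for k in range(n + 1):
    for K in combinations(range(1, n + 1), k):
        Kc = [x for x in range(1, n + 1) if x not in K]
        w = list(K) + Kc                      # one-line notation of w(K)
        assert sign(w) == (-1) ** (sum(K) - k * (k + 1) // 2)
```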

Exterior algebras

Now, let's introduce some more notations and state some well-known properties concerning exterior algebras.

For any $\mathbb{K}$-module $V$, we let $\wedge V$ denote the exterior algebra of $V$. The multiplication in this exterior algebra will be written as juxtaposition (i.e., we will write $ab$ for the product of two elements $a$ and $b$ of $\wedge V$) or with a $\cdot$ sign (i.e., we will write $a\cdot b$ for this product).

If $k\in\mathbb{N}$ and if $V$ is a $\mathbb{K}$-module, then $\wedge^{k}V$ shall mean the $k$-th exterior power of $V$. If $k\in\mathbb{N}$, if $V$ and $W$ are two $\mathbb{K}$-modules, and if $f:V\rightarrow W$ is a $\mathbb{K} $-linear map, then the $\mathbb{K}$-linear map $\wedge^{k}V\rightarrow \wedge^{k}W$ canonically induced by $f$ will be denoted by $\wedge^{k}f$. It is well-known that if $V$ and $W$ are two $\mathbb{K}$-modules, if $f:V\rightarrow W$ is a $\mathbb{K}$-linear map, then \begin{align} \left( \wedge^{k}f\right) \left( a\right) \cdot\left( \wedge^{\ell}f\right) \left( b\right) =\left( \wedge^{k+\ell}f\right) \left( ab\right) \label{darij1.eq1} \tag{1} \end{align} for any $k\in\mathbb{N}$, $\ell\in\mathbb{N}$, $a\in\wedge^{k}V$ and $b\in\wedge^{\ell}V$.

If $V$ is a $\mathbb{K}$-module, then \begin{align} uv=\left( -1\right) ^{k\ell}vu \label{darij1.eq2} \tag{2} \end{align} for any $k\in\mathbb{N}$, $\ell\in\mathbb{N}$, $u\in\wedge^{k}V$ and $v\in\wedge^{\ell}V$.
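
For instance, if $a,b,v\in V=\wedge^{1}V$, then $u=ab\in\wedge^{2}V$ satisfies $uv=\left( -1\right) ^{2\cdot1}vu=vu$; concretely, $abv=-avb=vab$, since each swap of two degree-$1$ elements contributes a factor of $-1$.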

For any $u\in\mathbb{N}$, we consider $\mathbb{K}^{u}$ as the $\mathbb{K} $-module of column vectors with $u$ entries.

For any $u\in\mathbb{N}$ and $v\in\mathbb{N}$, and any $v\times u$-matrix $B\in\mathbb{K}^{v\times u}$, we define $f_{B}$ to be the $\mathbb{K}$-linear map $\mathbb{K}^{u}\rightarrow\mathbb{K}^{v}$ sending each $x\in\mathbb{K} ^{u}$ to $Bx\in\mathbb{K}^{v}$. When $u=v$, this $\mathbb{K}$-linear map $f_{B}$ satisfies $\det\left( f_{B}\right) =\det B$. The map $f_{B}$ is often identified with the matrix $B$ (though we will not identify it with $B$ here).

Here is another known fact:

Proposition 2a. Let $f:\mathbb{K}^{n}\rightarrow\mathbb{K}^{n}$ be a $\mathbb{K}$-linear map. The map $\wedge^{n}f:\wedge^{n}\left( \mathbb{K} ^{n}\right) \rightarrow\wedge^{n}\left( \mathbb{K}^{n}\right) $ is multiplication by $\det f$. In other words, every $z\in\wedge^{n}\left( \mathbb{K}^{n}\right) $ satisfies \begin{align} \left( \wedge^{n}f\right) \left( z\right) =\left( \det f\right) z . \label{darij1.eq3} \tag{3} \end{align}

Let $\left( e_{1},e_{2},\ldots,e_{n}\right) $ be the standard basis of the $\mathbb{K}$-module $\mathbb{K}^{n}$. (Thus, $e_i$ is the column vector whose $i$-th entry is $1$ and whose all other entries are $0$.)
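
For example, Proposition 2a can be checked by hand for $n=2$: if $f=f_{B}$ for $B=\begin{pmatrix} a & b\\ c & d \end{pmatrix}$, then $\left( \wedge^{2}f\right) \left( e_{1}e_{2}\right) =\left( ae_{1}+ce_{2}\right) \left( be_{1}+de_{2}\right) =\left( ad-bc\right) e_{1}e_{2}=\left( \det B\right) e_{1}e_{2}$.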

For every subset $K$ of $\left[ n\right] $, we define $e_K\in \wedge^{\left| K\right| }\left( \mathbb{K}^{n}\right) $ to be the element $e_{k_{1}}\wedge e_{k_{2}}\wedge\cdots\wedge e_{k_{\left| K\right| }}$, where $K$ is written in the form $K=\left\{ k_{1} <k_{2}<\cdots<k_{\left| K\right| }\right\} $.

For every $k\in\mathbb{N}$ and every set $S$, we let $\mathcal{P}_{k}\left( S \right) $ denote the set of all $k$-element subsets of $S$.

It is well-known that, for every $k\in\mathbb{N}$, the family $\left( e_K\right) _{K\in\mathcal{P}_{k}\left( \left[ n\right] \right) }$ is a basis of the $\mathbb{K}$-module $\wedge^{k}\left( \mathbb{K}^{n}\right) $. Applying this to $k=n$, we conclude that the family $\left( e_K\right) _{K\in\mathcal{P}_{n}\left( \left[ n\right] \right) }$ is a basis of the $\mathbb{K}$-module $\wedge^{n}\left( \mathbb{K}^{n}\right) $. Since this family $\left( e_K\right) _{K\in\mathcal{P}_{n}\left( \left[ n\right] \right) }$ is the one-element family $\left( e_{\left[ n\right] }\right) $ (because the only $K\in\mathcal{P}_{n}\left( \left[ n\right] \right) $ is the set $\left[ n\right] $), this rewrites as follows: The one-element family $\left( e_{\left[ n\right] }\right) $ is a basis of the $\mathbb{K}$-module $\wedge^{n}\left( \mathbb{K}^{n}\right) $.

If $B$ is an $n\times n$-matrix and $k\in\mathbb{N}$, then evaluating the map $\wedge^{k}f_{B}$ on the elements of the basis $\left( e_K\right) _{K\in\mathcal{P}_{k}\left( \left[ n\right] \right) }$ of $\wedge ^{k}\left( \mathbb{K}^{n}\right) $, and expanding the results again in this basis gives rise to coefficients which are the $k\times k$-minors of $B$. More precisely:

Proposition 3. Let $B\in\mathbb{K}^{n\times n}$, $k\in\mathbb{N}$ and $J\in\mathcal{P}_{k}\left( \left[ n\right] \right) $. Then, \begin{align} \left( \wedge^{k}f_{B}\right) \left( e_{J}\right) = \sum\limits_{I\in\mathcal{P}_{k}\left( \left[ n\right] \right) }\det\left( B_{J} ^{I}\right) e_{I} . \end{align}

(This can be generalized: If $u\in\mathbb{N}$, $v \in \mathbb{N}$, $B\in\mathbb{K}^{u\times v}$, $k\in\mathbb{N}$ and $J\in\mathcal{P}_{k}\left( \left[ v\right] \right) $, then $\left( \wedge^{k}f_{B}\right) \left( e_{J}\right) = \sum\limits_{I\in\mathcal{P}_{k}\left( \left[ u\right] \right) }\det\left( B_{J} ^{I}\right) e_{I}$, where the elements $e_{J}\in\wedge^{k}\left( \mathbb{K}^{v}\right) $ and $e_{I}\in\wedge^{k}\left( \mathbb{K}^{u}\right) $ are defined as before but with $v$ and $u$ instead of $n$.)
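
In matrix terms, Proposition 3 says that the matrix representing $\wedge^{k}f_{B}$ in the basis $\left( e_K\right) _{K\in\mathcal{P}_{k}\left( \left[ n\right] \right) }$ is the $k$-th compound matrix of $B$. Combined with the functoriality of $\wedge^{k}$ (that is, $\wedge^{k}\left( f\circ g\right) =\left( \wedge^{k}f\right) \circ\left( \wedge^{k}g\right) $), this yields the multiplicativity of compound matrices, which is equivalent to the Cauchy-Binet formula. Here is a small numerical sketch of that consequence (the function name compound is mine):

```python
import numpy as np
from itertools import combinations

def compound(M, k):
    """k-th compound matrix: the (I, J)-entry is det(M_J^I), with the
    k-element subsets I (rows) and J (columns) ordered lexicographically."""
    n = M.shape[0]
    subsets = list(combinations(range(n), k))
    return np.array([[np.linalg.det(M[np.ix_(I, J)]) for J in subsets]
                     for I in subsets])

rng = np.random.default_rng(2)
n, k = 4, 2
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# functoriality of wedge^k, i.e. the Cauchy-Binet formula:
assert np.allclose(compound(A @ B, k), compound(A, k) @ compound(B, k))
```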

Extracting minors from the exterior algebra

Now, we shall need a simple lemma:

Lemma 4. Let $K$ be a subset of $\left[ n\right] $. Then, \begin{align} e_K e_{\widetilde{K}}=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }e_{\left[ n\right] } . \end{align}
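
For instance, with $n=5$ and $K=\left\{ 2,5\right\} $ as before, we have $e_K e_{\widetilde{K}}=e_{2}e_{5}e_{1}e_{3}e_{4}$, and sorting the indices into increasing order requires $4$ swaps of adjacent factors (each contributing a factor of $-1$); thus, $e_K e_{\widetilde{K}}=\left( -1\right) ^{4}e_{\left[ 5\right] }=e_{\left[ 5\right] }$, in agreement with $\sum K-\left( 1+2\right) =7-3=4$.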

Proof of Lemma 4. Let $k = \left|K\right|$. Let $\sigma$ be the permutation $w\left( K\right) \in S_n$ defined above. Its first $k$ values $\sigma\left( 1\right) ,\sigma\left( 2\right) ,\ldots,\sigma\left( k\right) $ are the elements of $K$ in increasing order; thus, $e_{\sigma\left( 1\right) }\wedge e_{\sigma\left( 2\right) }\wedge\cdots\wedge e_{\sigma\left( k\right) }=e_K$. Its next $n-k$ values $\sigma\left( k+1\right) ,\sigma\left( k+2\right) ,\ldots,\sigma\left( n\right) $ are the elements of $\widetilde{K}$ in increasing order; thus, $e_{\sigma\left( k+1\right) }\wedge e_{\sigma\left( k+2\right) }\wedge\cdots\wedge e_{\sigma\left( n\right) }=e_{\widetilde{K}}$.

From $\sigma=w\left( K\right) $, we obtain $\left( -1\right) ^{\sigma }=\left( -1\right) ^{w\left( K\right) }=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }$ (by Lemma 2).

Now, it is well-known that \begin{align} e_{\sigma\left( 1\right) }\wedge e_{\sigma\left( 2\right) }\wedge \cdots\wedge e_{\sigma\left( n\right) }=\left( -1\right) ^{\sigma }\underbrace{e_{1}\wedge e_{2}\wedge\cdots\wedge e_{n}}_{=e_{\left[ n\right] }}=\left( -1\right) ^{\sigma}e_{\left[ n\right] } . \end{align} Hence, \begin{align} \left( -1\right) ^{\sigma}e_{\left[ n\right] } & = e_{\sigma\left( 1\right) }\wedge e_{\sigma\left( 2\right) }\wedge\cdots\wedge e_{\sigma\left( n\right) } \\ & = \underbrace{\left( e_{\sigma\left( 1\right) }\wedge e_{\sigma\left( 2\right) }\wedge\cdots\wedge e_{\sigma\left( k\right) }\right) }_{=e_K }\underbrace{\left( e_{\sigma\left( k+1\right) }\wedge e_{\sigma\left( k+2\right) }\wedge\cdots\wedge e_{\sigma\left( n\right) }\right) }_{=e_{\widetilde{K}}} \\ & = e_K e_{\widetilde{K}} . \end{align} Since $\left( -1\right) ^{\sigma}=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }$, this rewrites as $\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }e_{\left[ n\right] }= e_K e_{\widetilde{K}}$. This proves Lemma 4. $\blacksquare$

We can combine Proposition 3 and Lemma 4 to obtain the following fact:

Corollary 5. Let $B\in\mathbb{K}^{n\times n}$, $k\in\mathbb{N}$ and $J\in\mathcal{P}_{k}\left( \left[ n\right] \right) $. Then, every $K\in\mathcal{P}_{k}\left( \left[ n\right] \right) $ satisfies \begin{align} \left( \wedge^{k}f_{B}\right) \left( e_{J}\right) e_{\widetilde{K} }=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+k\right) }\det\left( B_{J}^{K}\right) e_{\left[ n\right] } . \end{align}

Proof of Corollary 5. Let $K \in \mathcal{P}_{k}\left( \left[ n\right] \right) $. Let $I\in\mathcal{P}_{k}\left( \left[ n\right] \right) $ be such that $I\neq K$. Then, $I\not \subseteq K$ (since the sets $I$ and $K$ have the same size $k$). Hence, there exists some $z\in I$ such that $z\notin K$. Consider this $z$. We have $z\in I$ and $z\in\widetilde{K}$ (since $z\notin K$). Hence, both $e_{I}$ and $e_{\widetilde{K}}$ are "wedge products" containing the factor $e_{z}$; therefore, the product $e_{I} e_{\widetilde{K}}$ is a "wedge product" containing this factor twice. Thus, $e_{I}e_{\widetilde{K}}=0$.

Now, forget that we fixed $I$. We thus have proven that \begin{align} e_{I}e_{\widetilde{K}}=0 \text{ for every } I\in\mathcal{P}_{k}\left( \left[ n\right] \right) \text{ satisfying } I\neq K . \end{align} Hence, \begin{align} \sum\limits_{\substack{I\in\mathcal{P}_{k}\left( \left[ n\right] \right) ;\\I\neq K}}\det\left( B_{J}^{I}\right) \underbrace{e_{I}e_{\widetilde{K}} }_{=0}=0 . \label{darij1.eq4} \tag{4} \end{align} Proposition 3 yields \begin{align} \left( \wedge^{k}f_{B}\right) \left( e_{J}\right) =\sum\limits_{I\in \mathcal{P}_{k}\left( \left[ n\right] \right) }\det\left( B_{J} ^{I}\right) e_{I} . \end{align} Multiplying both sides of this equality by $e_{\widetilde{K}}$ from the right, we find \begin{align} & \left( \wedge^{k}f_{B}\right) \left( e_{J}\right) e_{\widetilde{K}} =\sum\limits_{I\in\mathcal{P}_{k}\left( \left[ n\right] \right) }\det\left( B_{J}^{I}\right) e_{I}e_{\widetilde{K}} \\ & = \det\left( B_{J}^{K}\right) e_K e_{\widetilde{K}}+\sum\limits_{\substack{I\in \mathcal{P}_{k}\left( \left[ n\right] \right) ;\\I\neq K}}\det\left( B_{J}^{I}\right) e_{I}e_{\widetilde{K}} \\ & = \det\left( B_{J}^{K}\right) \underbrace{e_K e_{\widetilde{K}} }_{\substack{=\left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }e_{\left[ n\right] }\\\text{(by Lemma 4)}}} \qquad \text{(by \eqref{darij1.eq4})} \\ & = \left( -1\right) ^{\sum K-\left( 1+2+\cdots+\left| K\right| \right) }\det\left( B_{J}^{K}\right) e_{\left[ n\right] } \\ & = \left( -1\right) ^{\sum K-\left( 1+2+\cdots+k\right) }\det\left( B_{J}^{K}\right) e_{\left[ n\right] } \end{align} (since $\left| K\right| =k$). This proves Corollary 5. $\blacksquare$

Corollary 5 is rather helpful when it comes to extracting a specific minor of a matrix $B$ from the maps $\wedge^{k}f_{B}$.

Proof of Theorem 1

Proof of Theorem 1. Set $k=\left| I\right| =\left| J\right| $. Notice that $\left| \widetilde{I}\right| =n-k$ (since $\left| I\right| =k$) and $\left| \widetilde{J}\right| =n-k$ (similarly).

Define $y\in\wedge ^{n-k}\left( \mathbb{K}^{n}\right) $ by $y=\left( \wedge^{n-k}f_{A^{-1} }\right) \left( e_{\widetilde{I}}\right) $.

The maps $f_{A}$ and $f_{A^{-1}}$ are mutually inverse (since the map $\mathbb{K}^{n\times n}\rightarrow\operatorname*{End}\left( \mathbb{K} ^{n}\right) ,\ B\mapsto f_{B}$ is a ring homomorphism). Hence, the maps $\wedge^{n-k}f_{A}$ and $\wedge^{n-k}f_{A^{-1}}$ are mutually inverse (since $\wedge^{n-k}$ is a functor). Thus, $\left( \wedge^{n-k}f_{A}\right) \circ\left( \wedge^{n-k}f_{A^{-1}}\right) =\operatorname*{id}$. Now, $y=\left( \wedge^{n-k}f_{A^{-1}}\right) \left( e_{\widetilde{I}}\right) $, so that \begin{align} \left( \wedge^{n-k}f_{A}\right) \left( y\right) =\left( \wedge ^{n-k}f_{A}\right) \left( \left( \wedge^{n-k}f_{A^{-1}}\right) \left( e_{\widetilde{I}}\right) \right) =\underbrace{\left( \left( \wedge ^{n-k}f_{A}\right) \circ\left( \wedge^{n-k}f_{A^{-1}}\right) \right) }_{=\operatorname*{id}}\left( e_{\widetilde{I}}\right) =e_{\widetilde{I}} . \end{align} But \eqref{darij1.eq1} (applied to $V=\mathbb{K}^{n}$, $W=\mathbb{K}^{n}$, $f=f_{A}$, $\ell=n-k$, $a=e_{J}$ and $b=y$) yields \begin{align} \left( \wedge^{k}f_{A}\right) \left( e_{J}\right) \cdot\left( \wedge^{n-k}f_{A}\right) \left( y\right) =\left( \wedge^{n}f_{A}\right) \left( e_{J}y\right) . \end{align} Thus, \begin{align} & \left( \wedge^{n}f_{A}\right) \left( e_{J}y\right) =\left( \wedge ^{k}f_{A}\right) \left( e_{J}\right) \cdot\underbrace{\left( \wedge ^{n-k}f_{A}\right) \left( y\right) }_{=e_{\widetilde{I}}} \\ & =\left( \wedge^{k}f_{A}\right) \left( e_{J}\right) e_{\widetilde{I}} = \left( -1\right) ^{\sum I-\left( 1+2+\cdots+k\right) }\det\left( A_J^I \right) e_{\left[ n\right] } \end{align} (by Corollary 5, applied to $B=A$ and $K=I$).

Hence, \begin{align} & \left( -1\right) ^{\sum I-\left( 1+2+\cdots+k\right) }\det\left( A_J^I \right) e_{\left[ n\right] } \\ & = \left( \wedge^{n}f_{A}\right) \left( e_{J}y\right) =\underbrace{\left( \det f_{A}\right) }_{=\det A}e_{J}y \\ & \qquad \text{(by \eqref{darij1.eq3}, applied to $f=f_{A}$ and $z=e_{J}y$)} \\ & = \left( \det A\right) e_{J}y . \label{darij1.eq5} \tag{5} \end{align} But \eqref{darij1.eq2} (applied to $\ell=n-k$, $u=e_{J}$ and $v=y$) yields \begin{align} & e_{J} y = \left(-1\right)^{k \left(n-k\right)} \underbrace{y}_{=\left( \wedge^{n-k}f_{A^{-1}}\right) \left( e_{\widetilde{I}}\right) } \underbrace{e_{J}} _{=e_{\widetilde{\widetilde{J}}}} \\ & =\left( -1\right) ^{k\left( n-k\right) }\underbrace{\left( \wedge ^{n-k}f_{A^{-1}}\right) \left( e_{\widetilde{I}}\right) e_{\widetilde{\widetilde{J}}}}_{\substack{=\left( -1\right) ^{\sum \widetilde{J}-\left( 1+2+\cdots+\left( n-k\right) \right) }\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) e_{\left[ n\right] }\\\text{(by Corollary 5, applied to }A^{-1}\text{, }n-k\text{, }\widetilde{I}\text{ and }\widetilde{J}\\\text{instead of }B\text{, }k\text{, }J\text{ and }K\text{)}}} \\ & =\left( -1\right) ^{k\left( n-k\right) }\left( -1\right) ^{\sum \widetilde{J}-\left( 1+2+\cdots+\left( n-k\right) \right) }\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) e_{\left[ n\right] } \\ & =\left( -1\right) ^{k\left( n-k\right) +\sum\widetilde{J}-\left( 1+2+\cdots+\left( n-k\right) \right) }\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) e_{\left[ n\right] } . \label{darij1.eq6} \tag{6} \end{align} But \begin{align} & k\left( n-k\right) +\underbrace{\sum\widetilde{J}}_{=\sum\left\{ 1,2,\ldots,n\right\} -\sum J}-\underbrace{\left( 1+2+\cdots+\left( n-k\right) \right) }_{=\sum\left\{ 1,2,\ldots,n-k\right\} } \\ & = k\left( n-k\right) +\sum\left\{ 1,2,\ldots,n\right\} -\sum J-\sum\left\{ 1,2,\ldots,n-k\right\} \\ & = \underbrace{\sum\left\{ 1,2,\ldots,n\right\} -\sum\left\{ 1,2,\ldots ,n-k\right\} }_{\substack{=\sum\left\{ n-k+1,n-k+2,\ldots,n\right\} \\=k\left( n-k\right) +\sum\left\{ 1,2,\ldots,k\right\} \\=k\left( n-k\right) +\left( 1+2+\cdots+k\right) }}+k\left( n-k\right) -\sum J \\ & = 2k\left( n-k\right) +\left( 1+2+\cdots+k\right) -\sum J \\ & \equiv-\left( 1+2+\cdots+k\right) -\sum J \mod 2 . \end{align} Hence, \begin{align} \left( -1\right) ^{k\left( n-k\right) +\sum\widetilde{J}-\left( 1+2+\cdots+\left( n-k\right) \right) }=\left( -1\right) ^{-\left( 1+2+\cdots+k\right) -\sum J} . \end{align} Thus, \eqref{darij1.eq6} rewrites as \begin{align} e_{J}y=\left( -1\right) ^{-\left( 1+2+\cdots+k\right) -\sum J}\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) e_{\left[ n\right] } . \end{align} Hence, \eqref{darij1.eq5} rewrites as \begin{align} & \left( -1\right) ^{\sum I-\left( 1+2+\cdots+k\right) }\det\left( A_J^I \right) e_{\left[ n\right] } \\ & = \left( \det A\right) \left( -1\right) ^{-\left( 1+2+\cdots+k\right) -\sum J}\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J} }\right) e_{\left[ n\right] } . 
\end{align} We can "cancel" $e_{\left[ n\right] }$ from this equality (because if $\lambda$ and $\mu$ are two elements of $\mathbb{K}$ satisfying $\lambda e_{\left[ n\right] }=\mu e_{\left[ n\right] }$, then $\lambda=\mu$), and thus obtain \begin{align} & \left( -1\right) ^{\sum I-\left( 1+2+\cdots+k\right) }\det\left( A_J^I \right) \\ & = \left( \det A\right) \left( -1\right) ^{-\left( 1+2+\cdots+k\right) -\sum J}\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J} }\right) . \end{align} Dividing this equality by $\left( -1\right) ^{\sum I-\left( 1+2+\cdots +k\right) }$, we obtain \begin{align} & \det\left( A_J^I \right) \\ & = \left(\det A\right) \dfrac{\left( -1\right) ^{-\left( 1+2+\cdots+k\right) -\sum J}}{\left( -1\right) ^{\sum I-\left( 1+2+\cdots+k\right) }}\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) \\ & = \left( -1\right) ^{\sum I+\sum J}\det A\cdot\det\left( \left( A^{-1}\right) _{\widetilde{I}}^{\widetilde{J}}\right) . \end{align} This proves Theorem 1. $\blacksquare$

I have to admit this proof looked far shorter on the scratch paper on which it was conceived than it has ended up here...


This is nothing but the Schur complement formula. See my book Matrices: Theory and Applications, 2nd ed., Springer-Verlag GTM 216, page 41.

Up to a permutation of rows and columns, we may assume that $I=J=[1,p]$. Let us write blockwise $$A=\begin{pmatrix} W & X \\ Y & Z \end{pmatrix}.$$ Assume WLOG that $W$ is invertible. On the one hand, the Schur complement formula gives $$\det A=\det W\cdot\det(Z-YW^{-1}X).$$ On the other hand, we have $$A^{-1}=\begin{pmatrix} \cdot & \cdot \\ \cdot & (Z-YW^{-1}X)^{-1} \end{pmatrix},$$ which gives the desired result.

These formulas are obtained by factorizing $A$ into $LU$ (namely, $L= \begin{pmatrix} I_* & 0 \\ YW^{-1} & I_* \end{pmatrix}$ and $U = \begin{pmatrix} W & X \\ 0 & Z-YW^{-1}X \end{pmatrix}$, with the $I_*$ being identity matrices of appropriate size).