How could we define the factorial of a matrix?

For any holomorphic function $G$, we can define a corresponding matrix function $\tilde{G}$ via (a formal version of) the Cauchy Integral Formula: We set $$\tilde{G}(B) := \frac{1}{2 \pi i} \oint_C G(z) (z I - B)^{-1} \, dz ,$$ where $C$ is an (arbitrary) anticlockwise curve that encloses (once each) the eigenvalues of the (square) matrix $B$. Note that the condition on $C$ means that restrictions on the domain of $G$ determine restrictions on the domain of $\tilde{G}$.

So, we could make sense of the factorial of a matrix if we had a holomorphic function that restricted to the factorial function $n \mapsto n!$ on nonnegative integers. Fortunately, there is such a function: The function $$F: z \mapsto \Gamma(z + 1),$$ where $\Gamma$ denotes the Gamma function, satisfies $F(n) = n!$ for nonnegative integers $n$. (There is a sense, made precise by the Bohr–Mollerup theorem, in which $F$ is the best possible function extending the factorial, though strictly speaking that theorem characterizes the real Gamma function, which our $\Gamma$ extends.) Thus, we may define the factorial of a (square) matrix $B$ by substituting the second display equation above into the first: $$\color{#df0000}{\boxed{B! := \tilde{F}(B) = \frac{1}{2 \pi i} \oint_C \Gamma(z + 1) (z I - B)^{-1} \, dz}} .$$

The (scalar) Cauchy Integral Formula shows that this formulation has the obviously desirable property that for $1 \times 1$ matrices it recovers the usual factorial; more precisely, $\pmatrix{n}! = \pmatrix{n!}$ for nonnegative integers $n$.

Alternatively, one could define a matrix function $\tilde G$ (and in particular define $B!$) by formally evaluating the power series $\sum_{k = 0}^{\infty} a_k (z - z_0)^k$ for $G$ about some point $z_0$, that is, declaring $\tilde G(B) := \sum_{k = 0}^{\infty} a_k (B - z_0 I)^k$, but in general this definition is more restrictive than the Cauchy Integral Formula definition, simply because the power series need not converge everywhere (where it does converge, it converges to the value given by the integral formula). Indeed, we cannot use a power series for $F$ to evaluate $A!$ directly for our particular $A$: The function $F$ has a pole on the line segment in $\Bbb C$ whose endpoints are the eigenvalues of $A$, so there is no open disk in the domain of $F$ containing all of the eigenvalues of $A$, and hence there is no basepoint $z_0$ for which the series for $\tilde F$ converges at $A$.
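(As a concrete illustration of the integral definition, here is a minimal numerical sketch in Julia: it approximates the boxed contour integral with $C$ taken to be a union of two small circles, one around each eigenvalue of the matrix $A$ from the example below, so that $C$ avoids the poles of $F$ at $z = -1, -2, \dots$. The helper `cauchy_piece` and all numerical parameters are my own choices, not from any standard routine.)

    using LinearAlgebra, SpecialFunctions

    # Trapezoid-rule approximation of (1/2πi) ∮ G(z) (zI - B)⁻¹ dz over a
    # circle of radius r centered at c; the trapezoid rule converges very
    # fast for smooth periodic integrands such as this one.
    function cauchy_piece(G, B, c, r; N = 400)
        acc = zeros(ComplexF64, size(B))
        for k in 0:N-1
            z  = c + r * cis(2π * k / N)              # point on the circle
            dz = (2π * im * r / N) * cis(2π * k / N)  # z'(t) Δt
            acc += G(z) * inv(z * I - B) * dz
        end
        return acc / (2π * im)
    end

    A = [1.0 3.0; 2.0 1.0]
    F(z) = gamma(z + 1)
    # Two circles of radius 0.3 around the eigenvalues 1 ± √6; both avoid
    # the poles of F on the negative real axis.
    factA = cauchy_piece(F, A, 1 - sqrt(6), 0.3) + cauchy_piece(F, A, 1 + sqrt(6), 0.3)
    real.(factA)   # imaginary parts vanish up to roundoff

Since the integrand is smooth and periodic in the circle parameter, a few hundred quadrature points per circle should already reproduce the closed-form answer derived below to near machine precision.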

We can define $\tilde G$ in yet another way, which coincides appropriately with the above definitions but which is more amenable to explicit computation: If $B$ is diagonalizable, so that we can decompose $$B = P \pmatrix{\lambda_1 & & \\ & \ddots & \\ & & \lambda_n} P^{-1} ,$$ for eigenvalues $\lambda_i$ of $B$ and some matrix $P$, we define $$\tilde{G}(B) := P \pmatrix{G(\lambda_1) & & \\ & \ddots & \\ & & G(\lambda_n)} P^{-1} .$$ Indeed, by substituting and rearranging, we can see that this coincides, at least formally, with the power series characterization. There is a similar but more complicated formula for nondiagonalizable $B$ that I won't write out here but which is given in the Wikipedia article Matrix function.
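(For concreteness, here is a minimal Julia sketch of this diagonalization recipe; the helper names `matfun` and `matfactorial` are my own, and the sketch assumes the input is diagonalizable with a reasonably well-conditioned eigenvector matrix.)

    using LinearAlgebra, SpecialFunctions

    # Apply a scalar function G to a diagonalizable matrix B via the
    # eigendecomposition B = P * Diagonal(λ) * P⁻¹.
    function matfun(G, B)
        λ, P = eigen(B)   # eigenvalues λ and eigenvector matrix P
        return P * Diagonal(G.(λ)) * inv(P)
    end

    matfactorial(B) = matfun(z -> gamma(z + 1), B)

Applied to the matrix of the example below, `matfactorial` should reproduce the boxed closed form numerically.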

Example The given matrix $A$ has distinct eigenvalues $\lambda_{\pm} = 1 \pm \sqrt{6}$, and so can be diagonalized as $$P \pmatrix{1 - \sqrt{6} & 0 \\ 0 & 1 + \sqrt{6}} P^{-1} ;$$ indeed, we can take $$P = \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}}.$$

Now, $F(\lambda_{\pm}) = \Gamma(\lambda_{\pm} + 1) = \Gamma(2 \pm \sqrt{6}),$ and putting this all together gives that \begin{align*}\pmatrix{1 & 3 \\ 2 & 1} ! = \tilde{F}(A) &= P \pmatrix{F(\lambda_-) & 0 \\ 0 & F(\lambda_+)} P^{-1} \\ &= \pmatrix{\tfrac{1}{2} & \tfrac{1}{2} \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}}} \pmatrix{\Gamma (2 - \sqrt{6}) & 0 \\ 0 & \Gamma (2 + \sqrt{6})} \pmatrix{1 & -\frac{\sqrt{3}}{\sqrt{2}} \\ 1 & \frac{\sqrt{3}}{\sqrt{2}}} .\end{align*} Multiplying this out gives $$\color{#df0000}{\boxed{\pmatrix{1 & 3 \\ 2 & 1} ! = \pmatrix{\frac{1}{2} \alpha_+ & \frac{\sqrt{3}}{2 \sqrt{2}} \alpha_- \\ \frac{1}{\sqrt{6}} \alpha_- & \frac{1}{2} \alpha_+}}} ,$$ where $$\color{#df0000}{\alpha_{\pm} = \Gamma(2 + \sqrt{6}) \pm \Gamma(2 - \sqrt{6})}. $$

It's perhaps not very illuminating, but $A!$ has numerical value $$ \pmatrix{1 & 3 \\ 2 & 1}! \approx \pmatrix{3.62744 & 8.84231 \\ 5.89488 & 3.62744} . $$

To carry out these computations, one can use Maple's built-in MatrixFunction routine (it requires the LinearAlgebra package) to write a function that computes the factorial of any matrix:

    MatrixFactorial := X -> LinearAlgebra:-MatrixFunction(X, GAMMA(z + 1), z);

To evaluate, for example, $A!$, we then need only run the following:

    A := Matrix([[1, 3], [2, 1]]);
    MatrixFactorial(A);

(NB: executing this code returns an expression for $A!$ different from the one above; their values can be seen to coincide using the reflection formula $-z \Gamma(z) \Gamma(-z) = \frac{\pi}{\sin \pi z} .$ We can further simplify the expression using the identity $\Gamma(z + 1) = z \Gamma(z)$, which extends the factorial identity $(n + 1)! = (n + 1) \cdot n!$, to write $\Gamma(2 \pm \sqrt{6}) = (6 \pm \sqrt{6}) \Gamma(\pm \sqrt{6})$ and so write the entries as expressions algebraic in $\pi$, $\sin(\pi \sqrt{6})$, and $\Gamma(\sqrt{6})$ alone. One can compel Maple to carry out these substitutions by executing simplify(map(expand, %)); immediately after executing the previous code.) To compute the numerical value, we need only execute evalf(%); immediately after the previous code.
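Spelled out, the two identities combine as $$\Gamma(2 \pm \sqrt{6}) = (1 \pm \sqrt{6}) \, \Gamma(1 \pm \sqrt{6}) = (1 \pm \sqrt{6})(\pm \sqrt{6}) \, \Gamma(\pm \sqrt{6}) = (6 \pm \sqrt{6}) \, \Gamma(\pm \sqrt{6}) ,$$ and taking $z = \sqrt{6}$ in the reflection formula gives $$\Gamma(-\sqrt{6}) = -\frac{\pi}{\sqrt{6} \, \sin(\pi \sqrt{6}) \, \Gamma(\sqrt{6})} ,$$ which together express $\alpha_{\pm}$, and hence every entry of $A!$, in terms of $\pi$, $\sin(\pi \sqrt{6})$, and $\Gamma(\sqrt{6})$ alone.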

By the way, we need not have $\det B \neq 0$ in order to define $B!$. In fact, proceeding as above we find that the factorial of the (already diagonal) $2 \times 2$ zero matrix is the identity matrix: $$0! = \pmatrix{\Gamma(1) & 0 \\ 0 & \Gamma(1)} = I .$$ Likewise, using the formula for nondiagonalizable matrices referenced above together with the identity $\Gamma'(1) = -\gamma$ gives that the factorial of the $2 \times 2$ Jordan block of eigenvalue $0$ is, somewhat amusingly, $$\pmatrix{0 & 1\\0 & 0} ! = \pmatrix{1 & -\gamma \\ 0 & 1} ,$$ where $\gamma$ is the Euler-Mascheroni constant.
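(For completeness: for a $2 \times 2$ Jordan block, the general formula reduces to $$\tilde{G} \pmatrix{\lambda & 1 \\ 0 & \lambda} = \pmatrix{G(\lambda) & G'(\lambda) \\ 0 & G(\lambda)} ,$$ from which the claim follows by taking $G = F$ and $\lambda = 0$ and using $F(0) = \Gamma(1) = 1$ and $F'(0) = \Gamma'(1) = -\gamma$.)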


The Gamma function is analytic. Use its power series.

EDIT: this has already been done: see "Some properties of Gamma and Beta matrix functions" (possibly paywalled).


I don't have enough reputation points to comment on Travis' answer, but his numerical result is incorrect.

Using Julia, I get:

    using LinearAlgebra, SpecialFunctions

    A = [1 3; 2 1]
    E = eigen(A)                       # A = V * Diagonal(λ) * inv(V)
    V = E.vectors
    g = gamma.(E.values)               # Γ at each eigenvalue
    gammaA = V * Diagonal(g) * inv(V)  # Γ(A)
    factA = A * gammaA                 # A! = Γ(A + I) = A * Γ(A), since Γ(z+1) = z Γ(z)

    3.62744  8.84231
    5.89488  3.62744

As long as cond(V) isn't too terrible, I've found the above procedure to be a practical way to evaluate arbitrary matrix functions.
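For instance, a quick sanity check on the example matrix (the $10^8$ threshold is just an illustrative rule of thumb of mine, not a standard cutoff):

    using LinearAlgebra

    V = eigen([1 3; 2 1]).vectors
    cond(V)   # ≈ 1.22 here; a huge value (say ≫ 1e8) would make inv(V) unreliable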