Computing polynomial eigenvalues in Mathematica
(I've been waiting for somebody to ask this question for months... :D )
Here's a Mathematica implementation of the Frobenius companion matrix approach discussed by Jim Wilkinson in his venerable book. For completeness, and for analogy with the built-in functions, I provide all three routines:
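For reference, here is the pencil these routines assemble. Writing $P(\lambda)=\lambda^p A_p+\lambda^{p-1}A_{p-1}+\cdots+A_0$, with the coefficients supplied as matCof $=\{A_p,\dots,A_0\}$ and each block $n\times n$, the call Eigenvalues[{A, B}] below uses

$$A=\begin{pmatrix}-A_{p-1}&-A_{p-2}&\cdots&-A_0\\ I&0&\cdots&0\\ &\ddots&&\vdots\\ 0&&I&0\end{pmatrix},\qquad B=\begin{pmatrix}A_p&&&\\ &I&&\\ &&\ddots&\\ &&&I\end{pmatrix},$$

so that $\det(A-\lambda B)$ vanishes exactly when $\det P(\lambda)$ does, and the first $n$ components of a pencil eigenvector are proportional to an eigenvector of $P(\lambda)$ (which is why the eigenvector routines apply Take[#, n] &).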
PolynomialEigenvalues[matCof : {__?MatrixQ}] :=
Module[{p = Length[matCof] - 1, n = Length[First[matCof]]},
Eigenvalues[{ArrayFlatten[
Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p],
p - 2], -Rest[matCof]]],
SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}]
] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof)
PolynomialEigenvectors[matCof : {__?MatrixQ}] :=
Module[{p = Length[matCof] - 1, n = Length[First[matCof]]},
Map[Take[#, n] &,
Eigenvectors[{ArrayFlatten[
Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p],
p - 2], -Rest[matCof]]],
SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}]]
] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof)
PolynomialEigensystem[matCof : {__?MatrixQ}] :=
Module[{p = Length[matCof] - 1, n = Length[First[matCof]]},
MapAt[Map[Take[#, n] &, #] &,
Eigensystem[{ArrayFlatten[
Prepend[NestList[RotateRight, PadRight[{IdentityMatrix[n]}, p],
p - 2], -Rest[matCof]]],
SparseArray[{Band[{1, 1}] -> First[matCof], {k_, k_} -> 1}, {n p, n p}]}], 2]
] /; Precision[matCof] < Infinity && SameQ @@ (Dimensions /@ matCof)
Here's how to verify that they work as expected:
m = (* matrix dimensions *);
n = (* degree of matrix polynomial *);
pcofs = Table[RandomReal[{-9, 9}, {m, m}, WorkingPrecision -> 20], {n + 1}];
(* should return an array of zeros *)
MapThread[Function[{λ, \[ScriptV]}, Chop[Fold[#1 λ + #2 &, 0, pcofs].\[ScriptV]]],
PolynomialEigensystem[pcofs]]
(* should return an array of zeros *)
Table[Det[Fold[#1 λ + #2 &, 0, pcofs]] // Chop, {λ, PolynomialEigenvalues[pcofs]}]
There are more efficient ways to solve, say, the quadratic eigenvalue problem if the coefficient matrices have a nice structure (see this, for instance), but at least the method here, based on the QZ algorithm, is general.
As I've already mentioned in my previous answer, the method of using the QZ algorithm for matrix pencils on the Frobenius companion linearization of the polynomial eigenproblem is not always the most efficient approach. To illustrate this, I'll outline a general method for solving a hyperbolic quadratic eigenvalue problem, which is known to have all its eigenvalues real. (Encapsulating the strategy as a working Mathematica routine is left as an exercise.)
The method also relies on a linearization; this one, described in this paper by Higham, Mackey, Mackey, and Tisseur, uses block symmetric matrices whose blocks are block Hankel and block antitriangular. A Mathematica routine for constructing the matrix $X_m(P(\lambda))$ (in the notation of section 3.3 of that paper) follows:
BlockSymmetricBasis[k_Integer?NonNegative, matCof : {__?MatrixQ}] :=
Module[{p = Length[matCof] - 1, lm, um},
Switch[k,
0, -ArrayFlatten[Partition[PadRight[Take[matCof, -p], 2 p - 1], p, 1]],
p, ArrayFlatten[Partition[PadLeft[Take[matCof, p], 2 p - 1], p, 1]],
_,
lm = ArrayFlatten[Partition[PadLeft[Take[matCof, k], 2 k - 1], k, 1]];
um = ArrayFlatten[Partition[PadRight[Take[matCof, k - p], 2 (p - k) - 1], p - k, 1]];
ArrayFlatten[{{lm, 0}, {0, -um}}]]] /;
SameQ @@ (Dimensions /@ matCof) && k < Length[matCof]
From here, one now has a family of pencils $X_{m-1}(P(\lambda))-\lambda X_m(P(\lambda))$ at one's disposal; the key is to choose the pencil whose $X_m$ is positive definite (i.e., we want to pick out which of the $X_{m-1}(P(\lambda))-\lambda X_m(P(\lambda))$ is a symmetric-definite pencil).
To continue further with the discussion, here is a concrete example of a hyperbolic quadratic eigenvalue problem:
polycof = N@{{{3, 2, 1}, {2, 3, 2}, {1, 2, 3}},
{{-2, -1, -1}, {-1, -3, 2}, {-1, 2, -1}},
{{-5, 1, -2}, {1, -4, -3}, {-2, -3, -5}}};
We check which of the $X_m$ are positive definite:
Table[PositiveDefiniteMatrixQ[BlockSymmetricBasis[k, polycof]], {k, 0, 2}]
{False, True, False}
We thus continue further with the pencil $X_0-\lambda X_1$. Now, we check the condition number $\kappa$ of $X_1$:
X0 = BlockSymmetricBasis[0, polycof]; X1 = BlockSymmetricBasis[1, polycof];
LinearAlgebra`MatrixConditionNumber[X1]
22.2727
The condition number obtained is rather modest, so we can continue with one of the usual approaches for symmetric-definite pencils, based on the Cholesky decomposition. First, build the intermediate matrix:
ℳ = CholeskyDecomposition[X1]; lft = LinearSolve[Transpose[ℳ]];
ℋ = lft[Transpose[lft[X0]]];
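In effect (with $R$ denoting the upper triangular factor returned by CholeskyDecomposition, so that $X_1=R^\top R$), this builds

$$\mathcal{H}=R^{-\top}X_0\,R^{-1},$$

a symmetric matrix whose eigenvalues coincide with those of the pencil $X_0-\lambda X_1$; the eigenvectors of the pencil are then $R^{-1}$ times the eigenvectors of $\mathcal{H}$, which is what the LinearSolve calls accomplish.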
Here are the eigenvalues:
λ = Eigenvalues[ℋ]
{6.61035, -1.8856, 1.37725, 1.21165, -1.06445, -0.124207}
Check the eigenvalues:
Table[Det[Fold[#1 λ + #2 &, 0, polycof]] // Chop, {λ, %}]
{3.47379*10^-10, 0, 0, 0, 0, 0}
Here are the eigenvectors:
lf = LinearSolve[ℳ];
\[ScriptV] = Take[lf[#], 3] & /@ Eigenvectors[ℋ]
{{-0.46788, 0.903438, -0.600705}, {0.420022, -0.335861, -0.176273},
{-0.425712, -0.113381, 0.331892}, {-0.0699161, -0.208497, -0.204744},
{0.196805, -0.0360833, 0.26503}, {0.069413, 0.115626, -0.103772}}
Check eigenvalues and eigenvectors:
MapThread[Function[{λ, \[ScriptV]}, Chop[Fold[#1 λ + #2 &, 0, polycof].\[ScriptV]]],
{λ, \[ScriptV]}]
{{0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0}, {0, 0, 0}}
Had the result of LinearAlgebra`MatrixConditionNumber[X1] been rather large (say, $\approx 10^7$), an alternative route would have been the eigendecomposition of X1, which would have gone like this:
ℳ = #2.#1 & @@ MapThread[#1@#2 &, {{Composition[DiagonalMatrix, Sqrt], Transpose},
Eigensystem[X1]}];
lf = LinearSolve[ℳ];
ℋ = lf[Transpose[lf[X0]]];
λ = Eigenvalues[ℋ] (* eigenvalues *)
lft = LinearSolve[Transpose[ℳ]];
\[ScriptV] = Take[lft[#], 3] & /@ Eigenvectors[ℋ] (* eigenvectors *)
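For the record, this variant uses the eigendecomposition $X_1=Q\Lambda Q^\top$ to form $\mathcal{M}=Q\Lambda^{1/2}$, so that $X_1=\mathcal{M}\mathcal{M}^\top$ and

$$\mathcal{H}=\mathcal{M}^{-1}X_0\,\mathcal{M}^{-\top},$$

which is again symmetric; the eigenvectors of the pencil are recovered as $\mathcal{M}^{-\top}$ times the eigenvectors of $\mathcal{H}$.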
See this reference for more information on definite matrix polynomials, which form the general class of matrix polynomials with real eigenvalues.
To summarize: the Frobenius linearization + QZ route works generally, but if you will be solving a lot of structured problems, be on the lookout for methods that exploit their structure, since those will usually be more efficient in space, computational effort, or both.
[Not really a viable answer but too long for a comment]
Here is a method that is not well suited to numerical matrices, at least not in the first steps, but it is conceptually quite simple. The idea is that the values of $\lambda$ that admit nontrivial null vectors are precisely those at which the determinant of the polynomial matrix vanishes.
The code below uses an interpolation method to compute this determinant. Conceptually, again, that is not needed, but it works around the limited heuristics of the built-in Det.
interpDet[mat_, lam_, deg_] := Module[
{vals, mats},
mats = Table[mat /. lam -> j, {j, Length[mat]*deg + 1}];
vals = Det /@ mats;
InterpolatingPolynomial[vals, lam]]
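As a quick sanity check (the $2\times2$ matrix here is just an illustration, not part of the method below), interpDet reproduces a small symbolic determinant exactly:

```mathematica
(* determinant of {{lam, 1}, {2, lam}} is lam^2 - 2; each entry has degree ≤ 1 *)
interpDet[{{lam, 1}, {2, lam}}, lam, 1] // Expand
(* -2 + lam^2 *)
```

The deg argument is the maximum degree of any entry of mat, so the Length[mat]*deg + 1 sample values taken inside interpDet suffice to determine the determinant by interpolation.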
The main body of code is quite short. Here I allow for working with finite precision in the eigenvector step. This is purely for speed purposes.
polyEigensystem[matrices : {__?MatrixQ}, prec_: Infinity] := Module[
{lam, n = Length[matrices], mat, det, eigvals, eigvecs},
mat = Plus @@ (matrices*lam^Range[n - 1, 0, -1]);
det = interpDet[mat, lam, n - 1]; (* could use Det[] instead... *)
eigvals = Union[Solve[det == 0, lam]];
(* keep exact matrices when prec is Infinity, since N[..., Infinity] is not a valid precision specification *)
eigvecs = Map[NullSpace, If[prec === Infinity, mat /. eigvals, N[mat /. eigvals, prec]]];
{lam /. eigvals, Flatten[eigvecs, 1]}]
Borrowing the testing code from JM:
m = 12;
n = 3;
SeedRandom[11111];
pcofs = Table[RandomInteger[{-9, 9}, {m, m}], {n + 1}];
A quick test. It is nowhere near as quick as JM's code, but it does give exact eigenvalues.
Timing[e1 = polyEigensystem[pcofs, 20];]
Out[490]= {1.15, Null}
In[500]:= e1[[1]] // N // Sort
(* Out[500]= {-4.90804, -1.22881, -1.08631 - 0.460071 I, -1.08631 +
0.460071 I, -1.00005 - 0.536795 I, -1.00005 +
0.536795 I, -0.812783 - 1.17189 I, -0.812783 +
1.17189 I, -0.613395 - 0.983785 I, -0.613395 +
0.983785 I, -0.5483, -0.216034 - 0.114503 I, -0.216034 +
0.114503 I, -0.0949477 - 0.755516 I, -0.0949477 +
0.755516 I, -0.0475538 - 1.59625 I, -0.0475538 + 1.59625 I,
0.134809 - 0.691959 I, 0.134809 + 0.691959 I, 0.266427 - 1.00571 I,
0.266427 + 1.00571 I, 0.336446, 0.408772 - 0.625837 I,
0.408772 + 0.625837 I, 0.564009 - 0.729785 I, 0.564009 + 0.729785 I,
0.605961 - 0.0502088 I, 0.605961 + 0.0502088 I, 0.60609 - 0.527523 I,
0.60609 + 0.527523 I, 0.722651 - 1.9656 I, 0.722651 + 1.9656 I,
0.956883 - 0.0966356 I, 0.956883 + 0.0966356 I, 1.23438 - 1.22286 I,
1.23438 + 1.22286 I} *)
Here is the result from JM's code:
Timing[e2 = PolynomialEigensystem[N[pcofs, 20]];]
Out[495]= {0.6, Null}
In[501]:= e2[[1]] // N // Sort
(* Out[501]= {-4.90804, -1.22881, -1.08631 - 0.460071 I, -1.08631 +
0.460071 I, -1.00005 - 0.536795 I, -1.00005 +
0.536795 I, -0.812783 - 1.17189 I, -0.812783 +
1.17189 I, -0.613395 - 0.983785 I, -0.613395 +
0.983785 I, -0.5483, -0.216034 - 0.114503 I, -0.216034 +
0.114503 I, -0.0949477 - 0.755516 I, -0.0949477 +
0.755516 I, -0.0475538 - 1.59625 I, -0.0475538 + 1.59625 I,
0.134809 - 0.691959 I, 0.134809 + 0.691959 I, 0.266427 - 1.00571 I,
0.266427 + 1.00571 I, 0.336446, 0.408772 - 0.625837 I,
0.408772 + 0.625837 I, 0.564009 - 0.729785 I, 0.564009 + 0.729785 I,
0.605961 - 0.0502088 I, 0.605961 + 0.0502088 I, 0.60609 - 0.527523 I,
0.60609 + 0.527523 I, 0.722651 - 1.9656 I, 0.722651 + 1.9656 I,
0.956883 - 0.0966356 I, 0.956883 + 0.0966356 I, 1.23438 - 1.22286 I,
1.23438 + 1.22286 I} *)