Diagonalization: Can you spot a trick to avoid tedious computation?

This answer is a more detailed walk-through of the method outlined by @yixing in the comments. I present it more verbosely than I would on the exam, for the benefit of anyone else who reads it.

From part 2 of the problem we have already discovered that $M$ has 3 distinct eigenvalues (hence is diagonalizable) and have found the characteristic polynomial $p_M(t) = t(2-t)(3-t).$ Since $k[t]$ is a Euclidean domain, for the polynomials $t^{2013}$ and $p_M(t)$ there exist $q(t)$ and $r(t)$ such that $$t^{2013} = p_M(t)q(t) + r(t), \hspace{5mm}(1)$$ where $\mathrm{deg}(r(t)) < \mathrm{deg}(p_M(t)).$ Since the characteristic polynomial has degree 3, $r(t)$ is at most quadratic, $r(t) = at^2 + bt + c.$ The degree of $r(t)$ being at most two is ultimately the crux of why this method is computationally superior to diagonalizing $M$: evaluating both sides of (1) at the matrix $M$ yields $$M^{2013} = p_M(M)q(M) + r(M),$$ and therefore $$M^{2013} = r(M) = aM^2 + bM + cI.$$ This last step can be justified either by invoking the Cayley-Hamilton theorem, or by noting that since $M$ is diagonalizable with 3 distinct eigenvalues, the characteristic polynomial and the minimal polynomial coincide, and by definition the minimal polynomial vanishes at $M$.

Thus, once we solve for the coefficients of $r(t)$, the desired matrix-vector product is easy to compute. Evaluating (1) at the roots $t = 0, 2, 3$ of $p_M$ kills the $q(t)$ term: $t = 0$ gives $c = 0$, and $t = 2, 3$ give the 'easy to solve' $2\times 2$ system $$4a + 2b = 2^{2013}, \qquad 9a + 3b = 3^{2013},$$ yielding $a = 3^{2012} - 2^{2012}$, $b = 2^{2012} + 2^{2013} - 2\cdot 3^{2012}$.
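If you want to double-check these coefficients by machine afterwards, here is a minimal Python sketch using only exact built-in integers (no libraries):

```python
# Sanity check of the remainder coefficients: r(t) = a t^2 + b t + c must
# agree with t^2013 at the roots t = 0, 2, 3 of p_M, since the p_M(t) q(t)
# term vanishes there.  Python integers are exact, so no overflow issues.
a = 3**2012 - 2**2012
b = 2**2012 + 2**2013 - 2 * 3**2012
c = 0

for t in (0, 2, 3):
    assert a * t**2 + b * t + c == t**2013
```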

Now let $v = (1,1,1)^{T}$; then it is easily computed mentally that $Mv = (2,7,-3)^T$ and $M^2v = M(2,7,-3)^T = (4,24,-11)^T$. So we have \begin{align*} M^{2013} v &= aM^2v + bMv\\ &= a(4,24,-11)^T + b(2,7,-3)^T. \end{align*}

Since the values for $a,b$ were found above, I see no reason to simplify further.
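For anyone who would still like to double-check the arithmetic, here is a short Python sketch that regroups $aM^2v + bMv$ by powers of $3$ and $2$, using only the two vectors computed above (the matrix $M$ itself is not needed at this point):

```python
# Regroup a*M^2 v + b*M v by powers of 3 and 2, using
# a = 3^2012 - 2^2012 and b = 2^2012 + 2^2013 - 2*3^2012 = 3*2^2012 - 2*3^2012.
Mv  = (2, 7, -3)    # M  (1,1,1)^T, from the text above
M2v = (4, 24, -11)  # M^2(1,1,1)^T, from the text above

coef3 = tuple(x - 2 * y for x, y in zip(M2v, Mv))  # vector multiplying 3^2012
coef2 = tuple(3 * y - x for x, y in zip(M2v, Mv))  # vector multiplying 2^2012
print(coef3, coef2)  # -> (0, 10, -5) (2, -3, 2)

# Full-precision check that a*M^2 v + b*M v really equals
# 3^2012 * coef3 + 2^2012 * coef2, componentwise.
a = 3**2012 - 2**2012
b = 2**2012 + 2**2013 - 2 * 3**2012
assert all(a * x + b * y == 3**2012 * c3 + 2**2012 * c2
           for x, y, c3, c2 in zip(M2v, Mv, coef3, coef2))
```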

A huge thanks to @yixing for suggesting this method. It is considerably faster than computing an inverse by hand, and is actually quite robust.


Here is a slightly different approach. In part 2, you have already determined that $M$ has three distinct eigenvalues $2, 3, 0$. Therefore $M$ is diagonalisable, and since two polynomials that agree at every eigenvalue of a diagonalisable matrix take the same value at that matrix, $t^{2013}$ may be replaced by its Lagrange interpolating polynomial at $2, 3, 0$ (the term for the eigenvalue $0$ drops out): \begin{align} M^{2013} &=2^{2013}\frac{M-0}{2-0}\frac{M-3}{2-3} + 3^{2013}\frac{M-2}{3-2}\frac{M-0}{3-0}\\ &=3^{2012}(M^2-2M)-2^{2012}(M^2-3M). \end{align}

Now, by direct calculation, $$ u:=M\pmatrix{1\\ 1\\ 1}=\pmatrix{2\\ 7\\ -3}, \quad v:=M^2\pmatrix{1\\ 1\\ 1}=M\pmatrix{2\\ 7\\ -3}=\pmatrix{4\\ 24\\ -11}. $$ Therefore \begin{align} M^{2013}\pmatrix{1\\ 1\\ 1} &=3^{2012}(v-2u)-2^{2012}(v-3u)\\ &=3^{2012}\pmatrix{0\\ 10\\ -5}-2^{2012}\pmatrix{-2\\ 3\\ -2}. \end{align}

Neither $M^{2013}$ nor any eigenvector of $M$ is needed here. There is also no need to solve any messy system of equations. The computations are few and much less error-prone.
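As an optional machine check, the interpolation identity and the final vector are easy to confirm with a few lines of Python using exact built-in integers; the vectors $u$ and $v$ below are taken from the displays above:

```python
# Check that q(x) = 3^2012 (x^2 - 2x) - 2^2012 (x^2 - 3x) agrees with
# x^2013 at the eigenvalues 0, 2, 3, which is all the identity for
# M^2013 requires (Python ints are exact).
def q(x):
    return 3**2012 * (x**2 - 2 * x) - 2**2012 * (x**2 - 3 * x)

for x in (0, 2, 3):
    assert q(x) == x**2013

# Reproduce the final vector from u and v exactly as in the display above.
u = (2, 7, -3)     # u = M  (1,1,1)^T
v = (4, 24, -11)   # v = M^2(1,1,1)^T
result = tuple(3**2012 * (vi - 2 * ui) - 2**2012 * (vi - 3 * ui)
               for ui, vi in zip(u, v))
```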