An operator is semi-simple iff it is diagonalizable

We denote the algebraically closed ground field by $k$ and the given finite-dimensional $k$-vector space by $V$. There are at least two (equivalent) ways to use induction:


For the first proof we construct an increasing chain of subspaces $U_i \subseteq U_{i+1}$ of $V$ that are spanned by eigenvectors of $T$, as follows:

  • Since $k$ is algebraically closed there exists an eigenvector $v_1 \in V$ for $T$, so $U_1 := \langle v_1 \rangle$ is a $T$-invariant subspace of $V$. Since $T$ is semisimple there exists a $T$-invariant subspace $W_1 \subseteq V$ with $V = U_1 \oplus W_1$.

  • Since $k$ is algebraically closed there exists an eigenvector $v_2 \in W_1$ for the restriction $T|_{W_1}$ (provided that $W_1 \neq 0$). This is also an eigenvector for $T$, so $U_2 := \langle v_1, v_2 \rangle$ is a $T$-invariant subspace of $V$. Since $T$ is semisimple there exists a $T$-invariant subspace $W_2 \subseteq V$ with $V = U_2 \oplus W_2$.

By continuing this process we get decompositions \begin{align*} V = U_1 \oplus W_1 = U_2 \oplus W_2 = U_3 \oplus W_3 = \dotsb \end{align*} where the summands $U_i$ are given by $U_i = \langle v_1, v_2, \dotsc, v_i \rangle$ with $v_1, v_2, v_3, \dotsc \in V$ being eigenvectors of $T$. Since $v_{i+1} \in W_i$ and $U_i \cap W_i = 0$ we have $v_{i+1} \notin U_i = \langle v_1, \dotsc, v_i \rangle$, so it follows inductively that these eigenvectors are linearly independent, and hence $\dim U_i = i$.

It follows that the above process terminates after $n = \dim V$ steps with $U_n = V$ and $W_n = 0$. Then $v_1, \dotsc, v_n$ is a basis of $V$ consisting of eigenvectors of $T$.
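
To see where the semisimplicity of $T$ is needed here, consider as an illustration (not part of the proof) the non-semisimple operator on $V = k^2$ given by
$$ T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. $$
Its only eigenvectors are the non-zero multiples of $e_1$, so $U_1 = \langle e_1 \rangle$ is the only $T$-invariant line. Hence $U_1$ admits no $T$-invariant complement in $V$, the process gets stuck after the first step, and indeed $T$ is not diagonalizable.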


For the second proof we will need the following lemma:

Lemma: Let $T \colon V \to V$ be semisimple and let $U \subseteq V$ be a $T$-invariant subspace. Then the restriction $T|_U$ is also semisimple.

Proof: Let $W \subseteq U$ be a $T|_U$-invariant subspace. Then $W$ is also a $T$-invariant subspace of $V$, so there exists some $T$-invariant subspace $W' \subseteq V$ with $V = W \oplus W'$. Then $W'' := W' \cap U$ is a $T$-invariant subspace of $U$.

We then have that $U = W \oplus W''$, which can be seen as follows: We have that $$ W \cap W'' = W \cap W' \cap U = 0 $$ since $W \cap W' = 0$. To see that $U = W + W''$ note that we can write $u \in U$ as $u = w + w'$ with $w \in W$ and $w' \in W'$ since $V = W + W'$. Then $w' = u - w \in U + W \subseteq U$ (here we use that $W$ is a subspace of $U$), so $w' \in W' \cap U = W''$. Thus every $T|_U$-invariant subspace $W \subseteq U$ admits a $T|_U$-invariant complement in $U$, which shows that $T|_U$ is semisimple.

We now proceed by induction on $\dim V$. For $\dim V = 0$ there is nothing to do.

If $\dim V \geq 1$ then there exists an eigenvector $v_1 \in V$ for $T$ since $k$ is algebraically closed. Then $\langle v_1 \rangle$ is a $T$-invariant subspace, so there exists a $T$-invariant subspace $W \subseteq V$ such that $V = \langle v_1 \rangle \oplus W$. Note that $\dim W = \dim V - 1$ since the eigenvector $v_1$ is necessarily non-zero.

The restriction $T|_W$ is also semisimple by the above lemma. By induction there exists a basis $v_2, \dotsc, v_n$ of $W$ consisting of eigenvectors of $T|_W$. These are also eigenvectors of $T$, and since $V = \langle v_1 \rangle \oplus W$ we find that $v_1, v_2, \dotsc, v_n$ is a basis of $V$.
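
In both proofs we end up with a basis $v_1, \dotsc, v_n$ of $V$ and scalars $\lambda_1, \dotsc, \lambda_n \in k$ (not necessarily distinct) with $T v_i = \lambda_i v_i$. The matrix of $T$ with respect to this basis is then the diagonal matrix
$$ \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_n \end{pmatrix}, $$
since the $i$-th column of this matrix consists of the coefficients of $T v_i = \lambda_i v_i$ with respect to the basis $v_1, \dotsc, v_n$. This is precisely what it means for $T$ to be diagonalizable.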


Regarding your question:

I understand that we can write $T = \bigoplus_i T_{\lambda_i}$ where $T_{\lambda_i} = T|_{U_{\lambda_i}}$ and $U_{\lambda_i}$ is the eigenspace spanned by one eigenvector, but why does that imply that $T$ is diagonalizable?

I assume that $U$ is the given vector space (which I denoted by $V$) and that $U_{\lambda}$ is the eigenspace of $T$ with respect to the eigenvalue $\lambda$.

If the quoted situation were to happen, then one can pick for each one-dimensional space $U_{\lambda_i}$ some non-zero vector $u_i \in U_{\lambda_i}$. Since $U_{\lambda_i}$ is one-dimensional we have that $U_{\lambda_i} = \langle u_i \rangle$, so $T u_i \in U_{\lambda_i}$ is a scalar multiple of $u_i$ (from your notation I assume that this scalar should be $\lambda_i$). The vectors $(u_i)_i$ form a basis of $V$ consisting of eigenvectors of $T$ since $V = \bigoplus_i U_{\lambda_i}$, and such a basis of eigenvectors is precisely what is needed for $T$ to be diagonalizable.

Note, however, that the above proofs (as well as the one given on Wikipedia, which is essentially the second one) do not give the decomposition $T = \bigoplus_{\lambda \in k} T_\lambda$ where $T_\lambda = T|_{U_\lambda} = \lambda \operatorname{id}_{U_\lambda}$, but instead a decomposition $T = T_1 \oplus \dotsb \oplus T_n$ where $T_i = T|_{U_i}$ for one-dimensional subspaces $U_1, \dotsc, U_n \subseteq V$. You seem to mix up these two decompositions; they coincide if and only if every non-zero eigenspace is one-dimensional, which is not necessarily the case.
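
As a concrete example of the difference, consider $T = \operatorname{id}_{k^2}$: there is only one eigenvalue, namely $\lambda = 1$, with two-dimensional eigenspace $U_\lambda = k^2$, so the eigenspace decomposition consists of a single summand. The above proofs instead produce a decomposition into two one-dimensional summands, e.g. $T = T|_{\langle e_1 \rangle} \oplus T|_{\langle e_2 \rangle}$, and this choice of lines is far from unique: any two distinct lines in $k^2$ would do.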


If $T$ is semisimple, then you can find a basis with respect to which the matrix of $T$ is diagonal in the following way:

$T$ has an eigenvector $v_1 \in V$ for some eigenvalue $\lambda_1$ (here we use that $\mathbb{C}$ is algebraically closed). This gives us a $T$-invariant subspace generated by $v_1$, namely $U_1 := \langle v_1 \rangle$. Since $T$ is semisimple, there is a $T$-invariant complement $U_1^c$, that is, $V = U_1 \oplus U_1^c$. Let $(b_1, \dotsc, b_{n-1})$ be a basis of $U_1^c$; then $B_1 = (v_1, b_1, \dotsc, b_{n-1})$ is a basis of $V$.

Now the important step is to make clear what the matrix of $T$ with respect to the basis $B_1$ looks like. It consists of two blocks, namely a $\mathbb{C}^{1\times 1}$ submatrix (the restriction of $T$ to $U_1$) and a $\mathbb{C}^{(n-1)\times(n-1)}$ submatrix (the restriction of $T$ to $U_1^c$), assuming that $\mathbb{C}$ is your field. This is because both subspaces are $T$-invariant!
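
Concretely, if $A_1 \in \mathbb{C}^{(n-1)\times(n-1)}$ denotes the matrix of $T|_{U_1^c}$ with respect to $(b_1, \dotsc, b_{n-1})$, then the matrix of $T$ with respect to $B_1$ is
$$ \begin{pmatrix} \lambda_1 & 0 \\ 0 & A_1 \end{pmatrix}: $$
the zeroes below $\lambda_1$ appear because $T v_1 = \lambda_1 v_1 \in U_1$, and the zeroes to the right of $\lambda_1$ appear because $T(U_1^c) \subseteq U_1^c$, i.e. the vectors $T b_j$ have no $v_1$-component.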

We have done the first step of the diagonalization. Now we take the restriction of $T$ to $U_1^c$, namely $T_1 \colon U_1^c \to U_1^c$, which is given by the $(n-1)\times(n-1)$-block in the matrix. Note that $T_1$ is again semisimple (for instance by the lemma above), so we can continue with $T_1$ in the same way, and so on, until we have found a basis in which the matrix of $T$ is diagonal.
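
For intuition, here is a minimal numerical sketch of the end result, assuming NumPy and a hypothetical example matrix that is in fact semisimple (this is only an illustration, not part of the argument): collecting a basis of eigenvectors as the columns of a matrix $P$ and changing basis yields a diagonal matrix.

    import numpy as np

    # Hypothetical example matrix: upper triangular with three distinct
    # eigenvalues 2, 3, 5, hence semisimple (diagonalizable).
    A = np.array([[2.0, 1.0, 0.0],
                  [0.0, 3.0, 4.0],
                  [0.0, 0.0, 5.0]])

    # np.linalg.eig returns the eigenvalues together with a matrix P whose
    # columns are corresponding eigenvectors -- the basis (v_1, ..., v_n).
    eigenvalues, P = np.linalg.eig(A)

    # Changing basis to the eigenvectors diagonalizes A.
    D = np.linalg.inv(P) @ A @ P
    print(np.round(D, decimals=10))  # diagonal matrix with entries 2, 3, 5

Of course this computes the full eigendecomposition at once rather than peeling off one eigenvector at a time, but the resulting change of basis is of the same kind as the one the recursive procedure constructs.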