Insights into linear algebra from abstract algebra

The "Rank-Nullity" Theorem from Linear Algebra can be viewed as a corollary of the First Isomorphism Theorem, which may be more intuitive.

Suppose $T:V\to V$ is a linear transformation. Then by the First Isomorphism Theorem, $V/\ker T\cong T(V)$.

Taking dimensions and using $\dim(V/\ker T)=\dim V-\dim\ker T$, we get $\dim V-\operatorname{Null}(T)=\operatorname{Rank}(T)$, which is the Rank-Nullity Theorem.

This may be more intuitive than the traditional Linear Algebra proof of the Rank-Nullity Theorem (see https://en.wikipedia.org/wiki/Rank%E2%80%93nullity_theorem).
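
For a quick numerical sanity check of the statement, here is a minimal sketch assuming SymPy is available (the matrix below is an arbitrary example):

```python
from sympy import Matrix

# An arbitrary linear map T: Q^4 -> Q^4, written as a matrix.
T = Matrix([
    [1, 2, 0, -1],
    [0, 1, 1,  3],
    [1, 3, 1,  2],
    [2, 5, 1,  1],
])

rank = T.rank()               # dim T(V)
nullity = len(T.nullspace())  # dim ker T
dim_V = T.cols                # dim V (the domain)

print(rank, nullity, dim_V)
assert rank + nullity == dim_V  # Rank-Nullity
```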


There are tons of ways that abstract algebra informs linear algebra; here is just one example. Suppose you have a vector space $V$ over a field $k$ with a linear map $T:V\to V$. Given a polynomial $p(x)$ with coefficients in $k$, you get a linear map $p(T):V\to V$. This makes $V$ a module over the ring $k[x]$ of polynomials with coefficients in $k$: given $p(x)\in k[x]$ and $v\in V$, the scalar multiplication $p(x)\cdot v$ is just $p(T)v$. In particular, multiplication by $x$ corresponds to the linear map $T$.
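
As a small sketch of this action (assuming SymPy; the matrix, vector, and polynomial here are arbitrary choices with $k=\mathbb{Q}$):

```python
from sympy import Matrix, eye

# A linear map T: k^3 -> k^3 (here k = Q) and a vector v.
T = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 3]])
v = Matrix([1, 1, 1])

# The module action of p(x) = x^2 - 3x + 2 on v is defined as p(T) v.
p_of_T = T**2 - 3*T + 2*eye(3)
print(p_of_T * v)

# Multiplication by x itself acts as T.
print(T * v)
```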

Conversely, given a $k[x]$-module $V$, it is a $k$-vector space by considering multiplication by constant polynomials, and multiplication by $x$ gives a $k$-linear map $T:V\to V$. So any $k[x]$-module $V$ can be thought of as a vector space together with a linear map $V\to V$, and this is inverse to the construction described in the previous paragraph.

So vector spaces $V$ together with a chosen linear map $V\to V$ are essentially the same thing as $k[x]$-modules. This is really powerful because $k[x]$ is a very nice ring: it is a principal ideal domain, and there is a very nice classification of all finitely generated modules over any principal ideal domain. This gives us a classification of all linear maps from a finite-dimensional vector space to itself, up to isomorphism. When you represent linear maps by matrices, "up to isomorphism" ends up meaning "up to conjugation". So this gives a classification of all $n\times n$ matrices over a field $k$, up to conjugation by invertible $n\times n$ matrices, called the rational canonical form. In the case that $k$ is algebraically closed (for instance, $k=\mathbb{C}$), you can go further and get the very powerful Jordan normal form from this classification.
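
As a concrete illustration of the case where the characteristic polynomial splits, here is a sketch assuming SymPy, whose `jordan_form` method returns a change-of-basis matrix $P$ and the Jordan normal form $J$ with $A=PJP^{-1}$ (the matrix below is an arbitrary example with a nontrivial Jordan block):

```python
from sympy import Matrix

# Eigenvalues are 2, 2, 5, with only one independent eigenvector for 2,
# so the Jordan form has a 2x2 block for the eigenvalue 2.
A = Matrix([[ 3, 1, 0],
            [-1, 1, 0],
            [ 0, 2, 5]])

P, J = A.jordan_form()
print(J)
assert P * J * P.inv() == A
```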

Now of course, these canonical forms for matrices can be obtained without all this language of abstract algebra: you can formulate the arguments in this particular case purely in the language of matrices if you really want to. But the general framework provided by abstract algebra provides a lot of context that can make these ideas easier to understand (for instance, you can think of this classification of matrices as being very closely analogous to the classification of finite abelian groups, since that is just the same result applied to the ring $\mathbb{Z}$ instead of $k[x]$). It also provides a framework to generalize these results to more difficult situations. For instance, if you want to consider a vector space together with two linear maps which commute with each other, that is now equivalent to a $k[x,y]$-module. There is not such a nice classification in this case, but the language of rings and modules lets you formulate and think about this question using the same tools as when you had just one linear map.
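
To make the analogy concrete, here is the invariant-factor form of the structure theorem in the two parallel cases, for a finite abelian group $M$ and a finite-dimensional vector space $V$ with a linear map $T:V\to V$ (in both cases there is no free part; the names $d_i$ and $f_i$ are just notation chosen here):

$$M\cong \mathbb{Z}/(d_1)\oplus\cdots\oplus\mathbb{Z}/(d_m),\qquad d_1\mid d_2\mid\cdots\mid d_m,$$

$$V\cong k[x]/(f_1(x))\oplus\cdots\oplus k[x]/(f_m(x)),\qquad f_1\mid f_2\mid\cdots\mid f_m.$$

On each summand $k[x]/(f_i(x))$, multiplication by $x$ acts as the companion matrix of $f_i$, and stacking these companion matrices along the diagonal is exactly the rational canonical form.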


A vector space over a field $k$ is a special case of what's known as a module over a ring $R$. The idea is the same: we want somewhere we can add elements together and multiply them by scalars, only here the scalars come from a ring rather than a field. An example of such a ring is $k[t]$ (polynomials of arbitrary degree with coefficients in a field $k$).
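
Spelled out, a module over a ring $R$ is an abelian group $M$ together with a scalar multiplication $R\times M\to M$ satisfying the same axioms as a vector space:

$$r(m+n)=rm+rn,\qquad (r+s)m=rm+sm,\qquad (rs)m=r(sm),\qquad 1\cdot m=m.$$

The only thing missing compared to a field is division by nonzero scalars, which is exactly what makes modules subtler than vector spaces (for instance, they need not have bases).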

As an example, there's the concept of the Smith Normal Form in Linear Algebra. The idea is that if $A$ is an $m\times n$ matrix, then we can find invertible $m\times m$ and $n\times n$ matrices $S$ and $T$ such that $SAT$ is:

  1. Diagonal

  2. The diagonal entries $a_1,a_2,\dots$ of that matrix satisfy $a_i\mid a_{i+1}$ for "small enough" $i$ (some of the trailing $a_i$ may be zero, and we ignore these).

Moreover, the diagonal entries are unique up to "multiplication by units". Over a field this is rather boring, as all non-zero elements of a field are invertible (which is what it means to be a unit), so the form is just some ones followed by zeros down the diagonal. But the Smith Normal Form exists over many rings (any principal ideal domain, in fact), so we can work with integer matrices and compute the Smith Normal Form, where the diagonal entries are unique up to multiplication by $-1$ (the only non-identity unit in $\mathbb Z$). We could even do this for matrices with entries in $k[t]$! (This is actually useful: the Smith Normal Form of $tI-A$ over $k[t]$ gives the invariant factors behind the canonical forms discussed above.)
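
As a sketch of the integer case, assuming a SymPy version that provides `smith_normal_form` in `sympy.matrices.normalforms` (as far as I know it returns only the diagonal form, not the matrices $S$ and $T$; the matrix below is arbitrary):

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# An arbitrary integer matrix.
A = Matrix([[ 2,  4,   4],
            [-6,  6,  12],
            [10, -4, -16]])

# Over the PID Z the result is diagonal, each entry divides the next,
# and the entries are unique up to sign.
D = smith_normal_form(A, domain=ZZ)
print(D)
```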

This kind of idea tends to be true for plenty of things in Linear Algebra. You get taught a version for vector spaces, but often it's implicitly true over more general rings (as a field is just a special kind of ring).