What do a matrix representation and its linear operator have in common?
The following are common between a linear operator $T : V \to V$ and a matrix representation $M$ under a given basis $\mathcal{B}$ for $V$ (a small numerical check follows the list):
- Eigenvalues
- Singular values (when $V$ is an inner-product space and $\mathcal{B}$ is orthonormal)
- Multiplicities (algebraic and geometric) of each eigenvalue
- Jordan Normal Forms
- Trace
- Determinant
- Characteristic polynomial
- Minimal polynomial
  - Indeed, $\{p \in \Bbb{F}[x] : p(T) = 0\} = \{p \in \Bbb{F}[x] : p(M) = 0\}$
- Rank
- Nullity
- Invertibility
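As a small numerical check of the invariants above (my own sketch with arbitrary matrices, not part of the argument): if $A$ represents $T$ in one basis and $P$ is any invertible change-of-basis matrix, then $M = P^{-1}AP$ represents the same operator, and the quantities listed agree.

```python
import numpy as np

# Hypothetical illustration: A represents T in one basis, P is an invertible
# change-of-basis matrix, so M = P^{-1} A P represents T in another basis.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
P = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # any invertible matrix will do
M = np.linalg.inv(P) @ A @ P

# Eigenvalues (and hence the characteristic polynomial) agree.
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(M))))
# Trace and determinant agree.
print(np.isclose(np.trace(A), np.trace(M)), np.isclose(np.linalg.det(A), np.linalg.det(M)))
# Rank (and hence nullity and invertibility) agree.
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(M))
```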
Even though not exactly a point of commonality, we also have the following correspondences (the first is checked in a short sketch after the list):
- $v$ is a (generalised) eigenvector of $T$ if and only if $[v]_\mathcal{B}$ is a (generalised) eigenvector of $M$ (and all the obvious implications about (generalised) eigenspaces).
- The exponent of a generalised eigenvector $v$ with respect to $T$ is the same as that of $[v]_\mathcal{B}$ with respect to $M$.
- $(v_1, \ldots, v_n)$ is a Jordan basis for $T$ if and only if $([v_1]_\mathcal{B}, \ldots, [v_n]_\mathcal{B})$ is a Jordan basis for $M$.
- A vector $v$ belongs to the range of $T$ if and only if $[v]_\mathcal{B}$ belongs to the columnspace of $M$ (implying that their dimensions, i.e. ranks, are equal).
- The kernel of $T$ corresponds to the nullspace/kernel of $M$: a vector $v$ lies in $\ker T$ if and only if $[v]_\mathcal{B}$ lies in $\ker M$ (they are the eigenspaces corresponding to $0$, after all!).
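To make the first correspondence concrete (again an illustrative sketch with made-up matrices): take $V = \Bbb{R}^3$, let $A$ be the matrix of $T$ in the standard basis, and let the columns of $P$ be the vectors of a basis $\mathcal{B}$, so that $[v]_\mathcal{B} = P^{-1}v$ and $M = P^{-1}AP$.

```python
import numpy as np

# Hypothetical setup: T acts on R^3 as the matrix A in the standard basis;
# the columns of P form the basis B, so [v]_B = P^{-1} v and M = P^{-1} A P.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
P_inv = np.linalg.inv(P)
M = P_inv @ A @ P

v = np.array([1.0, -1.0, 0.0])        # an eigenvector of A: A v = 2 v
print(np.allclose(A @ v, 2 * v))      # True
v_B = P_inv @ v                       # the coordinate vector [v]_B
print(np.allclose(M @ v_B, 2 * v_B))  # True: same eigenvalue, now for M
```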
There are differences too between a linear transformation $T$ and its matrix $M$.
They are really objects in different structures. The former can be completely described using only the underlying vector space $V$; the latter depends on the pair $(V, \mathcal{B})$ where $\mathcal{B}$ is a particular basis of $V$.
Some properties are what we would call "basis-dependent", meaning that they change with respect to different bases. As an example, many matrix norms are not really properties of the operators these matrices represent, since different bases will generally produce matrices with different norms (though some norms, such as the Frobenius and spectral norms, are unchanged under orthonormal changes of basis).
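For instance (an illustrative computation, not part of the original answer), the Frobenius norm of a representing matrix can change dramatically under a change of basis even though the eigenvalues cannot:

```python
import numpy as np

# The same operator represented in two bases: the Frobenius norm differs,
# while similarity-invariant quantities such as the eigenvalues do not.
A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
P = np.array([[1.0, 100.0],
              [0.0, 1.0]])   # invertible, but far from orthogonal
M = np.linalg.inv(P) @ A @ P

print(np.linalg.norm(A, 'fro'))   # about 2.24
print(np.linalg.norm(M, 'fro'))   # about 100.02 -- the norm is basis-dependent
print(np.allclose(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(M))))  # True
```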
They support different uses and call upon different intuitions. The abstract transformation has a more geometric feel. The matrix is more algebraic. It's what you use to calculate examples.
In some applications there is a natural basis that comes from the underlying problem domain. Then the matrix contains information you lose when you think only of the transformation. For example, consider the adjacency matrix of a graph.
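As a tiny illustration (my own, with a made-up graph): for the path graph $1 - 2 - 3$, the entries of the adjacency matrix record which specific vertices are joined, and its powers count walks between specific vertices; that bookkeeping is tied to the natural "one coordinate per vertex" basis.

```python
import numpy as np

# Adjacency matrix of the path graph 1 -- 2 -- 3 (0-indexed below):
# entry (i, j) is 1 exactly when vertices i and j are adjacent.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(A[0, 1])         # 1: vertices 1 and 2 are adjacent
print((A @ A)[0, 2])   # 1: exactly one walk of length 2 from vertex 1 to vertex 3
```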
You choose one representation over the other to suit a particular purpose. When you interpret matrices as linear transformations it takes one line to prove matrix multiplication is associative. The direct proof requires painful manipulation of indices.
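To sketch that one-line argument: write $T_A$ for the map $x \mapsto Ax$, so that matrix multiplication corresponds to composition, $T_{AB} = T_A \circ T_B$. Composition of functions is associative, hence
$$T_{(AB)C} = (T_A \circ T_B) \circ T_C = T_A \circ (T_B \circ T_C) = T_{A(BC)},$$
and since a matrix is determined by the map it induces, $(AB)C = A(BC)$.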
Linear transformations make sense in more general contexts than matrices. The vector space $V$ need not be finite dimensional. It need not even be a vector space: a module over a ring will do. In these new contexts bases and matrices are harder to think about.
I think your question calls for a meta answer.
They have everything in common since they are different ways to describe the same thing. It's like asking what a parabola in the plane and its equation in the form $y = ax^2 + bx + c$ have in common.