Transpose of a linear mapping

  1. Given a bilinear map $H\colon V\times V\to F$, a basis $[v_1,\ldots,v_n]$ of $V$ is said to be orthonormal relative to $H$ if $H(v_i,v_j) = 0$ whenever $i\neq j$ and $H(v_i,v_i) = 1$ for each $i$.

    The transpose of the transformation is defined in terms of the bilinear forms: change the forms, and the "transpose" may change. This is just like how, if you change the inner product on a vector space, then whether a given projection is an "orthogonal projection" may change as well.
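    To see the dependence on the form concretely, here is a small numerical sketch (the function name `transpose_wrt` is my own): if $H$ has Gram matrix $B$ in the chosen basis, the defining identity $H(Tv,w)=H(v,T^tw)$ forces the matrix of $T^t$ to be $B^{-1}A^TB$, which is the ordinary transpose only when $B=I$.

    ```python
    import numpy as np

    # The transpose of T relative to a bilinear form H with Gram matrix B
    # is the map T^t satisfying H(Tv, w) = H(v, T^t w).  In matrices:
    # (Av)^T B w = v^T A^T B w = v^T B (B^{-1} A^T B) w,
    # so the matrix of T^t is B^{-1} A^T B.

    def transpose_wrt(A, B):
        """Matrix of the transpose of A relative to the form with Gram matrix B."""
        return np.linalg.solve(B, A.T @ B)

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    B_std = np.eye(2)              # standard dot product
    B_other = np.diag([1.0, 2.0])  # a different symmetric bilinear form

    T_std = transpose_wrt(A, B_std)      # equals the ordinary transpose A^T
    T_other = transpose_wrt(A, B_other)  # a genuinely different matrix

    print(np.allclose(T_std, A.T))       # True
    print(np.allclose(T_other, A.T))     # False: the "transpose" depends on H
    ```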

  2. This is really the definition of the dual transformation; it works for arbitrary vector spaces (both finite and infinite dimensional). In the infinite dimensional case, you have no hope of relating it to the previous definition, because $W^*$ is not isomorphic (not even non-canonically) to $W$, nor $V^*$ to $V$. In the finite dimensional case, you can define a bilinear form specifically so that the given bases of $V$ and $W$ are orthonormal, and then identify $W^*$ with $W$ by identifying each basis with its dual basis in the obvious way, and similarly for $V^*$ and $V$. Then the transpose defined here coincides with the transpose defined in 1.
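     In coordinates the dual map is nothing but the matrix transpose, which the following sketch illustrates (the concrete vectors are my own choice): representing a functional $\varphi$ on $F^n$ as a row vector, $T^*(\varphi)=\varphi\circ T$ acts by $\varphi\mapsto \varphi A$, so its matrix with respect to the dual bases is $A^T$.

     ```python
     import numpy as np

     # The dual map T^* sends a functional phi on W to phi ∘ T on V.
     # With functionals as row vectors: (phi ∘ T)(v) = phi @ (A @ v)
     # = (phi @ A) @ v, so in the dual bases T^* has matrix A^T.

     A = np.array([[1.0, 2.0, 0.0],
                   [0.0, 1.0, 3.0]])  # T: R^3 -> R^2

     phi = np.array([5.0, -1.0])      # a functional on R^2 (row vector)
     v = np.array([1.0, 4.0, 2.0])

     lhs = phi @ (A @ v)              # phi(T(v))
     rhs = (A.T @ phi) @ v            # (T^* phi)(v), computed via A^T
     print(np.isclose(lhs, rhs))      # True
     ```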

  3. Given inner product spaces $V$ and $W$, with inner products $\langle-,-\rangle_V$ and $\langle -,-\rangle_W$ respectively, the adjoint of a linear transformation $T\colon V\to W$ is a function $T^*\colon W\to V$ such that for all $v\in V$ and $w\in W$, $$\langle T(v),w\rangle_W = \langle v,T^*(w)\rangle_V.$$ It is not hard to show that if the adjoint exists, then it is unique and linear; and that if $V$ and $W$ are finite dimensional, then the adjoint always exists. If $\beta$ and $\gamma$ are orthonormal bases for $V$ and $W$, then it turns out that $[T^*]_{\gamma}^{\beta} = ([T]_{\beta}^{\gamma})^*$, where $A^*$ is the conjugate transpose of matrix $A$. If the vector spaces are real vector spaces, then the matrix of the adjoint is just the transpose.
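     A quick numerical check of the defining identity, using the standard inner product $\langle x,y\rangle = \sum_i x_i\overline{y_i}$ on $\mathbb{C}^n$ (the random matrix is just for illustration): in the standard orthonormal bases the adjoint is the conjugate transpose.

     ```python
     import numpy as np

     rng = np.random.default_rng(0)

     # With <x, y> = sum_i x_i * conj(y_i) on C^n (linear in the first slot),
     # the matrix of T^* in the standard orthonormal bases is A^H = conj(A).T.

     def inner(x, y):
         return np.vdot(y, x)  # np.vdot conjugates its first argument

     A = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))  # T: C^2 -> C^3
     A_adj = A.conj().T                                          # candidate T^*

     v = rng.normal(size=2) + 1j * rng.normal(size=2)
     w = rng.normal(size=3) + 1j * rng.normal(size=3)

     print(np.isclose(inner(A @ v, w), inner(v, A_adj @ w)))     # True
     ```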

    It is a theorem that $$\mathrm{ker}(T) = (\mathrm{Im}(T^*))^{\perp}.$$ If the spaces are finite dimensional, then you also have $$\mathrm{Im}(T^*)=(\mathrm{ker}(T))^{\perp}.$$ If you consider the matrices relative to orthonormal bases over the reals, this translates to the equations you have for matrices.
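    Over the reals and in orthonormal bases, the first equation says precisely that the null space of $A$ is the orthogonal complement of the column space of $A^T$ (the row space of $A$). A sketch (the specific matrix and the SVD-based kernel computation are my own choices):

    ```python
    import numpy as np

    # Over R, with A the matrix of T in orthonormal bases, Im(T^*) is the
    # column space of A^T, and ker(T) = (Im T^*)^⊥ says exactly that
    # A x = 0 iff x is orthogonal to every row of A.

    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])  # rank 1: second row is twice the first

    _, s, Vt = np.linalg.svd(A)      # rows of Vt past the rank span ker(A)
    rank = int(np.sum(s > 1e-10))
    null_basis = Vt[rank:]           # orthonormal basis of ker(T)

    print(np.allclose(A @ null_basis.T, 0))        # they lie in the kernel

    # Each kernel vector is orthogonal to Im(T^*) = column space of A.T:
    y = np.array([7.0, -3.0])
    print(np.allclose(null_basis @ (A.T @ y), 0))  # True
    ```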

    This theorem uses the first definition, with the bilinear forms being the inner products of the spaces, assuming they are real vector spaces (as opposed to complex ones), and with the bases used being orthonormal bases.