What are the benefits of writing vector inner products as $\langle u, v\rangle$ as opposed to $u^T v$?
Mathematical notation in a given mathematical field $X$ is basically a correspondence $$ \mathrm{Notation}: \{ \hbox{well-formed expressions}\} \to \{ \hbox{abstract objects in } X \}$$ between mathematical expressions (or statements) on the written page (or blackboard, electronic document, etc.) and the mathematical objects (or concepts and ideas) in the heads of ourselves, our collaborators, and our audience. A good notation should make this correspondence $\mathrm{Notation}$ (and its inverse) as close to a (natural) isomorphism as possible. Thus, for instance, the following properties are desirable (though not mandatory):
- (Unambiguity) Every well-formed expression in the notation should have a unique mathematical interpretation in $X$. (Related to this, one should strive to minimize the possible confusion between an interpretation of an expression using the given notation $\mathrm{Notation}$, and the interpretation using a popular competing notation $\widetilde{\mathrm{Notation}}$.)
- (Expressiveness) Conversely, every mathematical concept or object in $X$ should be describable in at least one way using the notation.
- (Preservation of quality, I) Every "natural" concept in $X$ should be easily expressible using the notation.
- (Preservation of quality, II) Every "unnatural" concept in $X$ should be difficult to express using the notation. [In particular, it is possible for a notational system to be too expressive to be suitable for a given application domain.] Contrapositively, expressions that look clean and natural in the notation system ought to correspond to natural objects or concepts in $X$.
- (Error correction/detection) Typos in a well-formed expression should create an expression that is easily corrected (or at least detected) to recover the original intended meaning (or a small perturbation thereof).
- (Suggestiveness, I) Concepts that are "similar" in $X$ should have similar expressions in the notation, and conversely.
- (Suggestiveness, II) The calculus of formal manipulation in $\mathrm{Notation}$ should resemble the calculus of formal manipulation in other notational systems $\widetilde{\mathrm{Notation}}$ that mathematicians in $X$ are already familiar with.
- (Transformation) "Natural" transformation of mathematical concepts in $X$ (e.g., change of coordinates, or associativity of multiplication) should correspond to "natural" manipulation of their symbolic counterparts in the notation; similarly, application of standard results in $X$ should correspond to a clean and powerful calculus in the notational system. [In particularly good notation, the converse is also true: formal manipulation in the notation in a "natural" fashion can lead to discovering new ways to "naturally" transform the mathematical objects themselves.]
- etc.
To evaluate these sorts of qualities, one has to look at the entire field $X$ as a whole; the quality of notation cannot be evaluated in a purely pointwise fashion by inspecting the notation $\mathrm{Notation}^{-1}(C)$ used for a single mathematical concept $C$ in $X$. In particular, it is perfectly permissible to have many different notations $\mathrm{Notation}_1^{-1}(C), \mathrm{Notation}_2^{-1}(C), \dots$ for a single concept $C$, each designed for use in a different field $X_1, X_2, \dots$ of mathematics. (In some cases, such as with the metrics of quality in desiderata 1 and 7, it is not even enough to look at the entire notational system $\mathrm{Notation}$; one must also consider its relationship with the other notational systems $\widetilde{\mathrm{Notation}}$ that are currently in popular use in the mathematical community, in order to assess the suitability of use of that notational system.)
Returning to the specific example of expressing the concept $C$ of a scalar quantity $c$ being equal to the inner product of two vectors $u, v$ in a standard vector space ${\bf R}^n$, there are not just two notations commonly used to capture $C$, but in fact over a dozen (including several mentioned in other answers):
- Pedestrian notation: $c = \sum_{i=1}^n u_i v_i$ (or $c = u_1 v_1 + \dots + u_n v_n$).
- Euclidean notation: $c = u \cdot v$ (or $c = \vec{u} \cdot \vec{v}$ or $c = \mathbf{u} \cdot \mathbf{v}$).
- Hilbert space notation: $c = \langle u, v \rangle$ (or $c = (u,v)$).
- Riemannian geometry notation: $c = \eta(u,v)$, where $\eta$ is the Euclidean metric form (also $c = u \neg (\eta \cdot v)$ or $c = \iota_u (\eta \cdot v)$; one can also use $\eta(-,v)$ in place of $\eta \cdot v$. Alternative names for the Euclidean metric include $\delta$ and $g$).
- Musical notation: $c = u_\flat(v)$ (or $c = u^\flat(v)$).
- Matrix notation: $c = u^T v$ (or $c = \mathrm{tr}(vu^T)$ or $c = u^* v$ or $c = u^\dagger v$).
- Bra-ket notation: $c = \langle u| v\rangle$.
- Einstein notation, I (without matching superscript/subscript requirement): $c = u_i v_i$ (or $c=u^iv^i$, if vector components are denoted using superscripts).
- Einstein notation, II (with matching superscript/subscript requirement): $c = \eta_{ij} u^i v^j$.
- Einstein notation, III (with matching superscript/subscript requirement and also implicit raising and lowering operators): $c = u^i v_i$ (or $c = u_i v^i$ or $c = \eta_{ij} u^i v^j$).
- Penrose abstract index notation: $c = u^\alpha v_\alpha$ (or $c = u_\alpha v^\alpha$ or $c = \eta_{\alpha \beta} u^\alpha v^\beta$). [In the absence of derivatives this is nearly identical to Einstein notation III, but distinctions between the two notational systems become more apparent in the presence of covariant derivatives ($\nabla_\alpha$ in Penrose notation, or a combination of $\partial_i$ and Christoffel symbols in Einstein notation).]
- Hodge notation: $c = \mathrm{det}(u \wedge *v)$ (or $u \wedge *v = c \omega$, with $\omega$ the volume form). [Here we are implicitly interpreting $u,v$ as covectors rather than vectors.]
- Geometric algebra notation: $c = \frac{1}{2} \{u,v\}$, where $\{u,v\} := uv+vu$ is the anticommutator.
- Clifford algebra notation: $uv + vu = 2c \cdot 1$, where $1$ is the identity element of the Clifford algebra.
- Measure theory notation: $c = \int_{\{1,\dots,n\}} u(i) v(i)\ d\#(i)$, where $d\#$ denotes counting measure.
- Probabilistic notation: $c = n {\mathbb E} u_{\bf i} v_{\bf i}$, where ${\bf i}$ is drawn uniformly at random from $\{1,\dots,n\}$.
- Trigonometric notation: $c = |u| |v| \cos \angle(u,v)$.
- Graphical notations such as Penrose graphical notation, which would use something like $\displaystyle c =\bigcap_{u\ \ v}$ to capture this relation.
- etc.
It is not a coincidence that there is a lot of overlap and similarity between all these notational systems; again, see desiderata 1 and 7.
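For readers who like to see this overlap concretely, here is a minimal numerical sketch (assuming NumPy; the variable names are purely illustrative) checking that several of the coordinate-level notations above denote the same scalar for a pair of vectors in ${\bf R}^3$:

```python
# A small check (in NumPy) that several of the notations above name the same scalar.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, -1.0, 0.5])
n = len(u)

pedestrian = sum(u[i] * v[i] for i in range(n))    # sum_{i=1}^n u_i v_i
euclidean  = np.dot(u, v)                          # u . v
matrix     = u.T @ v                               # u^T v  (for 1-D arrays .T is a no-op)
trace_form = np.trace(np.outer(v, u))              # tr(v u^T)
einstein   = np.einsum('i,i->', u, v)              # u_i v_i with the summation convention
prob       = n * np.mean(u * v)                    # n E[u_i v_i], i uniform on {1,...,n}
angle      = np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
trig       = np.linalg.norm(u) * np.linalg.norm(v) * np.cos(angle)   # |u| |v| cos(angle)

assert np.allclose([euclidean, matrix, trace_form, einstein, prob, trig], pedestrian)
```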
Each of these notations is tailored to a different mathematical domain of application. For instance:
- Matrix notation would be suitable for situations in which many other matrix operations and expressions are in use (e.g., the rank one operators $vu^T$).
- Riemannian or abstract index notation would be suitable in situations in which linear or nonlinear changes of variable are frequently made.
- Hilbert space notation would be suitable if one intends to eventually generalize one's calculations to other Hilbert spaces, including infinite dimensional ones.
- Euclidean notation would be suitable in contexts in which other Euclidean operations (e.g., cross product) are also in frequent use.
- Einstein and Penrose abstract index notations are suitable in contexts in which higher rank tensors are heavily involved. Einstein I is more suited for Euclidean applications or other situations in which one does not need to make heavy use of covariant operations; otherwise Einstein III or Penrose is preferable (and the latter is particularly desirable if covariant derivatives are involved). Einstein II is suitable for situations in which one wishes to make the dependence on the metric explicit.
- Clifford algebra notation is suitable when working over fields of arbitrary characteristic, in particular if one wishes to allow characteristic 2.
And so on and so forth. There is no unique "best" choice of notation to use for this concept; it depends on the intended context and application domain. For instance, matrix notation would be unsuitable if one does not want the reader to accidentally confuse the scalar product $u^T v$ with the rank one operator $vu^T$; Hilbert space notation would be unsuitable if one frequently wished to perform coordinatewise operations (e.g., Hadamard product) on the vectors and matrices/linear transformations used in the analysis; and so forth.
(See also Section 2 of Thurston's "Proof and progress in mathematics", in which the notion of derivative is deconstructed in a fashion somewhat similar to the way the notion of inner product is here.)
ADDED LATER: One should also distinguish the "one-time costs" of a notation (e.g., the difficulty of learning the notation and avoiding standard pitfalls with that notation, or the amount of mathematical argument needed to verify that the notation is well-defined and compatible with other existing notations) from the "recurring costs" that are incurred with each use of the notation. The desiderata listed above are primarily concerned with lowering the "recurring costs", but the "one-time costs" are also a significant consideration if one is only using the mathematics from the given field $X$ on a casual basis rather than a full-time one. In particular, it can make sense to offer "simplified" notational systems to casual users of, say, linear algebra even if there are more "natural" notational systems (scoring more highly on the desiderata listed above) that become more desirable to switch to if one intends to use linear algebra heavily on a regular basis.
One huge advantage, to my mind, of the bracket notation is that it admits 'blanks'. So one can specify the notation for an inner product as $\langle \ , \ \rangle$, and given $\langle \ , \ \rangle : V \times V \rightarrow K$, one can define elements of the dual space $V^\star$ by $\langle u, - \rangle$ and $\langle -, v \rangle$. (In the complex case one of these is only conjugate linear.)
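One way to see the 'blank' idea concretely is as partial application: fixing one slot of the pairing yields a linear functional on $V$. Here is a small illustrative sketch, assuming NumPy and using made-up names:

```python
import numpy as np
from functools import partial

def pairing(u, v):
    """The pairing <u, v>; here simply the standard one on R^n."""
    return np.dot(u, v)

u = np.array([1.0, 2.0, 3.0])
u_functional = partial(pairing, u)   # <u, -> : V -> K, an element of the dual space

v = np.array([0.0, 1.0, -1.0])
print(u_functional(v))               # the scalar <u, v>
```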
More subjective I know, but on notational grounds I far prefer to write $\langle Au, v \rangle = \langle u, A^\dagger v \rangle$ for the adjoint map than $(Au)^t v = u^t (A^tv)$. The former also emphasises that the construction is basis independent. It generalises far better to Hilbert spaces and other spaces with a non-degenerate bilinear form (not necessarily an inner product).
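For what it's worth, the identity is easy to sanity-check numerically; the following sketch (assuming NumPy, with the convention that the Hermitian inner product is conjugate-linear in its first argument) verifies $\langle Au, v \rangle = \langle u, A^\dagger v \rangle$ for random complex data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
u = rng.normal(size=3) + 1j * rng.normal(size=3)
v = rng.normal(size=3) + 1j * rng.normal(size=3)

lhs = np.vdot(A @ u, v)             # <Au, v>  (np.vdot conjugates its first argument)
rhs = np.vdot(u, A.conj().T @ v)    # <u, A^dagger v>
assert np.allclose(lhs, rhs)
```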
I'll also note that physicists, and more recently anyone working in quantum computing, have taken the 'bra-ket' formulation to the extreme, and use it to present quite intricate eigenvector calculations in a succinct way. For example, here is the Hadamard transform in bra-ket notation:
$$ \frac{| 0 \rangle + |1 \rangle}{\sqrt{2}} \langle 0 | + \frac{| 0 \rangle - |1\rangle}{\sqrt{2}} \langle 1 |. $$
To get the general Hadamard transform on $n$ qubits, just take the $n$th tensor power: this is compatible with the various implicit identifications of vectors and elements of the dual space.
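As a concrete illustration (a sketch assuming NumPy, not a recommendation of any particular quantum library), the bra-ket expression above transcribes almost literally into outer products, and the $n$-qubit transform is then an $n$-fold Kronecker power:

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# (|0> + |1>)/sqrt(2) <0|  +  (|0> - |1>)/sqrt(2) <1|
H = (np.outer(ket0 + ket1, ket0) + np.outer(ket0 - ket1, ket1)) / np.sqrt(2)

def tensor_power(M, n):
    """n-fold Kronecker (tensor) power of M."""
    out = np.array([[1.0]])
    for _ in range(n):
        out = np.kron(out, M)
    return out

H3 = tensor_power(H, 3)                  # Hadamard transform on 3 qubits
assert np.allclose(H3 @ H3, np.eye(8))   # it squares to the identity
```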
Finally, may I issue a plea for everyone to use $\langle u, v \rangle$, with the LaTeX `\langle` and `\rangle`, rather than the barbaric $<u,v>$.
An inner product is defined axiomatically, as a function $V\times V\to k$, where $k$ is the field of real or complex numbers and $V$ is a $k$-vector space, satisfying the three well-known axioms (conjugate symmetry, linearity in one argument, and positive definiteness). The usual notation is $(x,y)$. So when you want to say anything about an arbitrary inner product, you use this notation (or some similar one). $(x,y)=x^*y$ is just one example of an inner product on the space $\mathbb C^n$. There are other examples on the same space, such as $(x,y)=x^*Ay$ where $A$ is an arbitrary Hermitian positive definite matrix, and there are inner products on other vector spaces.
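To make the last point concrete, here is a brief numerical sketch (assuming NumPy; the matrix $A$ is just a randomly generated example) of the weighted inner product $(x,y)=x^*Ay$, checking conjugate symmetry and positivity on sample vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
A = B.conj().T @ B + np.eye(3)      # Hermitian positive definite by construction

def ip(x, y):
    return x.conj() @ A @ y         # (x, y) = x^* A y

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

assert np.allclose(ip(x, y), np.conj(ip(y, x)))            # conjugate symmetry
assert ip(x, x).real > 0 and np.isclose(ip(x, x).imag, 0)  # positivity for x != 0
```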