Do the symmetric polynomials form a Hopf algebra? What does one need the coproduct for?

Yes, there exists a natural Hopf algebra structure on the ring of symmetric functions (i. e., symmetric "polynomials" in infinitely many indeterminates). It is not related to the additive group of $k^n$ (for a good reason: as you said, the obvious coalgebra structure on $k\left[x_1,x_2,...,x_n\right]$ coming from addition on the affine space does not survive restriction to the symmetric polynomials), and it does require infinitely many indeterminates.

1. The algebra $\mathbf{Symm}$

Let $k$ be a commutative ring. (It will serve as a base ring. You can safely assume $k$ to be $\mathbb C$ if you want, but if you know some representation theory, you can actually get surplus value from working with $k=\mathbb Z$ even if all you care about are representations of finite groups over $\mathbb C$.)

So what is $\mathbf{Symm}$ (or $Sym$, as it is also known)? First of all, it is a $k$-Hopf algebra which, as a $k$-algebra, is the algebra of "symmetric functions" in countably many commuting indeterminates $X_1$, $X_2$, $X_3$, .... Here, "function" does not mean an actual function, but instead it means a power series $p$ in the indeterminates $X_1$, $X_2$, $X_3$, ... such that for some $d\in \mathbb N$, every monomial of total degree $\geq d$ occurs in $p$ with coefficient $0$. (Another word for this meaning of "function" is "degree-bounded power series".) "Symmetric" means that any two monomials which only differ in the order of their exponents (i. e., the multisets of their exponents are the same) must have the same coefficient. (I guess I should say that for me, a monomial doesn't include a coefficient.) This definition of $\mathbf{Symm}$ is easily seen to be equivalent to the definition as the inverse limit $\varprojlim\limits_{n \to\infty} k\left[X_1,X_2,...,X_n\right]^{S_n}$ in the category of graded rings (the grading is what keeps the degrees bounded), where the mapping $k\left[X_1,X_2,...,X_n\right]^{S_n} \to k\left[X_1,X_2,...,X_m\right]^{S_m}$ (for $n\geq m$) takes a symmetric polynomial in $n$ variables and sets the last $n-m$ variables $X_{m+1},X_{m+2}, \ldots, X_n$ to zero. (We can also view the $k$-module $\mathbf{Symm}$ as a direct limit $\varinjlim\limits_{n\to\infty} k\left[X_1,X_2,...,X_n\right]^{S_n}$, where the mapping $k\left[X_1,X_2,...,X_n\right]^{S_n} \to k\left[X_1,X_2,...,X_m\right]^{S_m}$ (for $n\leq m$) takes a symmetric polynomial in $n$ variables and clones its coefficients to get an $m$-variable symmetric polynomial (I hope this is understandable; anyway, this is not important for us); notice, however, that this direct limit is not a direct limit of rings (since these mappings are not ring homomorphisms).)
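For example, $X_1^2 + X_2^2 + X_3^2 + \cdots$ is a symmetric "function" in this sense (every nonzero monomial has total degree $2$), while the symmetric power series $\prod\limits_{i\geq 1} \left(1+X_i\right)$ is not one, since it contains nonzero monomials of arbitrarily large total degree.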

2. Applying symmetric functions to multisets

Since we call the elements of $\mathbf{Symm}$ "functions", let us explain what they can be evaluated at:

For any $p\in\mathbf{Symm}$, any $u\in\mathbb N$ and any $u$-tuple $\left(x_1,x_2,...,x_u\right)$ of elements of a commutative $k$-algebra, we can define the "value" of $p$ at $\left(x_1,x_2,...,x_u\right)$ (denoted by $p\left(x_1,x_2,...,x_u\right)$) to be the result of

1) removing all monomials which contain at least one of the variables $X_{u+1}$, $X_{u+2}$, $X_{u+3}$, ... from the power series $p$;

2) then applying the resulting power series $q$ to $X_1=x_1$, $X_2=x_2$, ..., $X_u=x_u$ (this makes sense because the power series $q$ is a polynomial, due to our definition of "function").

Due to the symmetry of $p$, this result actually doesn't depend on the order of $\left(x_1,x_2,...,x_u\right)$; thus, we can think of $p\left(x_1,x_2,...,x_u\right)$ as the value of $p$ at the multiset (rather than $u$-tuple) $\left(x_1,x_2,...,x_u\right)$.
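To make the two-step recipe above concrete, here is a minimal Python sketch (my own illustration, not anyone's library): a symmetric function is stored in the monomial basis, as a dictionary sending a partition (a tuple of positive exponents) to its coefficient.

```python
from itertools import permutations
from math import prod

def evaluate(p, xs):
    """Evaluate a symmetric function p, given in the monomial basis as a
    dict {partition: coefficient}, at the finite multiset xs."""
    u = len(xs)
    total = 0
    for partition, coeff in p.items():
        if len(partition) > u:
            # step 1): every monomial of this term uses a variable X_i
            # with i > u, so the whole term is removed
            continue
        exps = tuple(partition) + (0,) * (u - len(partition))
        # step 2): the monomial symmetric polynomial in u variables is the
        # sum over all *distinct* rearrangements of the exponent vector
        total += coeff * sum(prod(x ** a for x, a in zip(xs, q))
                             for q in set(permutations(exps)))
    return total

# e_2 is the monomial symmetric function of the partition (1,1);
# e_2(1,2,3) = 1*2 + 1*3 + 2*3 = 11, independently of the argument order.
assert evaluate({(1, 1): 1}, (1, 2, 3)) == 11
assert evaluate({(1, 1): 1}, (3, 1, 2)) == 11
```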

3. The bialgebra $\mathbf{Symm}$

So we have a $k$-algebra $\mathbf{Symm}$. How do we make it a Hopf algebra? First, let me show the intuition behind the comultiplication. It should correspond to the union of multisets. So, if we have some $p\in\mathbf{Symm}$, and write $\Delta\left(p\right)=\sum\limits_{i \in I} q_i \otimes r_i$, then we should have

(1) $p\left(x_1,x_2,...,x_u,y_1,y_2,...,y_v\right) = \sum\limits_{i \in I} q_i\left(x_1,x_2,...,x_u\right) r_i\left(y_1,y_2,...,y_v\right)$ for any two multisets $\left(x_1,x_2,...,x_u\right)$ and $\left(y_1,y_2,...,y_v\right)$ of elements of any commutative $k$-algebra.

The counit should mean applying to the empty multiset:

(2) $p\left(\ \right) = \varepsilon\left(p\right)$.

(Sorry, dear Java friends, $p\left(\ \right)$ doesn't mean $p$ here.)

How do we actually get such $\Delta$ and $\varepsilon$? The easy answer is: By the fundamental theorem on symmetric polynomials, $\mathbf{Symm}$ is generated as a $k$-algebra by the elementary symmetric polynomials

$e_1 = X_1 + X_2 + X_3 + X_4 + ... = \sum\limits_{i=1}^{\infty} X_i$;

$e_2 = X_1 X_2 + X_1 X_3 + ... + X_2 X_3 + X_2 X_4 + ... + X_3 X_4 + ... = \sum\limits_{1\leq i < j} X_i X_j$;

$e_3 = X_1 X_2 X_3 + X_1 X_2 X_4 + ... + X_1 X_3 X_4 + ... + X_2 X_3 X_4 + ... = \sum\limits_{1\leq i < j < k} X_i X_j X_k$;

...;

$e_j = \sum\limits_{1\leq i_1 < i_2 < ... < i_j} X_{i_1} X_{i_2} ... X_{i_j}$;

...,

and these $e_1$, $e_2$, $e_3$, ... are algebraically independent.

(This does not immediately follow from the fundamental theorem on symmetric polynomials, since the fundamental theorem is usually not formulated for infinitely many variables, but you can either apply the same argument (lexicographic induction) in the infinite-variables case, or use the direct-limit construction of $\mathbf{Symm}$ to conclude the infinite-variables case from the finite-variables one.)
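For example, the degree-$2$ power sum is expressed through the $e_j$ as $$X_1^2 + X_2^2 + X_3^2 + \cdots = e_1^2 - 2 e_2,$$ and the fundamental theorem says that every symmetric function can be written as a polynomial in the $e_j$ in exactly one way.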

Hence, in order to define a $k$-algebra homomorphism from $\mathbf{Symm}$ to another commutative $k$-algebra (be it $\mathbf{Symm}\otimes \mathbf{Symm}$ or $k$ or anything else), it is enough to specify its values at the $e_j$ for $j=1,2,3,...$, and we have total freedom in doing so. Since we want $\mathbf{Symm}$ to be a $k$-bialgebra, we must define $\Delta$ and $\varepsilon$ as $k$-algebra homomorphisms; so let us define $\Delta$ by requiring that

$\Delta\left(e_j\right) = \sum\limits_{m=0}^j e_m \otimes e_{j-m}$ for all $j\geq 1$, where $e_0$ is defined to mean $1$,

and let us define $\varepsilon$ by requiring that

$\varepsilon\left(e_j\right) = 0$ for all $j\geq 1$.

(Actually, $\varepsilon$ just maps every $p\in\mathbf{Symm}$ to the constant term of $p$. But $\Delta$ isn't that easily described.)

To check that this matches our intuition above at least on the $e_j$ (i. e., that it satisfies (1) and (2) for $p=e_j$), we must show that every $j\geq 1$ satisfies

$e_j\left(x_1,x_2,...,x_u,y_1,y_2,...,y_v\right) = \sum\limits_{m=0}^j e_m\left(x_1,x_2,...,x_u\right) e_{j-m}\left(y_1,y_2,...,y_v\right)$ for any two multisets $\left(x_1,x_2,...,x_u\right)$ and $\left(y_1,y_2,...,y_v\right)$ of elements of any commutative $k$-algebra,

and $e_j\left(\ \right) = 0$. These are very easy. It is more complicated to check (1) and (2) for general $p$.
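If you want to see this on actual numbers, here is a small Python sanity check (my own illustration) of (1) and (2) for $p = e_j$:

```python
from itertools import combinations
from math import prod

def e(j, xs):
    """e_j evaluated at the tuple xs; this gives e_0 = 1 and e_j = 0 for j > len(xs)."""
    return sum(prod(c) for c in combinations(xs, j))

xs, ys = (1, 4, 2), (3, 5)
# (1) for p = e_j: the value at the union of the two multisets equals the
# sum prescribed by Delta(e_j) = sum_m e_m tensor e_{j-m}.
for j in range(1, 6):
    assert e(j, xs + ys) == sum(e(m, xs) * e(j - m, ys) for m in range(j + 1))
# (2) for p = e_j: the value at the empty multiset is epsilon(e_j) = 0.
assert all(e(j, ()) == 0 for j in range(1, 4))
```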

4. The graded Hopf algebra $\mathbf{Symm}$

From here on, everything goes very fast: We have a $k$-bialgebra $\mathbf{Symm}$. It is graded (the $n$-th graded piece consists of the homogeneous power series of degree $n$ in $\mathbf{Symm}$) and connected (the $0$-th graded piece is the ground ring $k$, embedded in $\mathbf{Symm}$ as the constant power series), so it is a Hopf algebra. This is because every connected graded bialgebra is a Hopf algebra (the antipode is constructed by induction on the degree); for a more detailed proof of this, see, e. g., Corollary II.3.2 in Dominique Manchon's http://arxiv.org/abs/math/0408405 , which even proves this for connected filtered bialgebras.

For an overview of the properties of $\mathbf{Symm}$, see, e. g., Section 10 of Michiel Hazewinkel's http://arxiv.org/abs/0804.3888 (errata: http://www.cip.ifi.lmu.de/~grinberg/algebra/typos1short.pdf ). The antipode, for example, switches the elementary symmetric functions with the complete homogeneous symmetric functions (up to sign), leaving the power sum functions invariant (again, up to sign). See also his Section 18 about the relation of $\mathbf{Symm}$ to the representation theory of the symmetric groups.
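In formulas: $S\left(e_n\right) = \left(-1\right)^n h_n$, $S\left(h_n\right) = \left(-1\right)^n e_n$ and $S\left(p_n\right) = -p_n$, where $h_n$ denotes the $n$-th complete homogeneous symmetric function and $p_n$ the $n$-th power sum.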

Several "extensions" of $\mathbf{Symm}$ are now well-known: $\mathbf{QSymm}$ and $\mathbf{NSymm}$ are discusssed in Hazewinkel's http://arxiv.org/abs/0804.3888 , http://arxiv.org/abs/math/0410468 and http://arxiv.org/abs/math/0410470 . The "further-up" extensions are newer: The Loday-Ronco Hopf algebra of trees (see Marcelo Aguiar, http://www.math.tamu.edu/~maguiar/Loday.pdf ) and the Malvenuto-Reutenauer Hopf algebra of permutations (see Marcelo Aguiar, http://www.math.tamu.edu/~maguiar/MR.pdf ). These things are usually studied over $k=\mathbb Z$, but everything you do over $\mathbb Z$ trivially extends to all other $k$'s.


Here are two different definitions of the Hopf algebra structure. One needs to work in infinitely many variables as you indicate.

From the point of view of the representation theory of the symmetric group, the product in $\Lambda$ can be defined as $$ V \cdot W = \mathrm{Ind}_{S_n \times S_k}^{S_{n+k}}V \otimes W$$ for $V$ a representation of $S_n$ and $W$ a representation of $S_k$; this product is then extended bilinearly. The coproduct then has a natural dual definition: $$ \Delta(V) = \sum_{i+j = n} \mathrm{Res}^{S_n}_{S_i \times S_j} V, $$ where a representation of $S_i \times S_j$ defines an element in $\Lambda \otimes \Lambda$ in the natural way.
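For a concrete instance: inducing the trivial representation of $S_1 \times S_1$ up to $S_2$ gives the regular representation of $S_2$, which decomposes as trivial plus sign; under the characteristic map described below, this is the identity $s_1 \cdot s_1 = s_2 + s_{11}$ of Schur functions.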

The connection between symmetric functions and representations of $S_n$ is as follows. The graded piece $\Lambda^n$ is isomorphic to the group of virtual representations of $S_n$ via the so-called characteristic map. A virtual representation $V$ is mapped to the symmetric function $$ \mathrm{ch}(V) = \frac 1 {n!} \sum_{\sigma \in S_n} \mathrm{Tr}\left(\sigma \mid V\right) \psi(\sigma) $$ where $$ \psi(\sigma) = \prod_{(i_1\cdots i_k) \text{ a cycle in } \sigma} p_k. $$ This is in fact an isometry relative to the usual inner product on symmetric functions, and the natural inner product on representations for which irreducible representations form an orthonormal basis. The representation associated to the Young diagram $\lambda$ corresponds to the Schur function $s_\lambda$, so equivalently $$\langle s_\lambda, s_\mu \rangle = \delta_{\lambda \mu}.$$
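As a small worked example: for the trivial representation $\mathbf 1$ of $S_2$, both group elements act with trace $1$, and $\psi(e) = p_1^2$, $\psi((12)) = p_2$, so $$ \mathrm{ch}(\mathbf 1) = \frac 1 2 \left(p_1^2 + p_2\right) = h_2 = s_2, $$ matching the one-row Young diagram with two boxes.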

A more direct definition of the coproduct is in terms of power sums. Declare each power sum $p_n$ to be primitive, $$ \Delta(p_n) = p_n \otimes 1 + 1 \otimes p_n, $$ and extend $\Delta$ as an algebra homomorphism; on a product of power sums this gives $$ \Delta(p_{i_1}\cdots p_{i_n}) = \sum_{S \subseteq \{1,\ldots,n\}} \Big(\prod_{k \in S} p_{i_k}\Big) \otimes \Big(\prod_{k \notin S} p_{i_k}\Big). $$ In particular the power sums $p_n$ are primitive elements for this coproduct, and they span the module of primitive elements. The elementary and complete homogeneous symmetric functions form divided power sequences for this Hopf algebra structure.
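For instance, multiplicativity of $\Delta$ together with the primitivity of $p_1$ and $p_2$ forces $$ \Delta(p_1 p_2) = p_1 p_2 \otimes 1 + p_1 \otimes p_2 + p_2 \otimes p_1 + 1 \otimes p_1 p_2. $$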

The antipode is uniquely determined by the coproduct. You prove this by induction over the degree: when you expand $\Delta(x)$ for homogeneous $x$ of positive degree, you find terms of lower degree (where the antipode is already known) together with the two terms $x \otimes 1$ and $1 \otimes x$, from which you can solve for the antipode on $x$. This holds in any graded connected Hopf algebra.
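As a worked example in degree $2$: from $\Delta(e_2) = e_2 \otimes 1 + e_1 \otimes e_1 + 1 \otimes e_2$, the antipode axiom yields $S(e_2) + S(e_1)\, e_1 + e_2 = \varepsilon(e_2) = 0$; since $e_1$ is primitive, $S(e_1) = -e_1$, and hence $S(e_2) = e_1^2 - e_2 = h_2$.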


I'm surprised nobody has mentioned the connection to the Littlewood-Richardson coefficients so far in response to "What is it useful for?". The coproduct $$ \Delta(h_k) = \sum_{i+j = k} h_i \otimes h_j $$ gives rise to the formula $$ \Delta( s_\lambda ) = \sum_{\mu,\nu} c_{\mu,\nu}^{\lambda} s_\mu \otimes s_\nu, $$ where $c_{\mu,\nu}^{\lambda}$ is the Littlewood-Richardson coefficient and $s_\lambda$ is the Schur function of shape $\lambda$. Hopf algebra techniques have recently been used to derive "skew" Pieri rules in work of Lam, Lauve and Sottile.
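As an example, for $\lambda = (2,1)$ one finds $$ \Delta(s_{21}) = s_{21} \otimes 1 + s_2 \otimes s_1 + s_{11} \otimes s_1 + s_1 \otimes s_2 + s_1 \otimes s_{11} + 1 \otimes s_{21}, $$ i.e. all the Littlewood-Richardson coefficients occurring here equal $1$. Equivalently, $\Delta(s_\lambda) = \sum_{\mu} s_\mu \otimes s_{\lambda/\mu}$ with skew Schur functions on the right, which is where the "skew" Pieri rules come in.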