Axiomatic approach to polynomials?

Below is a sketch of one way to "axiomatize" the polynomial ring $\rm\,R[x]\:$ over a commutative ring $\rm\,R.$

  • $\rm\:R[x]\:$ is a commutative ring generated by $\rm\:R\:$ and $\rm x\not\in R$

  • $\rm\:f = g\:$ in $\rm\:R[x]\iff$ $\rm\:f = g\:$ is provable from the ring axioms (and equalities of $\rm\:R$)

Said informally, $\rm\:R[x]\:$ is the "freest" possible ring obtained by hypothesizing only that it is a ring containing $\rm\:R\:$ and $\rm\:x.\:$ In particular, $\rm\:R[x]\:$ contains no more elements than those that can be generated by iteratively applying ring operations to $\rm\:x\:$ and elements of $\rm\:R,\:$ and $\rm\:R[x]\:$ satisfies no equalities other than those that can be deduced from the ring axioms (and equalities in $\rm\:R).\:$

Now, since $\rm\:R[x]\:$ satisfies all ring axioms, the usual proofs show that every element is equal to one in standard normal form $\rm\:r_0 + r_1\, x +\cdots+ r_n x^n.\:$ To show that no two distinct normal forms are equal, one can use van der Waerden's trick (cf. the second-to-last post here, now appended below), which here amounts to using a regular representation.
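Here is one way the trick can be carried out in this case (only a sketch, with the routine verifications omitted). Let $\rm\,M = R^{(\mathbb N)}\,$ be the free $\rm\,R$-module with basis $\rm\,e_0, e_1, e_2,\ldots\,$ (think $\rm\,e_i \leftrightarrow x^i$). Inside the ring $\rm\,End_R(M)\,$ (where associativity comes for free, since composition of maps is associative) take the shift operator $\rm\,X : e_i\mapsto e_{i+1}\,$ and, for each $\rm\,r\in R,\,$ the scaling operator $\rm\,\hat r : m\mapsto r\,m.\,$ These operators commute pairwise, so they generate a commutative subring containing a copy of $\rm\,R\,$ (via $\rm\,r\mapsto \hat r$) and the element $\rm\,X.\,$ Since $\rm\,R[x]\,$ is the freest such ring, the assignment $\rm\,x\mapsto X,\ r\mapsto \hat r\,$ extends to a ring homomorphism $\rm\,\varphi : R[x]\to End_R(M),\,$ and

$$\rm \varphi(r_0 + r_1 x + \cdots + r_n x^n)(e_0)\, =\, r_0\, e_0 + r_1\, e_1 + \cdots + r_n\, e_n$$

so distinct normal forms map to distinct operators, hence are distinct in $\rm\,R[x].$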

Alternatively, more semantically, but less effectively, one can view the elements of $\rm\:R[x]\:$ as identities of $\rm\:R$-algebras, which immediately yields the universal mapping property of $\rm\:R[x].$

These are specializations of various techniques for constructing free algebras in equationally defined classes of algebras. One can find discussions of such constructions in most treatises on universal algebra. For a particularly readable (and comprehensive) introduction see George Bergman's (freest!) textbook An Invitation to General Algebra and Universal Constructions. For convenience I excerpt below my linked MathOverflow remarks on van der Waerden's trick, etc., which may be of interest to the OP and others. See also this answer on the motivation for formal vs. functional polynomials.


How do you interpret the indeterminate "$\rm x$" in ring theory from the set theory viewpoint? How do you write down $\rm\:R[x]\:$ as a set? Is it appropriate/correct to just say that

$$\rm R[x] = \{f:R\to R \mid \exists\, n\in\mathbb N\ \text{ and }\ c_i \in R\ \text{ such that }\ f(x) = c_0+c_1 x+\cdots+c_nx^n\}$$ This appears to be a very analytic definition. Is there a better definition that highlights the algebraic aspect of the set of polynomials? $\ \ \ $ (quoted from a MathOverflow question)

The OP does explicitly reveal some motivation, namely he seeks to understand how to construct $\rm\:R[x]\:$ set-theoretically and to better understand the algebraic conception of polynomial rings. Such issues are not only of interest to students. For example, Pete L. Clark's answer refers to his notes on commutative algebra - where he discusses such topics at much greater length than do most algebra textbooks. There, while discussing various constructions of $\rm\:R[x],\:$ he remarks:

However it is tedious to verify associativity. Over the years I have developed a slogan: if you are working hard to show that some binary operation is associative, you are missing part of a bigger picture. Unfortunately this is not such a great test case for the slogan: I do not know of a truly snappy conceptual proof of the associativity of multiplication in a polynomial ring.

-- Pete L. Clark, Commutative Algebra, Sec. 4.3, p. 38.

In fact there is a "bigger picture", including a construction that achieves what he seeks. Such topics are probably not well known to those who have not studied universal algebra. But they certainly deserve to be better known, because they provide deeper conceptual and foundational insight. [...]

I should emphasize that my motivation differs from that in Pete's notes. My goal is not primarily to find a construction of $\rm\:R[x]\:$ that is simplest (e.g. with easily verified ring axioms). Rather, my motivation has further pedagogical aims. Namely, I desire a construction that is faithful to the intended application of $\rm\:R[x]\:$ as a universal/generic object (e.g. recall my universal proofs of determinant identities by canceling $\rm\:det(A)\:$ for generic $\rm A).\:$ With that goal in mind, one may motivate quite naturally the construction of $\rm\:R[x]\:$ as a quotient $\rm\:T/Q\:$ of the absolutely-free ring term algebra $\rm\:T = R\{x\}.\:$ For $\rm\:T/Q\:$ to satisfy the desired universal mapping property it is obvious what the congruence Q must be: it must identify two terms $\rm\:s(x), t(x)\:$ precisely when they are identical under all specializations into rings, i.e. when $\rm\:s(x) = t(x)\:$ is an identity of rings. So, e.g. $\rm\:mod\ Q\:$ we have $\rm\:1*x = x,\ \ x*(x+1) = x*x+x.\:$ In particular $\rm\:T/Q\:$ is a ring since it satisfies all (instances of) ring identities (esp. the ring axioms).

Next we show that the standard sum-of-monomials representation yields a normal form for elements of $\rm\:T/Q,\:$ i.e. every element of $\rm\:T/Q\:$ is uniquely represented by some such normalized polynomial of $\rm\:T.\:$ Existence is trivial: simply apply the ring axioms to reduce a representative to normal polynomial form. It is less trivial to prove uniqueness, i.e. that distinct normal forms represent distinct elements of $\rm\:T/Q.\:$ For this there is a common trick that often succeeds: exploit a convenient representation of the ring. Here a regular representation does the trick. This method is called the "van der Waerden trick" since he employed it in constructions of group coproducts (1948) and Clifford algebras (1966).
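To make the reduction concrete, here is a minimal sketch in code (not part of the original discussion) over the base ring $\rm\,R = \mathbb Z$: terms of the term algebra $\rm\,T\,$ are trees built from constants, $\rm\,x,\,$ $+$ and $*$, and `normalize` collapses a term to the coefficient list of its normal form, so two terms are congruent mod $\rm\,Q\,$ exactly when their normal forms coincide. All names are illustrative.

```python
# A sketch of the term algebra T = Z{x} and reduction to normal form.
# A term is ('const', c), ('var',), ('add', s, t) or ('mul', s, t).
# normalize() returns the coefficient list [r0, r1, ..., rn] of the
# normal form r0 + r1*x + ... + rn*x^n, so two terms are congruent
# mod Q iff their normal forms are equal.

def normalize(term):
    tag = term[0]
    if tag == 'const':                       # element of the base ring Z
        return [term[1]]
    if tag == 'var':                         # the indeterminate x
        return [0, 1]
    a, b = normalize(term[1]), normalize(term[2])
    if tag == 'add':                         # coefficient-wise addition
        n = max(len(a), len(b))
        a += [0] * (n - len(a))
        b += [0] * (n - len(b))
        return trim([p + q for p, q in zip(a, b)])
    if tag == 'mul':                         # convolution of coefficients
        c = [0] * (len(a) + len(b) - 1)
        for i, p in enumerate(a):
            for j, q in enumerate(b):
                c[i + j] += p * q
        return trim(c)
    raise ValueError("unknown term: " + repr(term))

def trim(coeffs):
    # drop trailing zero coefficients, keeping at least [0] for the zero element
    while len(coeffs) > 1 and coeffs[-1] == 0:
        coeffs.pop()
    return coeffs

# The two identities mod Q mentioned above: 1*x = x and x*(x+1) = x*x + x.
x, one = ('var',), ('const', 1)
assert normalize(('mul', one, x)) == normalize(x)
lhs = ('mul', x, ('add', x, one))
rhs = ('add', ('mul', x, x), x)
assert normalize(lhs) == normalize(rhs) == [0, 1, 1]
```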

Notice that this development is pleasingly conceptual: $\rm\:R[x]\:$ is constructed quite naturally as the solution to a universal mapping problem - a problem which is motivated by the desire to be able to perform generic proofs, as in said proofs of determinant identities. Everything is well-motivated - nothing is pulled out of a hat.

The same construction of free algebras works much more generally, e.g. for any class of algebras that admits a first-order equational axiomatization. Although there are also a few other known methods to construct such free algebras, this method is the most natural pedagogically and constructively. Indeed, this is the way most computer algebra systems implement free algebras. The difficulty lies not so much in the construction of the free algebra but, rather, in inventing normal-form algorithms so that one may compute effectively in such free algebras. Although this is trivial for rings and groups, for other algebras it can be quite difficult - e.g. the free modular lattice on $5$ generators has an undecidable word problem, i.e. no algorithm exists for deciding equality. Of course much work has been done trying to discover such normal-form algorithms, e.g. google Knuth-Bendix completion and Bergman's diamond lemma. Ditto for algorithms for computing in quotients of free algebras, i.e. algebras presented by generators and relations, e.g. Gröbner bases, Todd-Coxeter coset enumeration, etc.

It is worth emphasizing that van der Waerden's trick comes in handy in many similar situations. For example, see Alberto Gioia's Master's thesis (under Hendrik Lenstra), Normal forms in combinatorial algebra, 2009. It is also discussed in George Bergman's classic paper [1] on the Diamond Lemma and in his beautiful textbook [2] on universal algebra.

[1] Bergman, G. The diamond lemma for ring theory, Advances in Mathematics 29 (1978), 178-218.
[2] Bergman, G. An Invitation to General Algebra and Universal Constructions.


Short answer: (commuting) polynomials with coefficients from a (commutative, associative, unital) ring R are just expressions that can be unambiguously evaluated in any ring containing R.

Some definitions

I'll describe the definition a few times, so you can see how things like this work.

In this answer all rings and algebras are commutative, associative, and unital.

(1) Basically, PolynomialRing(R) is an R-algebra T together with a special element t of T, where T has an extra method called Evaluate(S,s) that takes another R-algebra S and an element s of S and produces the unique R-algebra homomorphism from T to S that sends t to s. The existence of the homomorphism is more about "what you can do with T", but the uniqueness of the homomorphism is what pins down the polynomial ring over R uniquely (up to R-algebra isomorphism).
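As a rough sketch of this interface in code (the names follow the PolynomialRing/Evaluate description above and are purely illustrative, not any real library's API; here R is the integers and the target algebra S is modeled by ordinary Python arithmetic):

```python
# A sketch of the "PolynomialRing(R) plus Evaluate" interface described in (1).
class Polynomial:
    def __init__(self, coeffs):
        # coefficient list [c0, c1, ..., cn], i.e. c0 + c1*t + ... + cn*t^n
        self.coeffs = list(coeffs)

    def evaluate(self, s):
        """Evaluate(S, s): the image of this element under the unique
        R-algebra homomorphism T -> S sending t to s, computed by
        Horner's scheme; s can be any value whose + and * Python models."""
        result = 0 * s                    # the zero of S
        for c in reversed(self.coeffs):
            result = result * s + c
        return result

p = Polynomial([2, 0, 1])                 # the element 2 + t^2 of T
print(p.evaluate(5))                      # evaluate at s = 5 in Z:    27
print(p.evaluate(2 + 1j))                 # evaluate at s = 2+i in C:  (5+4j)
```

The uniqueness clause is what justifies calling `evaluate` "the" evaluation: any homomorphism sending t to s must agree with it.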

Similarly, $\mathbb{R}$ is an ordered field, so we know what we can do with it: we can add, subtract, multiply, divide, and compare. We need completeness to know we have the right field (and it gives us a few other neat operations, like sup and inf, so that limits exist). The evaluation homomorphism tells us what we can do with the algebra, and the uniqueness of the evaluation homomorphism tells us we have the right algebra (and gives us a few other neat category-theoretic operations, like adjoints, so that something like limits exist).

(2) Given a ring R the polynomial ring with coefficients R is an R-algebra T with a distinguished element t such that for every R-algebra S and element s of S there is a unique R-algebra homomorphism from T to S that sends t to s.

The homomorphism is called “evaluating the polynomial at $t=s$”.

If $(T,t)$ and $(X,x)$ are two such polynomial rings, then there is a unique R-algebra homomorphism from T to T that sends t to t (namely the identity), but the composition of the homomorphisms from T to X and from X to T (sending t to x and x to t, respectively) also sends t to t. By uniqueness that composition is the identity, and likewise in the other order, so both homomorphisms are isomorphisms.
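In symbols, writing $\phi\colon T\to X$ and $\psi\colon X\to T$ for the homomorphisms with $\phi(t)=x$ and $\psi(x)=t$ (this is just the previous paragraph restated):

$$(\psi\circ\phi)(t) = \psi(x) = t \ \Rightarrow\ \psi\circ\phi = \mathrm{id}_T, \qquad (\phi\circ\psi)(x) = \phi(t) = x \ \Rightarrow\ \phi\circ\psi = \mathrm{id}_X,$$

where each implication uses the uniqueness clause with S = T (resp. S = X) and s = t (resp. s = x).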

(3) Given a ring R the polynomial ring with coefficients R in n indeterminates is an R-algebra T with distinguished elements $t_1,t_2,\dots,t_n$ such that for every R-algebra S with elements $s_1,s_2,\dots,s_n$ there is a unique R-algebra homomorphism from T to S sending $t_1,t_2,\dots,t_n$ to $s_1,s_2,\dots,s_n$ (in that order).

A sequence $s_1,s_2,\dots,s_n$ is a function from the numbers $1,2,\dots,n$ to S. One doesn't have to use numbers for variable identifiers. Any set X can work.

(4) Given a ring R and a set X, the polynomial ring with coefficients R and indeterminates X is an R-algebra T containing X as a subset such that for any R-algebra S and function from X to S, there is a unique R-algebra homomorphism from T to S agreeing with the function on X.

What are algebras?

In case you don't already like R-algebras or rings, let me describe them in simple terms.

Begin with the case that R is a field. An R-algebra is either of the following equivalent ideas (choose one that makes sense):

(1) An R-vector space S with a multiplication that is associative, commutative, and unital, and that works well with scalar multiplication: $(rv)(w) = r(vw) = v(rw)$ for r in R, and v, w in the vector space.

(2) A ring S that contains R as a subring (where the multiplicative identity of S is the same as the one in R).

An R-algebra homomorphism is just a function that preserves everything: $f(v+w) = f(v)+f(w)$, $f(rv) = r f(v)$, and $f(vw) = f(v) f(w)$. Notice that $f(r)=r$ if you take viewpoint 2.

Since R-algebra homomorphisms preserve addition, multiplication, and coefficients, they allow us to evaluate any "polynomial", where by polynomial I mean a recipe for combining ring elements using addition, multiplication, and scalar multiplication, like "Take a ring element, square it, add 3, multiply by the original element, and add 5", better known as $x\mapsto (x^2+3)x+5 = x^3 + 3x + 5$. The right-hand side is just a polynomial as we know it, and R-algebra homomorphisms allow us to say that if x is replaced by a specific algebra element, then we know what the entire polynomial should be (just follow the recipe).
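Concretely (a sketch; any elements whose + and * Python models will do, and nothing here is specific to one algebra):

```python
from fractions import Fraction

# The recipe from the text: take x, square it, add 3, multiply by the
# original element, add 5 -- i.e. x -> (x^2 + 3)x + 5 = x^3 + 3x + 5.
# Because homomorphisms preserve +, * and the coefficients 3 and 5, the
# same recipe makes sense for x in any algebra containing the integers.
def recipe(x):
    return (x * x + 3) * x + 5

print(recipe(2))                   # in Z:  19
print(recipe(Fraction(1, 2)))      # in Q:  53/8
print(recipe(1 + 2j))              # in C:  (-3+4j)
```

The same few lines of "recipe" compute the polynomial in Z, Q, and C, which is exactly the evaluation-homomorphism point.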

So polynomial rings just have to "exist" in some sense: they are just recipes for combining elements of algebras (recipes of adding and multiplying). The surprising thing is that they exist in a very simple sense: they are themselves R-algebras. The recipes themselves can be added and multiplied.

Proving that they exist as R-algebras requires some goofy things like sequences (just like proving the existence of a complete ordered field required goofiness like Cauchy sequences or Dedekind cuts), but working with them as recipes doesn't require anything like that at all.

Incidentally, the definition of an algebra over a general commutative ring R is not much different: replace "vector space" with "module" and "S contains R as a subring" with "there is a ring homomorphism from R to S".

CGT aside

On the off chance you like computational group theory: a common way of representing elements of "polynomial groups" (called free groups) is by "monomials" (groups only have multiplication, so no adding allowed) or "words" (like strings in CS). However, words can be a little limiting in practical computations, and other data types are used, including "straight-line programs", which are quite literally recipes for multiplying stuff together: "Take element #1 and multiply it by element #2, placing the result in element #1. Take element #1 and multiply it by element #1, placing the result in element #1." Or, more briefly, $[a,b] \mapsto [(ab)^2, b]$.

These recipes can often be stored in space logarithmic in the space required for general words. They can also speed up some calculations by recording a particular efficient "addition chain" to produce a list of group elements (multiplying group elements sometimes takes a few seconds per multiplication, so it is important not to waste them).
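Here is a toy version of such a straight-line program (a sketch with illustrative names, not any particular CGT system's format):

```python
# A toy straight-line program: each instruction (dst, i, j) means
# "multiply slot i by slot j and store the result in slot dst".
# This program computes [a, b] -> [(a*b)^2, b], as in the text.
program = [
    (0, 0, 1),   # slot 0 := slot 0 * slot 1   (now a*b)
    (0, 0, 0),   # slot 0 := slot 0 * slot 0   (now (a*b)^2)
]

def run(program, slots, mul):
    slots = list(slots)
    for dst, i, j in program:
        slots[dst] = mul(slots[i], slots[j])
    return slots

# The same recipe evaluated in two different groups: the integers under
# addition (so the "product" is a sum) and the nonzero rationals under
# multiplication.
print(run(program, [3, 5], lambda x, y: x + y))   # -> [16, 5]
print(run(program, [3, 5], lambda x, y: x * y))   # -> [225, 5]
```

Iterating the single instruction `(0, 0, 0)` k times computes $a^{2^k}$ in k multiplications, which is the logarithmic-space (and multiplication-saving) phenomenon mentioned above.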

A lot of times one works with recipes as if they are just recipes, but it is occasionally important to know that the recipes themselves form a group and that there is a unique group homomorphism (evaluation) that takes the formal "element #1" to a specific element of a specific group.


Edit: As Bill Dubuque has pointed out, the following is very similar to the standard approach referred to by the OP. Nevertheless, the following definition differs from that standard definition because it presents the polynomial ring as a module with a basis, equipped with additional structure satisfying axioms.

Edit 2: I've interpreted "axiomatization" as a presentation of a structure as a family of sets and functions on these sets satisfying some axioms, i.e. a presentation via (possibly multi-sorted) first-order theories.

Here is a possible axiomatization of the polynomial ring in one variable. Let $R$ be a commutative unital ring. A polynomial ring over $R$ is just an $R$-module, which we denote $R[x]$, having a countable basis indexed by $\mathbb N$, together with an $R$-bilinear map $b \colon R[x] \times R[x] \to R[x]$ such that for all $n,m \in \mathbb N$, if $x^n$ and $x^m$ are respectively the $n$-th and $m$-th elements of the basis, then $b(x^n,x^m)=x^{n+m}$.
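Spelled out on general elements (just unwinding the $R$-bilinearity; nothing is added to the definition above), the multiplication these axioms force is the familiar convolution formula:

$$b\Big(\sum_{n} a_n x^n,\ \sum_{m} c_m x^m\Big) \;=\; \sum_{n,m} a_n c_m\, b(x^n,x^m) \;=\; \sum_{k} \Big(\sum_{n+m=k} a_n c_m\Big) x^k,$$

which is the usual multiplication of polynomials.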

This definition easily extends to polynomial rings in $n$ variables.