# Rigorous underpinnings of infinitesimals in physics

When I asked my undergrad analytic mechanics professor "what does it mean for a rotation to be infinitesimal?" after he hand-wavily presented this topic in class, he answered "it means it's really small." At that point, I just walked away. Later that day I emailed my TA who set me straight by pointing me to a book on Lie theory.

Fortunately, I don't intend to write an answer like my professor's.

In general, whenever you see the term "infinitesimal BLANK" in physics, you can be relatively certain that this is merely a placeholder for "first order (aka linear) approximation to BLANK."

Let's look at one of the most important examples.

**Infinitesimal transformations.**

To be more rigorous about this, let's consider the special case of "infinitesimal transformations." If my general terminological prescription above is to be accurate, we have to demonstrate that we can make the concept of a "first order approximation to a transformation" rigorous, and indeed we can.

For concreteness, let's restrict the discussion to transformations on normed vector spaces. Let an open interval $I=(a,b)$ containing $0$ be given, and suppose that for each $\epsilon\in I$, $T_\epsilon$ is a transformation on some normed vector space $X$, with $T_0$ the identity. If $T_\epsilon$ depends smoothly on $\epsilon$, we define the infinitesimal version $\widehat T_\epsilon$ of $T_\epsilon$ as follows. For each point $x\in X$, we have
$$
\widehat T_\epsilon(x) = x + \epsilon\frac{\partial}{\partial\epsilon}T_{\epsilon}(x)\bigg|_{\epsilon=0}
$$
The intuition here is that we can imagine expanding $T_\epsilon(x)$ as a power series in $\epsilon$:
$$
T_\epsilon(x) = x + \epsilon T_1(x) + \mathcal O(\epsilon^2)
$$
in which case the above expression for the infinitesimal version of $T_\epsilon$ gives
$$
\widehat {T}_\epsilon(x) = x+\epsilon T_1(x)
$$
so the transformation $\widehat T$ encodes the behavior of the transformation $T_\epsilon$ to first order in $\epsilon$. Physicists often call the transformation $T_1$ the **infinitesimal generator** of $T_\epsilon$.

**Example.** Infinitesimal rotations in 2D

Consider the following rotation of the 2D Euclidean plane: $$ T_\epsilon = \begin{pmatrix} \cos\epsilon& -\sin\epsilon\\ \sin\epsilon& \cos\epsilon\\ \end{pmatrix} $$ This transformation has all of the desired properties outlined above, and its infinitesimal version is $$ \widehat T_\epsilon = \begin{pmatrix} 1& 0\\ 0& 1\\ \end{pmatrix} + \begin{pmatrix} 0& -\epsilon\\ \epsilon& 0\\ \end{pmatrix} $$ If we act on a point in 2D with this infinitesimal transformation, then we get a good approximation to what the full rotation does for small values of $\epsilon$ because we have made a linear approximation. But independent of this statement, notice that the infinitesimal version of the transformation is rigorously defined.
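To make the "good approximation" claim concrete, here is a quick numerical sanity check (a Python sketch; the function names are my own): compare the full rotation with its linearized version and watch the error shrink quadratically as $\epsilon\to 0$, exactly as the $\mathcal O(\epsilon^2)$ remainder predicts.

```python
import math

def rotate(eps, x, y):
    """Full rotation T_eps acting on the point (x, y)."""
    c, s = math.cos(eps), math.sin(eps)
    return (c * x - s * y, s * x + c * y)

def rotate_lin(eps, x, y):
    """Infinitesimal (first-order) version: (I + eps * J)(x, y)."""
    return (x - eps * y, y + eps * x)

# The error of the linear approximation shrinks like eps^2.
for eps in (0.1, 0.01, 0.001):
    fx, fy = rotate(eps, 1.0, 0.0)
    lx, ly = rotate_lin(eps, 1.0, 0.0)
    err = math.hypot(fx - lx, fy - ly)
    print(f"eps={eps}: error={err:.2e}")
```

Each tenfold decrease in $\epsilon$ decreases the error by roughly a factor of one hundred, which is the signature of a first-order (linear) approximation.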

**Relation to Lie groups and Lie algebras.**

Consider a Lie group $G$. This is essentially a group $G$ that can also be thought of as a smooth manifold in such a way that the group multiplication and inverse maps are also smooth. Each element of this group can be thought of as a transformation, and we can consider a smooth, one-parameter family of group elements $g_\epsilon$ with the property that $g_0 = \mathrm{id}$, the identity in the group. Then as above, we can define an infinitesimal version of this one-parameter family of transformations; $$ \widehat g_\epsilon = \mathrm{id} + \epsilon v $$ The coefficient $v$ of $\epsilon$ in this first order approximation is basically (this is exactly true for matrix Lie groups) an element of the Lie algebra of this Lie group. In other words, Lie algebra elements are infinitesimal generators of smooth, one-parameter families of Lie group elements that start at the identity of the group. For the rotation example above, the matrix $$ \begin{pmatrix} 0& -1\\ 1& 0\\ \end{pmatrix} $$ is therefore an element of the Lie algebra $\mathfrak{so}(2)$ of the Lie group $\mathrm{SO}(2)$ of rotations of the Euclidean plane. As it turns out, transformations associated with Lie groups are all over the place in physics (particularly in elementary particle physics and field theory), so studying these objects becomes very powerful.
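Conversely, for matrix Lie groups the full one-parameter family can be recovered from its generator via the matrix exponential, $g_\epsilon = \exp(\epsilon v)$. A minimal Python sketch (helper names are mine; the exponential is computed by a truncated power series) shows that exponentiating the $\mathfrak{so}(2)$ generator above reproduces the full rotation matrix, not just its linearization:

```python
import math

J = [[0.0, -1.0], [1.0, 0.0]]  # generator of so(2)

def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, terms=30):
    """Matrix exponential of a 2x2 matrix via truncated power series."""
    result = [[1.0, 0.0], [0.0, 1.0]]  # identity
    term = [[1.0, 0.0], [0.0, 1.0]]    # running A^n / n!
    for n in range(1, terms):
        term = mat_mul(term, A)
        term = [[term[i][j] / n for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

eps = 0.7
R = expm([[eps * J[i][j] for j in range(2)] for i in range(2)])
# R agrees with the full rotation matrix [[cos, -sin], [sin, cos]].
print(R[0][0], math.cos(eps))
print(R[1][0], math.sin(eps))
```

Note that $\epsilon = 0.7$ is not small here: the exponential map rebuilds the finite transformation from the Lie algebra element, which is the precise sense in which the generator "knows" the whole one-parameter family.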

**Invariance of a Lagrangian.**

Suppose we have a Lagrangian $L(q,\dot q)$ defined on the space of generalized positions $q$ and velocities $\dot q$ (the tangent bundle of the configuration manifold of a classical system). Suppose further that we have a transformation $T_\epsilon$ defined on this space. Then we say that the Lagrangian is **invariant** under this transformation provided
$$
L(T_\epsilon(q,\dot q)) = L(q, \dot q)
$$
The Lagrangian is said to be **infinitesimally invariant** under $T_\epsilon$ provided
$$
L(T_\epsilon(q,\dot q)) = L(q, \dot q) + \mathcal O(\epsilon^2)
$$
In other words, it is invariant to first order in $\epsilon$. As you can readily see, infinitesimal invariance is weaker than invariance.
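Infinitesimal invariance can be checked numerically: since the $\mathcal O(\epsilon^2)$ terms drop out of a derivative at $\epsilon = 0$, the condition is just $\frac{\partial}{\partial\epsilon}L(T_\epsilon(q,\dot q))\big|_{\epsilon=0}=0$. A Python sketch (names are my own choices), using a rotationally invariant Lagrangian where exact invariance, and hence infinitesimal invariance, holds:

```python
import math

def lagrangian(x, y, vx, vy):
    """Kinetic energy minus a central potential: rotationally invariant."""
    return 0.5 * (vx**2 + vy**2) - 1.0 / math.hypot(x, y)

def transform(eps, x, y, vx, vy):
    """Rotation by eps acting on both positions and velocities."""
    c, s = math.cos(eps), math.sin(eps)
    return (c * x - s * y, s * x + c * y, c * vx - s * vy, s * vx + c * vy)

# Finite-difference estimate of dL/deps at eps = 0; it should vanish.
state = (1.0, 0.4, -0.3, 0.8)
h = 1e-6
dL = (lagrangian(*transform(h, *state))
      - lagrangian(*transform(-h, *state))) / (2 * h)
print(dL)  # ≈ 0
```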

Interestingly, *only infinitesimal invariance of the Lagrangian is required for certain results (most notably Noether's theorem) to hold*. This is one reason why infinitesimal transformations, and therefore Lie groups and Lie algebras, are useful in physics.

**Application: Noether's theorem.**

Let a Lagrangian $L:\mathscr C\times\mathbb R\to\mathbb R$ be given, where $\mathscr C$ is some sufficiently well-behaved space of paths on configuration space $Q$, and let a one-parameter family of transformations $T_\epsilon:\mathscr C\to\mathscr C$ starting at the identity be given. The first order change in the Lagrangian under this transformation is $$ \delta L(q,t) = \frac{\partial}{\partial\epsilon}L(T_\epsilon(q),t)\Big |_{\epsilon=0} $$ One (not the strongest) version of Noether's theorem says that if $L$ is local in $q$ and its first derivatives, namely if there is a function $\ell$ such that (in local coordinates on $Q$) $L(q,t) = \ell(q(t), \dot q(t), t)$, and if $$ \delta L(q,t) = 0 $$ for all $q\in\mathscr C$ that satisfy the equations of motion, namely if the Lagrangian exhibits infinitesimal invariance, then the quantity $$ G = \frac{\partial \ell}{\partial \dot q^i}\delta q^i, \qquad \delta q^i(t) = \frac{\partial}{\partial\epsilon}T_\epsilon(q)^i(t)\bigg|_{\epsilon=0} $$ is conserved along solutions to the equations of motion. The proof is a couple of lines: just differentiate $G$ evaluated on a solution with respect to time, then use the chain rule and the Euler-Lagrange equations to show the result is zero.
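The theorem can be watched in action numerically. The sketch below (a Python illustration of my own, not from the original) takes a particle in a central potential, where rotational invariance of the Lagrangian makes $G = \frac{\partial\ell}{\partial\dot x}\delta x + \frac{\partial\ell}{\partial\dot y}\delta y = x\dot y - y\dot x$ the angular momentum, integrates the equations of motion, and checks that $G$ does not drift:

```python
# Central potential V(r) = -1/r: the Lagrangian is rotationally invariant,
# so Noether's theorem predicts conservation of G = x*vy - y*vx.
def accel(x, y):
    """Acceleration from the potential V(r) = -1/r."""
    r3 = (x * x + y * y) ** 1.5
    return -x / r3, -y / r3

def noether_charge(x, y, vx, vy):
    # G = (dL/dvx)*delta_x + (dL/dvy)*delta_y with delta_x = -y, delta_y = x.
    return x * vy - y * vx

# Symplectic Euler integration of the equations of motion.
x, y, vx, vy = 1.0, 0.0, 0.0, 1.2
dt = 1e-4
G0 = noether_charge(x, y, vx, vy)
for _ in range(100_000):
    ax, ay = accel(x, y)
    vx += dt * ax
    vy += dt * ay
    x += dt * vx
    y += dt * vy
print(abs(noether_charge(x, y, vx, vy) - G0))  # stays near zero
```

The transformation only needs to be invariant to first order for this to work, which is why the infinitesimal generator, rather than the full rotation, is what enters $G$.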

**What is an infinitesimal quantity like $\delta$ to the physicist?**

To most physicists, it means the same thing it meant to Newton, Leibniz, and Euler. It means something that's small enough that we can apply a certain informally defined body of techniques to it and get correct answers.

To physicists who know more about post-1960 mathematics, it means the same thing, except that they are aware that this body of techniques was eventually formally defined and proved consistent. There are in fact multiple ways of doing this, and for a physicist's purposes it never matters which formalization is used. Examples of such formalizations are non-standard analysis and smooth infinitesimal analysis.

The important thing to understand here is that the results obtained by people like Euler were *right*. There is nothing wrong with the informal versions of the techniques.

**Why do physicists argue using infinitesimals rather than "standard" calculus?**

Infinitesimals *were* the standard calculus for hundreds of years. The reason the subject was originally developed in terms of infinitesimals is because it's the most natural and comfortable way of reasoning about the subject. Often one finds that when a particular argument can be expressed either using infinitesimals or using epsilon-delta methods, the depth of quantifiers is lower by one in the former case.

**What is the physical meaning of an infinitesimal transformation? How does it relate to Lie algebras?**

In the physical example you gave, it means what it says: an infinitesimal rotation. The rotations form a Lie group, a continuous group connected to the identity, and its elements can be built up from infinitesimal rotations; those infinitesimal generators are the elements of the associated Lie algebra.

**Is there a rigorous theoretical apparatus for justifying the computations shown above?**

Yes, in fact there's more than one, as explained above.