"Velvet way" to Grassmann numbers

I don't have an answer to the question "why would one want to consider such crazy stuff in physics?" since I don't know much physics, but as a mathematics student I do have an answer to the question "why would one want to consider such crazy stuff in mathematics?"

What physicists call Grassmann numbers are what mathematicians call elements of the exterior algebra $\Lambda(V)$ over a vector space $V$. The exterior algebra naturally arises as the solution to the following geometric problem. Say that $V$ has dimension $n$ and let $v_1, ... v_n$ be a basis of it. We would like a nice natural definition of the $n$-dimensional volume of the parallelotope defined by the vectors $\epsilon_1 v_1 + ... + \epsilon_n v_n, \epsilon_i \in \{ 0, 1 \}$. When $n = 2$ this is the standard parallelogram defined by two linearly independent vectors, and when $n = 3$ this is the standard parallelepiped defined by three linearly independent vectors.

The thing about the naive definition of volume is that it is very close to having really nice mathematical properties: it is almost multilinear. That is, if we denote the volume we're looking at by $\text{Vol}(v_1, ... v_n)$, then it is almost true that $\text{Vol}(v_1, ... v_i + cw, ... v_n) = \text{Vol}(v_1, ... v_n) + c \text{Vol}(v_1, ... v_{i-1}, w, v_{i+1}, ... v_n)$. You can draw nice diagrams to see this readily. However, it isn't actually completely multilinear: depending on how you vary $w$ you will find that sometimes the volume shrinks to zero and then goes back up in a non-smooth way when really it ought to keep getting more negative. (You can see this even in two dimensions, by varying one of the vectors until it goes past the other.)
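A quick numerical illustration of this point (the vectors and helper names here are made up for the sketch): in coordinates, the oriented area of the parallelogram spanned by two plane vectors is a determinant, which is linear in each slot, while the naive unsigned area is its absolute value and fails linearity exactly where the area passes through zero.

```python
import numpy as np

# Two fixed vectors in the plane; area of the parallelogram they span.
v = np.array([1.0, 0.0])
w = np.array([0.0, 1.0])

def signed_area(a, b):
    # Oriented area = determinant of the matrix with columns a, b.
    return np.linalg.det(np.column_stack([a, b]))

def naive_area(a, b):
    # Unsigned ("naive") area.
    return abs(signed_area(a, b))

# Linearity check in the second slot:
#   Vol(v, w + c*u)  vs  Vol(v, w) + c * Vol(v, u)
u = np.array([3.0, -2.0])
c = -1.5
lhs = signed_area(v, w + c * u)
rhs = signed_area(v, w) + c * signed_area(v, u)
print(np.isclose(lhs, rhs))        # True: oriented area is linear in each slot

lhs_naive = naive_area(v, w + c * u)
rhs_naive = naive_area(v, w) + c * naive_area(v, u)
print(np.isclose(lhs_naive, rhs_naive))  # False: the unsigned area has a "kink"
```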

To fix that, we need to look instead at oriented volume, which can be negative, but which has the enormous advantage of being completely multilinear and smooth. The other major property it satisfies is that if any two of the vectors $v_i$ agree (or, more generally, the vectors are linearly dependent) then the oriented volume is zero, which makes sense. It turns out (and this is a nice exercise) that this is equivalent to oriented volume coming from a "product" operation, the exterior product, which is anticommutative. Formally, these two conditions define an element of the top exterior power $\Lambda^n(V)$ given by the exterior product $v_1 \wedge v_2 \wedge ... \wedge v_n$, and choosing an element of this top exterior power (a volume form) allows us to associate an actual number to an $n$-tuple of vectors which we can call its oriented volume in the more naive sense. If $V$ is equipped with an inner product, then there are two distinguished elements of $\Lambda^n(V)$ given by a wedge product of an orthonormal basis in some order, and it's natural to pick one of these as a volume form.

Alright, so what about the rest of the exterior powers $\Lambda^p(V)$ that make up the exterior algebra? The point of these is that if $v_1, ... v_p$, with $p < n$, is a tuple of vectors in $V$, we can consider the subspace they span and talk about the $p$-dimensional oriented volume of the parallelotope given by the $v_i$ in this subspace. But the result of this computation shouldn't just be a number: we need a way to do this that keeps track of what subspace we're in. It turns out that mathematically the most natural way to do this is to keep in mind the requirements we really want out of this computation (multilinearity and the fact that if the $v_i$ are not linearly independent then the answer should be zero), and then just define the result of the computation to be the universal thing that we get by imposing these requirements and nothing else, and this is nothing more than the exterior power $\Lambda^p(V)$.
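To make this concrete (a small sketch; the helper name is invented for illustration): in coordinates, the components of $v_1 \wedge ... \wedge v_p$ with respect to the standard basis of $\Lambda^p(\mathbb{R}^n)$ are the $p \times p$ minors of the $n \times p$ matrix with columns $v_i$. In $\mathbb{R}^3$ with $p = 2$ these three minors reproduce the cross product up to signs and ordering, and linearly dependent input gives the zero element, as required.

```python
import numpy as np
from itertools import combinations

def wedge_coordinates(vectors):
    """Coordinates of v_1 ∧ ... ∧ v_p in the standard basis e_I of Λ^p(R^n):
    the p×p minors of the n×p matrix whose columns are the v_i,
    indexed by increasing index sets I."""
    M = np.column_stack(vectors)
    n, p = M.shape
    return {I: np.linalg.det(M[list(I), :]) for I in combinations(range(n), p)}

v = np.array([1.0, 2.0, 3.0])
w = np.array([4.0, 5.0, 6.0])
coords = wedge_coordinates([v, w])
print(coords)  # three 2×2 minors: the cross product v × w up to signs/ordering

# Linearly dependent vectors give the zero element of Λ^2:
zero = wedge_coordinates([v, 2 * v])
print(all(abs(x) < 1e-12 for x in zero.values()))  # True
```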

This discussion hopefully motivated for you why the exterior algebra is a natural object from the perspective of geometry. Since Einstein, physicists have been aware that geometry has a lot to say about physics, so hopefully the concept makes a little more sense now.


Let me also say something about how modern mathematicians think about "space" in the abstract sense. The inspiration for the modern point of view actually derives at least partially from physics: the only thing you can really know about a space is the observables defined on it. In classical physics, observables form a commutative ring, so one might say roughly speaking that the study of commutative rings is the study of "classical spaces." In mathematics this study, in the abstract, is called algebraic geometry. It is a very sophisticated theory that encompasses classical algebraic geometry, arithmetic geometry, and much more, and it is in large part because of the success of this theory and related commutative ring approaches to geometry (topological spaces, manifolds, measure spaces) that mathematicians have gotten used to the slogan that "commutative rings are rings of observables on some space."

Of course, quantum mechanics tells us that the actual universe around us doesn't work this way. The observables we care about don't commute, and this is a big issue. So mathematically what is needed is a way to think about noncommutative rings as "quantum spaces" in some sense. This subject is very broad, but roughly it goes by the name of noncommutative geometry. The idea is simple: if we want to take quantum mechanics completely seriously, our spaces shouldn't have "points" at all because points are classical phenomena that implicitly require a commutative ring of observables, which we know is not what we actually have. So our spaces should be more complicated things coming from noncommutative rings in some way.

Grassmann numbers satisfy one of the most tractable forms of noncommutativity (actually they are commutative if one alters the definition of "commutative" very slightly, but never mind that...), and even better it is a form of noncommutativity that is clearly related to something physicists care about (the properties of fermions), so anticommuting observables are a natural step up from commuting observables in order to get our mathematics to align more closely with reality while still being able to think in an approximately classical way.
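To make this "tractable noncommutativity" concrete, here is a minimal sketch (the class and its dictionary representation are invented for illustration) of a Grassmann algebra on a few generators: generators anticommute and square to zero, while products of an even number of generators commute with each other, which is the slightly altered sense of "commutative" alluded to above.

```python
class Grassmann:
    """Element of the Grassmann algebra on generators θ_0, θ_1, ...,
    stored as {sorted tuple of generator indices: coefficient}."""

    def __init__(self, terms=None):
        self.terms = {k: v for k, v in (terms or {}).items() if v != 0}

    @staticmethod
    def gen(i):
        return Grassmann({(i,): 1})

    def __mul__(self, other):
        out = {}
        for I, a in self.terms.items():
            for J, b in other.terms.items():
                if set(I) & set(J):
                    continue  # repeated generator: θ_i θ_i = 0
                idx = list(I) + list(J)
                # count the transpositions needed to sort -> overall sign
                sign = 1
                for i in range(len(idx)):
                    for j in range(len(idx) - 1 - i):
                        if idx[j] > idx[j + 1]:
                            idx[j], idx[j + 1] = idx[j + 1], idx[j]
                            sign = -sign
                key = tuple(idx)
                out[key] = out.get(key, 0) + sign * a * b
        return Grassmann(out)

    def __neg__(self):
        return Grassmann({k: -v for k, v in self.terms.items()})

    def __eq__(self, other):
        return self.terms == other.terms

t0, t1, t2, t3 = (Grassmann.gen(i) for i in range(4))
print((t0 * t1) == -(t1 * t0))   # True: generators anticommute
print((t0 * t0).terms)           # {}: squares vanish
print((t0 * t1) * (t2 * t3) == (t2 * t3) * (t0 * t1))  # True: even products commute
```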


One of your questions was whether there exists a use of the Grassmann number apparatus in statistical physics. The answer is yes. This is not surprising, since there are many deep connections between QFT and statistical physics (even classical statistical physics). For example, the Grassmann technique can be used as a tool for calculating the partition function of the dimer model. This model originates from the following question: if diatomic molecules are adsorbed on a surface, forming a single layer, in how many ways can they be arranged, and what is the entropy of the system? For a work on this, see "Grassmann Variables and Exact Solutions for Two-Dimensional Dimer Models" ( http://arxiv.org/abs/cond-mat/9711156 ). One can also use Grassmann techniques in the field of classical spin systems; a review can be found in "Grassmann techniques applied to classical spin systems" ( http://arxiv.org/abs/0905.1104 ).


Nope.

There can't be any role for the Grassmann numbers outside quantum physics. The reason is simple: they can't take any "values" from a particular "set". Instead, they only become analogous to the regular commuting numbers once we use them as intermediate steps and we integrate or differentiate over them or set them equal to zero. Indeed, the integrals are algebraically analogous to the integral over commuting variables. However, the overall sign in the presence of the Grassmann numbers is always a bit undetermined - and one must square the result to get rid of the ambiguity. That's why the Berezin (Grassmann) integrals must always be interpreted as probability amplitudes and quantum mechanics is needed.
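For concreteness, here is a sketch of the Berezin rules for two Grassmann variables (the coefficient representation and the sign convention - the rightmost differential acts first, as a left derivative - are choices made for this illustration): the only nonzero one-variable integral is $\int \theta \, d\theta = 1$, and the fermionic "Gaussian" integral comes out to the coefficient $a$ itself, rather than something like $1/a$ as for commuting variables.

```python
# A Grassmann polynomial in two generators θ1, θ2 truncates at θ1θ2:
#   f = c0 + c1*θ1 + c2*θ2 + c12*θ1θ2, stored as the tuple (c0, c1, c2, c12).
# Berezin rules (one common convention): ∫dθ 1 = 0, ∫dθ θ = 1, and the
# integral acts as a left derivative, so the variable is first
# anticommuted to the front of each monomial.

def int_dtheta1(f):
    """∫ dθ1 f.  θ1 is already leftmost in θ1θ2, so that term gives +c12*θ2."""
    c0, c1, c2, c12 = f
    return (c1, 0, c12, 0)          # = c1 + c12*θ2

def int_dtheta2(f):
    """∫ dθ2 f.  θ1θ2 = -θ2θ1, so the θ1θ2 term gives -c12*θ1."""
    c0, c1, c2, c12 = f
    return (c2, -c12, 0, 0)         # = c2 - c12*θ1

# "Gaussian" example: exp(a*θ1θ2) = 1 + a*θ1θ2 (the series truncates).
a = 2.5
exp_f = (1, 0, 0, a)
result = int_dtheta2(int_dtheta1(exp_f))   # ∫dθ2 ∫dθ1 exp(a*θ1θ2)
print(result[0])                           # 2.5: the integral equals a itself
```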

One may say that a commuting number may be a product of two (or another even number of) Grassmann numbers - so a commuting number has an even number of previously unknown building blocks. But the measurable number - the number that Nature offers as a part of Her "user interface" to the consumers i.e. observers - is always commuting. However, at the fundamental level, in the core of Her inner workings, the Grassmann numbers are as natural as the commuting numbers.

The Grassmann numbers are the $c$-numbers for fermionic creation or annihilation operators in the same way as the ordinary commuting $c$-numbers are the numbers for the ordinary bosonic creation or annihilation operators (and many other operators). That's the closest analogy. It's natural for two operators to "almost commute" with one another or "almost anticommute" with one another. It must be one of the two options because if one repeats the interchange twice, we must get back to the same object, so the factor we pick by interchanging them must satisfy $S^2=+1$, leaving $S=\pm 1$ as the only possibilities.

A person must first understand why Fermi-Dirac statistics is as natural as Bose-Einstein statistics; then he can understand that, at a deeper level, the Grassmann numbers are exactly as natural as the commuting numbers. However, one must also be ready to accept that we have no "material experience" with the Grassmann numbers, because they can't play any role in the classical limit - they can't appear in the "user interface" of Nature.