Is the square of the delta function defined somewhere?

When L. Schwartz "invented" distributions (actually, he invented only the mathematical theory, as a part of functional analysis, since distributions were already in use among physicists), he proved incidentally that it is impossible to define a product under which the distributions would form an algebra with acceptable topological properties. What is possible is to define the product of two distributions whose wave front sets do not meet. This is why $fT$ makes sense when $T$ is a distribution and $f$ is $C^\infty$: the wave front set of $f$ is empty. But you can also multiply genuine distributions this way; for instance, in $\mathbb R^2$, $$(1)\qquad\delta_{x=0}=\delta_{x_1=0}\delta_{x_2=0}.$$
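To spell out why (1) passes the wave-front-set test: Hörmander's criterion allows the product $uv$ provided no point $(x,\xi)\in\mathrm{WF}(u)$ satisfies $(x,-\xi)\in\mathrm{WF}(v)$. In the present case
$$\mathrm{WF}(\delta_{x_1=0})=\{((0,x_2);(\xi_1,0)):\xi_1\neq0\},\qquad \mathrm{WF}(\delta_{x_2=0})=\{((x_1,0);(0,\xi_2)):\xi_2\neq0\};$$
the two factors are simultaneously singular only at the origin, where a covector of the form $(\xi_1,0)$, $\xi_1\neq0$, is never the opposite of one of the form $(0,\xi_2)$, so the criterion holds and the product in (1) is well defined.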

J.-F. Colombeau invented in the 1970s an algebra of generalized functions, which is related to distributions. But each distribution has infinitely many representatives in the algebra, and you have to juggle the equality with a "weak equality" (or "association"). I don't know of an example where this tool solved an open problem. In Colombeau's algebra the square of $\delta_0$ makes sense, but it is highly non-unique.
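To see concretely where the non-uniqueness comes from, here is a small numerical sketch (an illustration of the regularization idea, not Colombeau's actual construction; the mollifiers and the helper function below are ad hoc choices). Represent $\delta_0$ by $\delta_\epsilon(x)=\epsilon^{-1}\rho(x/\epsilon)$ with $\int\rho=1$; then $\int\delta_\epsilon^2\,dx=\epsilon^{-1}\int\rho^2$, a divergent quantity whose coefficient depends on the representative $\rho$:

```python
import numpy as np

# Two mollifiers of integral 1; both families delta_eps(x) = rho(x/eps)/eps
# converge to delta_0 as eps -> 0.
def rho_gauss(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def rho_tent(x):
    return np.maximum(1.0 - np.abs(x), 0.0)

def eps_times_int_of_square(rho, eps, L=2.0, n=800_001):
    """Return eps * integral of delta_eps(x)^2 dx (simple Riemann sum)."""
    x = np.linspace(-L, L, n)
    d = rho(x / eps) / eps
    return eps * np.sum(d**2) * (x[1] - x[0])

for name, rho in [("Gaussian", rho_gauss), ("tent", rho_tent)]:
    vals = [round(eps_times_int_of_square(rho, e), 4) for e in (0.1, 0.05, 0.025)]
    print(name, vals)

# The Gaussian family gives 1/(2*sqrt(pi)) ~ 0.2821 for every eps, the tent
# family gives 2/3 ~ 0.6667: the divergent "square" eps^{-1} * const has a
# representative-dependent constant, hence no canonical value.
```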

Edit (May 2020). I'd like to share the following generalization of identity (1) above, which I found while developing my theory of divergence-free positive symmetric tensors. In ${\mathbb R}^d$, consider the one-dimensional Lebesgue measure ${\cal L}_j$ along the $j$-th axis, for $1\le j\le d$. Then $$({\cal L}_1\cdots{\cal L}_d)^{\frac1{d-1}}=\delta_{x=0}.$$ There are several reasons why this equality makes sense and is valid. For instance, if you approximate ${\cal L}_j$ by $(2\epsilon)^{1-d}\,dx|_{K_j(\epsilon)}$, where $K_j(\epsilon)=\{x\in{\mathbb R}^d:|x_i|<\epsilon\ \text{for all}\ i\neq j\}$ is the slab of half-width $\epsilon$ around the $j$-th axis, then the left-hand side equals $(2\epsilon)^{-d}\,dx|_{(-\epsilon,\epsilon)^d}$, which approaches the Dirac mass at the origin. There is an analogous identity when the orthogonal axes are replaced by an arbitrary list of $d$ axes; then the right-hand side is $C\delta$, where the constant $C$ is computed by solving a case of Minkowski's problem.
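As a quick numerical sanity check of the slab computation (a sketch only: the test function and grid are arbitrary choices, and the code merely implements the densities described above for $d=3$, where the exponent is $\frac12$):

```python
import numpy as np

d = 3                    # dimension; the identity raises the product to 1/(d-1)
phi = lambda x, y, z: np.cos(x) * np.exp(y) * (1.0 + z)  # a smooth test function

for eps in (0.2, 0.1, 0.05):
    # Each L_j ~ (2*eps)^(1-d) dx on the slab K_j(eps); the product of the d
    # densities is (2*eps)^(d*(1-d)) on the cube (-eps, eps)^d, so its
    # (1/(d-1))-th power has density (2*eps)^(-d) there.
    n = 60
    h = 2 * eps / n
    t = -eps + (np.arange(n) + 0.5) * h          # midpoint grid on (-eps, eps)
    X, Y, Z = np.meshgrid(t, t, t, indexing="ij")
    print(eps, (2 * eps) ** (-d) * np.sum(phi(X, Y, Z)) * h**3)

# Each printed value is ~ 1.0 = phi(0,0,0): the regularized left-hand side
# acts on the test function like the Dirac mass at the origin.
```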


$\delta_0$ vanishes identically on the space of test functions you've defined. So it's not surprising that its square is well-defined: $0\cdot 0 = 0$.

I suspect you'll have a much harder time defining $\delta_0^2$ on test functions which don't vanish at $0$.


There are whole theories in microlocal analysis that deal with the issues here, I believe. One heuristic is that the "singular support" of a distribution controls what it can be multiplied by in a naive sense: two distributions can be multiplied when their singular supports are disjoint. So squaring the delta function is the first bad case: whatever the singular support means, for the delta function it must be the set $\{0\}$. More refined heuristics are needed.
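To make the heuristic precise in this case: on $\mathbb R^n$ one has
$$\mathrm{WF}(\delta_0)=\{(0;\xi):\xi\neq0\},$$
so the singular support is exactly $\{0\}$, and every direction $\xi$ occurs together with its opposite $-\xi$. Hörmander's sufficient condition for defining $u\cdot v$ (no point $(x,\xi)\in\mathrm{WF}(u)$ with $(x,-\xi)\in\mathrm{WF}(v)$) therefore fails as badly as possible for $u=v=\delta_0$; this is the precise sense in which squaring the delta function is the first bad case.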

One insight is that one dimension may be too few to show the real picture. "Microlocal" tends to mean localising in (co)tangential directions, and one dimension offers only two of those. Hyperfunctions in one dimension make something of this by considering the real line as the boundary of the upper half complex plane, i.e. up is not the same as down. Boundary values of functions holomorphic in the upper half-plane offer a candidate for the delta-function analogue: take $1/z$. No problem squaring that. It is more of a problem to say what this analogy means in a way that is worth anything; Mikio Sato did that. Now I shall be quiet, because this is probably already wrong enough.
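For the record, one standard way to read the $1/z$ remark (stated here as a sketch, since making the product rigorous is exactly the subtle point): as a hyperfunction, the delta function is the difference of boundary values
$$\delta(x)=-\frac{1}{2\pi i}\left(\frac{1}{x+i0}-\frac{1}{x-i0}\right),$$
i.e. the hyperfunction with defining function $-1/(2\pi iz)$. Squaring the defining function is harmless: $-1/(4\pi^2z^2)$ is again holomorphic off the real axis and so defines a hyperfunction. The hard part, which Sato's theory addresses, is to say in what sense multiplying defining functions deserves to be called multiplying the hyperfunctions themselves.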