The product of two Gaussian PDFs is a Gaussian PDF, but the product of two Gaussian variables is not Gaussian

The product of the PDFs of two random variables $X$ and $Y$ gives the joint density of the vector-valued random variable $(X,Y)$ in the case that $X$ and $Y$ are independent. Therefore, if $X$ and $Y$ are independent, normally distributed random variables, the product of their PDFs is the bivariate normal density with zero correlation.
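Concretely, for two independent standard normals, multiplying the two one-dimensional PDFs gives exactly the circularly symmetric bivariate normal density:

$$f_{X,Y}(x,y) = f_X(x)\,f_Y(y) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\cdot\frac{1}{\sqrt{2\pi}}e^{-y^2/2} = \frac{1}{2\pi}e^{-(x^2+y^2)/2},$$

i.e., the bivariate normal PDF with zero means, unit variances, and zero correlation.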

On the other hand, even in the case that $X$ and $Y$ are IID standard normal random variables, their product is not itself normal, as the links you provide show. The product of $X$ and $Y$ is a scalar-valued random variable, not a vector-valued one as in the above case.
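A quick simulation makes the non-normality visible (a sketch of my own, not from the links; the variable names are arbitrary). The product $XY$ of two iid standard normals has a sharp peak at zero and heavy tails, with excess kurtosis about 6, whereas a normal has 0:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)
prod = x * y  # scalar-valued product, NOT the vector (X, Y)

# Excess kurtosis is 0 for a normal; for this product it is ~6.
print("excess kurtosis:", stats.kurtosis(prod))

# A normality test rejects decisively even on a modest subsample.
print("normaltest p-value:", stats.normaltest(prod[:5000]).pvalue)
```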


1.) The first example is already sufficient. Just to throw in another one for a sum of Gaussian variables, consider diffusion: at each step in time, a particle is perturbed by a random, Gaussian-distributed step in space. At any given time, the distribution of its possible positions in space is Gaussian, because the total displacement is the sum of many independent Gaussian-distributed displacements, and a sum of independent Gaussian variables is Gaussian.
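Here is a minimal simulation of that picture (my own sketch; the particle counts and step size are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_particles, n_steps, step_sigma = 100_000, 200, 0.1

# Each particle's position is the sum of its Gaussian steps, so the
# positions stay exactly Gaussian, with variance n_steps * step_sigma**2.
steps = rng.normal(scale=step_sigma, size=(n_particles, n_steps))
positions = steps.sum(axis=1)

print("sample variance:", positions.var())  # ~ 200 * 0.01 = 2.0
print("normaltest p-value:", stats.normaltest(positions).pvalue)  # not small
```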

2.) The second situation (product of Gaussian PDFs) is confusing because the resulting function is a Gaussian, but it is not a probability distribution because it's not normalized! Nevertheless, there are physical situations in which the product of two Gaussian PDFs is useful. See below.

TL;DR - a physical example for a product of Gaussian PDFs comes from Bayesian probability. If our prior knowledge of a value is Gaussian, and we take a measurement which is corrupted by Gaussian noise, then the posterior distribution, which is proportional to the product of the prior and the likelihood of the measurement, is also Gaussian.

For example:

Suppose you are trying to measure a constant, unknown value $X$. You can take measurements of it that are corrupted by Gaussian noise; the measurement model is $\tilde{X} = X + \epsilon$, with $\epsilon \sim N[0, \Sigma_{\epsilon}]$. Finally, suppose you have a Gaussian prior distribution for $X$. Then, the posterior distribution after taking a measurement is

$$P[X\mid \tilde{X}] = \frac{P[\tilde{X}\mid X] P[X]}{P[\tilde{X}]}$$

As is fashionable in Bayesian probability, we throw out the value $P[\tilde{X}]$: it doesn't depend explicitly on $X$, so we can ignore it for now and normalize later.

Now, our assumption is that the prior, $P[X]$, is Gaussian. The measurement model tells us that $P[\tilde{X}\mid X]$ is Gaussian as well; in particular, $P[\tilde{X}\mid X] = N[X, \Sigma_{\epsilon}]$, a Gaussian with mean $X$ and variance $\Sigma_{\epsilon}$. Since the product of two Gaussians is a Gaussian, the posterior probability is Gaussian. It is not normalized, but that is where $P[\tilde{X}]$ (which we "threw out" earlier) comes in: it must be exactly the constant that normalizes this distribution, and we can read it off once we know the mean and variance of the Gaussian posterior.
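To make the "product of two Gaussians is a Gaussian" step explicit, complete the square in the exponent (the prior mean $\mu_0$ and variance $\Sigma_0$ are introduced here for concreteness):

$$N[\mu_0, \Sigma_0](x)\; N[x, \Sigma_{\epsilon}](\tilde{x}) \;\propto\; \exp\!\left(-\frac{(x-\mu_0)^2}{2\Sigma_0} - \frac{(\tilde{x}-x)^2}{2\Sigma_{\epsilon}}\right) \;\propto\; \exp\!\left(-\frac{(x-\mu_+)^2}{2\Sigma_+}\right),$$

where the posterior precision is the sum of the precisions and the posterior mean is the precision-weighted average:

$$\frac{1}{\Sigma_+} = \frac{1}{\Sigma_0} + \frac{1}{\Sigma_{\epsilon}}, \qquad \mu_+ = \Sigma_+\left(\frac{\mu_0}{\Sigma_0} + \frac{\tilde{x}}{\Sigma_{\epsilon}}\right).$$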

What you should really take away from this is that Gaussians are magical [1]. I don't know of any other PDF which has this property. This is why, for example, Kalman filters work so darn well. Kalman filters utilize both of these properties, and that is how you get a super-efficient algorithm for state estimation for a linear dynamical system with Gaussian noise.

[1] - Gaussians are not actually magical, but perhaps they are mathemagical.
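Since Kalman filters came up: here is a minimal one-dimensional sketch of the measurement update for the static-state model above (the function and parameter names are my own, not from any particular library). The update is literally the Gaussian-product formula from before:

```python
import numpy as np

def gaussian_update(prior_mean, prior_var, meas, meas_var):
    """Fuse a Gaussian prior with one Gaussian measurement.

    This is the 'product of two Gaussian PDFs' step, which is also the
    measurement update of a scalar Kalman filter with static dynamics.
    """
    gain = prior_var / (prior_var + meas_var)          # Kalman gain
    post_mean = prior_mean + gain * (meas - prior_mean)
    post_var = (1.0 - gain) * prior_var                # variance shrinks
    return post_mean, post_var

# Estimate a constant X = 3.0 from 50 noisy measurements.
rng = np.random.default_rng(1)
x_true, meas_var = 3.0, 0.5
mean, var = 0.0, 10.0                                  # vague Gaussian prior
for _ in range(50):
    z = x_true + rng.normal(scale=meas_var ** 0.5)
    mean, var = gaussian_update(mean, var, z, meas_var)
print(mean, var)  # mean near 3.0, var near meas_var / 50
```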


In brief, you were mixing up two totally different concepts.

  1. For the first, you are calculating the distribution of a transformed random variable; here, the transformation is the product $XY$.

  2. For the second, you just calculate the product of two functions, $\phi(x)\phi(y)$, which happen to be the PDFs of two normal random variables.

For some details, check here. In general, if you want to calculate the PDF of $XY$, you need to figure out $F(t) = P(XY < t)$ and then $f(t) = F'(t)$. As for the product of two functions, that is easy, as you can see.
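As a worked instance of that recipe (a standard result, not derived in the links): for iid standard normal $X$ and $Y$, the product-density formula evaluates to a modified Bessel function of the second kind,

$$f_{XY}(t) = \int_{-\infty}^{\infty} \phi(x)\,\phi(t/x)\,\frac{dx}{|x|} = \frac{1}{\pi}\,K_0(|t|), \qquad t \neq 0,$$

which diverges logarithmically at $t = 0$ and therefore cannot be a normal density.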