Is there a geometric interpretation of the product integral?
I'm admittedly not familiar with product integrals, but I think the purely geometric interpretation route is a dead end.
I'm trying to couch this in terms of units: if $f$ and $x$ are both measured in feet, the Riemann sum gives us what we'd hope for from our interpretation: $\mathrm{ft} \times (\mathrm{ft} - \mathrm{ft}) \to \mathrm{ft}^2$.
Moving to the product integral, I can't think of anything geometric where the exponent has units (especially when the exponents become infinitesimally small, amounting to taking $n$th roots for large $n$). To me, exponents in geometry are related to areas and higher-dimensional volumes, but in such formulas the exponents are unit-free.
To quote the question (emphasis mine):
is there a geometrical (*or measure-theoretical*) interpretation of the product integral?
In particular, in what follows I will give a measure-theoretic interpretation of the product integral. (Specifically the type II product integral, in Wikipedia's terminology.) Namely, there is actually a fairly clear interpretation corresponding to the Lebesgue theory of integration. So you may or may not also consider this "geometrical", to the extent that you consider Lebesgue's theory of integration to be "geometrical" (compared to, e.g., Riemann's).
Caveats aside, consider a given measure space $(X, \mathscr{F}, \mu)$, where $X$ is the ground set, $\mathscr{F}$ is the $\sigma$-algebra, and $\mu$ is the measure. Just as in the case of Lebesgue integration, the first place we should start is with simple functions.
Negative Numbers are Bad
Now, while finite products of negative numbers pose no problems, it should be clear that we would want to avoid "infinite products" of negative numbers, or more specifically any attempt to define limits of sequences in which negative numbers are multiplied infinitely many times. For me this is clear upon considering the sequence $(-1)^n$. This is arguably the simplest possible sequence involving multiplication of negative numbers, yet already it is entirely undecidable whether any "limit as $n\to \infty$" should be $1$ or $-1$. Think also about the headache which ensues when trying to define arbitrary exponentiation for negative numbers, e.g. what should $(-7)^{\pi}$ be? The problem again is that we would have to consider limits of sequences of products of negative numbers (e.g. $(-7)^{3141/1000}$, $(-7)^{3142/1000}$, $\dots$), and such sequences are "not continuous in $n$": they fluctuate in sign in a non-Cauchy manner, so any "limit" depends on the particular rationals we choose to approach $\pi$, whereas it should be invariant to the manner of approach. In other words, different subsequences give different answers, whereas if a sequence is to have a well-defined limit, all subsequences must have the same limit.
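To make the fluctuation concrete, adopt the usual convention $(-7)^{p/q} := \sqrt[q]{(-7)^p}$ for rationals $p/q$ in lowest terms, and compare three rational approximations of $\pi$:
$$(-7)^{3} = -343 \,, \qquad (-7)^{22/7} = \sqrt[7]{7^{22}} = 7^{22/7} > 0 \,, \qquad (-7)^{333/106} = \sqrt[106]{-7^{333}} \quad \text{(undefined in $\mathbb{R}$)} \,.$$
Three approximations of the same exponent yield a negative value, a positive value, and no real value at all.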
Therefore, any simple functions we consider should not have negative coefficients. Allowing coefficients which are zero seemingly shouldn't present any problem, since whenever a zero coefficient is included we know automatically that the resulting product should be zero. The issues involving zero are actually a little more subtle, but in order to try to authentically present a "naive" approach, we will overlook them at first, and examine the ensuing carnage later.
In short, I argue above that when trying to develop a "Lebesgue theory of product integrals", we should not consider arbitrary simple functions, but instead should restrict our attention to simple functions which are conical combinations of indicator functions (of measurable sets), i.e. non-negative coefficients only. So let's say that we have such a simple function, $f: X \to \mathbb{R}_{\ge 0}$,
$$f(x) = \sum_{k=1}^n a_k \mathbf{1}_{A_k}(x) \,, $$
where each $a_k \ge 0$, and each $A_k \in \mathscr{F}$ (i.e. each $A_k$ is measurable).
A Naive Approach (and many of the far too numerous reasons why it doesn't work)
Then the way to define the product integral which immediately suggests itself (or at least which suggested itself to me) is:
$$\prod f := \prod_{k=1}^n a_k \mu(A_k)\,. $$
If one of the $a_k = 0$, then there's no problem: this is just $0$, so seemingly we would just be done, at least after proving a bunch of results analogous to those one proves for Lebesgue integrals.
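For concreteness, take $X = [0,1]$ with Lebesgue measure and $f = 2 \cdot \mathbf{1}_{[0,1/2]} + 3 \cdot \mathbf{1}_{(1/2,1]}$; the naive definition gives
$$\prod f = \left(2 \cdot \tfrac{1}{2}\right)\left(3 \cdot \tfrac{1}{2}\right) = \tfrac{3}{2} \,.$$
(Keep this example in mind; we will see below that such "values" are not actually well-defined.)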
That sounds like a lot of tedious work, however, so naturally we wonder if there is a way to automatically connect this theory to the theory of (additive) Lebesgue integrals. That way we can just apply all of those theorems (monotone convergence, Fatou's lemma, etc.) to this scenario without having to prove anything new, thereby saving a lot of time and effort.
If one of the $a_k = 0$, then the product is just $0$ and we really don't have to think too much, so let's restrict (without loss of generality, it seems) to the condition that all $a_k > 0$. Then if we want to relate the above to Lebesgue integration, we can (seemingly) just take the logarithm, i.e. exploit the existence of an isomorphism between the groups $(\mathbb{R}_{>0}, \times)$ and $(\mathbb{R}, +)$. Now we have a finite sum for each simple function under consideration, whose form is similar to the definition of entropy:
$$\sum_{k=1}^n (\log a_k) \mu(A_k) \,. $$
Oh, OK, so this just corresponds to the Lebesgue integral of $(\log \circ f)$ (which can be any arbitrary simple function, since $\log$ maps $(0, \infty)$ to all of $\mathbb{R}$) with respect to $\mu$. Oh, but wait, don't we need to take the $\log$ of $\mu(A_k)$ too? Alright, so it would be:
$$ \sum_{k=1}^n (\log a_k) \log(\mu(A_k)) \,.$$
Well that isn't as elegant, but maybe we could still make it work by using the theory of signed measures, since now instead of a non-negative weight function $\mu: \mathscr{F} \to \mathbb{R}_{\ge 0} \cup \{\infty\}$, we have a weight function taking arbitrary extended real values, $(\log \circ \mu): \mathscr{F} \to \mathbb{R} \cup \{ \pm \infty \}$.
Except that none of this works out at all, for so many reasons that I will probably not be able to remember them all. First, the logarithm of each factor $a_k \mu(A_k)$ is the sum $\log(a_k) + \log(\mu(A_k))$, not the product $\log(a_k) \log(\mu(A_k))$, so $(\log \circ \mu)(A_k)$ isn't serving as a weight function here, negative weights or otherwise.
But even if one thinks they might be able to brute-force their way past that, one also needs to consider that even though $\mu$ is a measure, and $(\log \circ \mu)$ is in some sense an extension of $\mu$ to all extended reals (including negative ones), $(\log \circ \mu)$ is nevertheless not a signed measure, for reasons which are amazingly even more basic than the failure of the Jordan/Hahn decomposition theorem to apply. (That theorem doesn't apply in general since, when $\mu$ is not a finite measure, $(\log \circ \mu)$ can assume both the values $-\infty$ and $\infty$, so for some pairs of measurable sets additivity would require "assigning the value $\infty - \infty$", which is undefined, and for good reason. Of course we can circumvent this when $\mu$ is a finite measure, but my point is that the problems go even deeper.)
Namely, $(\log \circ \mu)$ is not additive. Given $A_1, A_2 \in \mathscr{F}$ with $A_1 \cap A_2 = \emptyset$, one has in general that: $$(\log \circ \mu)(A_1 \sqcup A_2) = \log(\mu(A_1) + \mu(A_2)) \not= \log(\mu(A_1)) + \log(\mu(A_2)) = (\log \circ \mu)(A_1) + (\log \circ \mu)(A_2)\,. $$ So since $(\log \circ \mu)$ is not additive, it cannot be a signed measure. There is an even more basic failure: any signed measure must assign $0$ to the empty set, yet $(\log \circ \mu)(\emptyset) = \log(0) = -\infty$; indeed $(\log \circ \mu)$ assigns $-\infty$ to every set of $\mu$-measure zero. And since $B = B \sqcup \emptyset$ for every $B \in \mathscr{F}$, additivity together with $(\log \circ \mu)(\emptyset) = -\infty$ would force $(\log \circ \mu)$ to assign $-\infty$ to every set, which it clearly does not do in general. And even if it did assign $-\infty$ to every set, that would just mean that our definition of product integral was useless, because in particular it would imply that every product integral is zero.
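For instance, take Lebesgue measure on $\mathbb{R}$ with $A_1 = [0,1]$ and $A_2 = (1,2]$:
$$(\log \circ \mu)(A_1 \sqcup A_2) = \log(1 + 1) = \log 2 \not= 0 = \log(1) + \log(1) = (\log \circ \mu)(A_1) + (\log \circ \mu)(A_2) \,.$$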
In short, our current definition of product integral fails miserably in any attempt to relate it clearly to the already existing theory of Lebesgue integration. So if we were to accept this as our working definition of product integral, we would in the most optimistic case still have a lot of work ahead of us. However, the above definition is not just difficult and tedious, it is also flat-out nonsensical. The culprit is $0$, which we so blithely overlooked earlier.
Let's look more closely at sets with ($\mu$-)measure zero. Even if all of our $a_k > 0$, if one of the $A_k$ has measure zero, say for $k=k'$, then automatically our product integral as defined above is zero, since
$$ \prod f = \prod_{k=1}^n a_k \mu(A_k) = a_{k'} \mu(A_{k'}) \prod_{k\not=k'} a_k \mu(A_k) = a_{k'} (0) \prod_{k\not=k'} a_k \mu(A_k) = 0\,.$$
This might seem fine to some people, but it conflicts with the intuition of others, namely that the contribution from any measure-zero set should be "negligible" (as is the case for ordinary integrals). The value $a_{k'}$ on $A_{k'}$ is of course being ignored, but the contribution made by $A_{k'}$ is certainly not negligible, since it automatically makes the entire product zero, even though all of the other factors might be positive. Ideally, we would want the value of $a_{k'}$ to be ignored in a way that doesn't affect the end result.
Let's assume momentarily that we are misled by our intuition from the Lebesgue measure on the real line (where most measure zero sets are pathological sets not measurable with respect to the Borel $\sigma$-algebra) and decide that we can probably work around measure zero sets somehow (even though $\emptyset$ is always a measure zero set and there is never any way to completely work around it). In particular, let's pretend that we could restrict our definition to only sets $A_k$ where $\mu(A_k) > 0$, i.e. conical combinations of indicator functions of $A_k$ where $\mu(A_k) > 0$,
$$f(x) = \sum_{k=1}^n a_k \mathbf{1}_{A_k}(x)\,, \quad a_k \ge 0\,, \quad \mu(A_k) > 0 \,, \quad \implies \quad \prod f := \prod_{k=1}^n a_k \mu(A_k) \,. $$
In particular, this (seemingly) guarantees us that the product integral is equal to $0$ only when one of the $a_k$'s is equal to $0$, which is the behavior we would expect from a finite weighted product. (As a preview of what's to come later, note that for all $n$ we expect $(\frac{1}{2})^n$ to be non-zero, but that its limit as $n \to \infty$ should be zero, even though all of the factors are positive. Hence the emphasis on finite.) Moreover, this also seems to avoid any indeterminacy due to the fact that for any $A_k$, $A_k = A_k \cup \emptyset$, and $\mathbf{1}_{\emptyset} \equiv 0$; we have simply excluded writing $f$ in a way that decomposes it to include the (superfluous) indicator of the empty set.
Yet even with all of these contrived conditions, problems still emerge. In particular, our product integral still isn't well-defined in general. Consider the case when $\cup_{k=1}^n A_k \subsetneq X$, i.e. the union of all of the $A_k$ is not the entire ground set $X$. If we are so unfortunate as to have $\mu(X \setminus \cup_{k=1}^n A_k) > 0$, then we can write $f$ in two ways:
$$f(x) = \sum_{k=1}^n a_k \mathbf{1}_{A_k}(x) = \sum_{k=1}^n a_k \mathbf{1}_{A_k}(x) + (0)\,\mathbf{1}_{X \setminus \cup_{k=1}^n A_k}(x) \,. $$
Not only is the arbitrarily imposed type of decomposition we've required non-unique, but the two decompositions give different values for the product integral even when all of the $a_k > 0$: the first gives $\prod_{k=1}^n a_k \mu(A_k) > 0$, while the second picks up the extra factor $(0)\, \mu(X \setminus \cup_{k=1}^n A_k) = 0$ and therefore gives $0$!
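For a concrete instance, take $X = [0,2]$ with Lebesgue measure and $f = 2 \cdot \mathbf{1}_{[0,1]}$: omitting the complement gives $\prod f = 2 \cdot 1 = 2$, while including it gives $(2 \cdot 1)(0 \cdot 1) = 0$. Worse, even with strictly positive coefficients throughout, the value depends on the decomposition, since $2 \cdot \mathbf{1}_{[0,1]} = 2 \cdot \mathbf{1}_{[0,1/2]} + 2 \cdot \mathbf{1}_{(1/2,1]}$ as functions, yet
$$2 \cdot \mu([0,1]) = 2 \not= 1 = \left(2 \cdot \tfrac{1}{2}\right)\left(2 \cdot \tfrac{1}{2}\right) \,.$$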
The Right Idea
So basically our problem is this: if one of the sets is "truly" weighted $0$, then of course we want the product integral to equal zero. That is, if the set has positive measure and we deliberately intended the coefficient of its indicator function to be $0$, then of course the product integral should be zero. But otherwise, if the set has measure zero, or if it has positive measure but we didn't want to pay attention to the behavior of $f$ there, the value of the product integral should be unaffected by it.
This is the key point: the number that leaves things "unaffected" is different for multiplication than for addition. In particular, if we want to use the insight that $(\mathbb{R}, +)$ and $(\mathbb{R}_{>0}, \times)$ are isomorphic as groups, we have to remember that any isomorphism identifies the identity element of one group with the identity element of the other group. The additive identity is $0$, but the multiplicative identity is not $0$ (indeed, $0$ is not even in the set $\mathbb{R}_{> 0}$); instead, the multiplicative identity is $1$. So if we want certain factors in our definition of product integral for simple functions to not affect the final result, we need those factors to equal $1$, not $0$. Our current, naive definition, which tries to directly transpose the definition of Lebesgue integration, therefore does exactly the wrong thing, since it assigns those factors the value $0$.
Let's first focus on the case where we require all of the coefficients $a_k$ of indicator functions in our simple function to be strictly greater than $0$. Again, we want to force the contribution from any set of measure zero to be negligible, regardless of the value of $a_k$, i.e. we want to force the value of the corresponding factor in the product integral to be $1$. We force a summand to be the additive identity by multiplying it by $0$, and likewise, since exponentiation is the operation following multiplication, which in turn follows addition, we can force a factor to be the multiplicative identity by exponentiating it by $0$. Therefore the above consideration more or less forces our definition of product integral to be:
$$\prod f^{d\mu} := \prod_{k=1}^n a_k^{\mu(A_k)} \,.$$
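Note how the exponent repairs the earlier pathologies. With Lebesgue measure, a measure-zero set now contributes the factor $a_k^{0} = 1$ no matter what $a_k > 0$ is, and splitting a set no longer changes the value:
$$2^{\mu([0,1])} = 2^{1} = 2^{1/2} \cdot 2^{1/2} = 2^{\mu([0,1/2])} \cdot 2^{\mu((1/2,1])} \,.$$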
By contrast, something stupid like factors of $a_k \exp(\mu(A_k))$ wouldn't work unless we forced $a_k = 1$ for any $A_k$ with $\mu(A_k) = 0$, which is unnatural and defeats the purpose of the above. In any case it would also be bad practically speaking, because its logarithm corresponds to summands of the form $\log(a_k) + \mu(A_k)$, whereas $\log(a_k)\, \mu(A_k)$ is what we actually want in order to be able to relate our theory to the previously existing theory of Lebesgue integrals.
Then if we include the cases where $a_k$ possibly equals $0$, this still isn't bad, as long as we choose to use the convention that $0^0 = 1$. This still forces the contribution from sets of ($\mu$-)measure zero to be negligible, in particular the empty set, while still allowing for product integrals which are genuinely $0$. (Under the logarithm, this convention corresponds exactly to the convention $0 \cdot (-\infty) = 0$ from Lebesgue integration, since then $0^0 = \exp(0 \cdot \log 0) = \exp(0 \cdot (-\infty)) = \exp(0) = 1$.)
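For example, with Lebesgue measure on $X = [0,1]$:
$$\prod \left(0 \cdot \mathbf{1}_{\emptyset} + 2 \cdot \mathbf{1}_{[0,1]}\right)^{d\mu} = 0^{0} \cdot 2^{1} = 2 \,, \qquad \prod \left(0 \cdot \mathbf{1}_{[0,1/2]} + 2 \cdot \mathbf{1}_{(1/2,1]}\right)^{d\mu} = 0^{1/2} \cdot 2^{1/2} = 0 \,.$$
A superfluous null piece leaves the result untouched, while a genuinely zero piece of positive measure kills the product, exactly as desired.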
Moreover, we also get from this definition two additional, and possibly unexpected, benefits:
First, this definition is "compatible" with the (additive) Lebesgue integral. Specifically, at least when all of the $a_k > 0$, we have that:
$$\prod f^{d\mu} = \prod_{k=1}^n a_k^{\mu(A_k)} \iff \log\left( \prod f^{d\mu} \right) = \sum_{k=1}^n \log(a_k) \mu(A_k) = \int \log f d\mu \,. $$
Moreover, it also seems clear (admittedly, I haven't written out the formal proofs myself) that correspondingly $\prod f^{d \mu} = 0 \iff \int \log f d\mu = - \infty$, $\prod f^{d\mu} = \infty \iff \int \log f d\mu = \infty$, and $\prod f^{d \mu}$ is undefined if and only if $\int \log f d\mu$ is undefined, so that we get a(n almost) perfect correspondence between $\prod f^{d\mu}$ and $\exp(\int \log f d\mu)$ for simple functions. This correspondence should also carry over more generally to all $\mathscr{F}$-measurable functions, because $\log$ and $\exp$ are continuous and continuous functions commute with limits. Therefore this theory can be reduced entirely to the theory of Lebesgue integration.
Second, as a consequence of the above, the product integral must be well-defined, i.e. its value is the same regardless of how we write the simple function $f$, since we know from the theory of (regular) Lebesgue integration that the value of $\int \log f d\mu$ (and thus also of $\exp(\int \log f d\mu)$) is the same regardless of how we write the simple function.
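As an informal numerical sanity check of the correspondence $\prod f^{d\mu} = \exp\left(\int \log f d\mu\right)$, here is a minimal sketch (not part of the theory; the choice $f(x) = e^x$ on $[0,1]$, the midpoint discretization, and the helper name `product_integral` are all just illustrative):

```python
import math

def product_integral(f, a, b, n):
    """Approximate the type II product integral of f over [a, b]
    (with Lebesgue measure) by a finite product of factors
    f(x_k)^dx over n subintervals of width dx = (b - a) / n."""
    dx = (b - a) / n
    result = 1.0
    for k in range(n):
        x_k = a + (k + 0.5) * dx  # midpoint of the k-th subinterval
        result *= f(x_k) ** dx
    return result

# For f(x) = e^x on [0, 1]: exp(integral of log(e^x) dx) = exp(1/2).
approx = product_integral(math.exp, 0.0, 1.0, 10_000)
print(approx, math.exp(0.5))  # both print approximately 1.6487...
```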
In other words, the measure-theoretical interpretation of the product integral is very similar to the theory of Lebesgue integration, provided that we restrict to simple functions which are conical combinations of indicator functions of measurable sets, and that we assign the coefficient $1$, and not the coefficient $0$, to any indicator function whose contribution we want to be negligible.
This wasn't as lucid an ending as I wanted, but as you can probably guess, this wound up being much longer than I originally anticipated.