Product of all numbers in a given interval $[n,m]$
Definition of the product over the interval
One possible definition given a strictly positive interval $[n,m]\subseteq\mathbb R^+$ could be: $$ \prod_{x\in [n,m]}x:=\exp\left(\int_n^m\ln(x)\ dx\right) =\frac{m^m\cdot n^{-n}} {\operatorname e^{m-n}} $$ for an interval $[n,m]$ containing uncountably many elements. The countable and finite versions could then read $$ \prod_{k=1}^{\infty} x_k:=\exp\left(\sum_{k=1}^{\infty}\ln(x_k)\right) \quad\text{and}\quad \prod_{k=1}^n x_k:=\exp\left(\sum_{k=1}^n\ln(x_k)\right)=x_1\cdot x_2\cdots x_n $$
We could even extend this definition to $$ \prod_{x\in [n,m]}f(x):=\exp\left(\int_n^m\ln(f(x))\ dx\right) $$ One nice property is that with this definition we have $$ \prod_{x\in[n,m]} x^a=\left(\prod_{x\in[n,m]} x\right)^a $$ so it appears to follow some nice power rules of conventional finite products.
To define it without direct use of integration, my definition should be equivalent to defining: $$ S_k:=\prod_{x=0}^{2^k}\left(\frac{(2^k-x)n+xm}{2^k}\right)^{(m-n)/2^k} $$ and recognizing the product $\prod_{x\in [n,m]}x$ as the limit of those $S_k$'s as $k$ tends to infinity. So it is like multiplying together $2^k+1$ evenly spread out factors over the interval $[n,m]$, but adjusting the exponent of each factor to match the distance between the factors, namely $(m-n)/2^k$. These exponents tend to zero as the number of factors tends to infinity. It can then be shown that $S_k\to m^m\cdot n^{-n}/\operatorname e^{m-n}$.
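As a quick sanity check (a minimal numerical sketch of my own, with hypothetical function names), one can evaluate $S_k$ in floating point via a sum of logarithms and compare it against the claimed closed form $m^m\cdot n^{-n}/\operatorname e^{m-n}$:

```python
import numpy as np

def S(n, m, k):
    """S_k: product of 2^k + 1 evenly spaced factors over [n, m], each
    raised to the grid spacing (m - n)/2^k. Computed through logarithms
    to avoid overflow/underflow for large k."""
    N = 2 ** k
    x = np.arange(N + 1)
    factors = ((N - x) * n + x * m) / N   # evenly spread points in [n, m]
    dx = (m - n) / N
    return np.exp(dx * np.log(factors).sum())

def closed_form(n, m):
    """The claimed limit m^m * n^(-n) / e^(m - n)."""
    return m**m * n**(-n) / np.exp(m - n)

print(S(0.5, 2, 16))        # ≈ 1.26222
print(closed_form(0.5, 2))  # ≈ 1.26222
```

The agreement already at $k=16$ illustrates why $S_k$ is the stable quantity here: it is essentially a Riemann sum in the exponent.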
So how does this relate to your suggested symmetrical expression? Well, you are considering a sequence of the form: $$ T_k:=(S_k)^{2^k/(m-n)}=\prod_{x=0}^{2^k}\frac{(2^k-x)n+xm}{2^k} $$ Now clearly, if $S_k\to a>1$ we will have $T_k\to\infty$, whereas for $S_k\to a\in[0,1)$ we must have $T_k\to0$. So the difficult cases are $S_k\to 1$ or $S_k\to a<0$. The latter I doubt we can make any sense of.
Resolving when $S_k$ tends to $1$
If $S_k$ tends to $1$ for some $n\in(0,1)$ and an appropriate matching $m>1$, we then know that $$ \int_n^m\ln(x)\ dx=\lim_{k\to\infty}\ln(S_k)=\ln(1)=0 $$ Considering the graph of $\ln(x)$ it can be shown that $$ \left[\ln m -q_k\cdot(\ln m -\ln n)\right]\cdot\Delta x\leq\ln(S_k)\leq\left[\ln n +(1-q_k)\cdot(\ln m -\ln n)\right]\cdot\Delta x $$ where $0.5\leq q_k\leq 1$ is a sequence tending to $0.5$, and $\Delta x$ is shorthand for the distance $(m-n)/2^k$. Since $T_k$ equals $S_k$ raised to the power $1/\Delta x$, we get $\ln(T_k)=\ln(S_k)/\Delta x$ and therefore $$ \left[\ln m-q_k\cdot(\ln m -\ln n)\right]\leq\ln(T_k)\leq\left[\ln n +(1-q_k)\cdot(\ln m -\ln n)\right] $$ As $k$ tends to infinity both bounds tend to $(\ln m+\ln n )/2$ showing us that $$ \lim_{k\to\infty} T_k=\exp\left(\frac{\ln m+\ln n}2\right)=\sqrt{n\cdot m} $$
Now we should be able to do the following:
Resolving the computation method
Given $n\in(0,1)$ one can solve $$ \int_n^m\ln(x)\ dx=m(\ln m-1)-n(\ln n-1)=0 $$ for $m>1$ in order to find the corresponding $m$ so that $T_k$ converges. One way to do this is to use the Newton-Raphson method on the function $$ f(x)=x(\ln x-1)-n(\ln n-1) $$ with initial guess $x_0=2$. Then $m=\lim_{k\to\infty}x_k$ where the $x_k$'s are defined recursively as $$ x_{k+1}:=x_k-\frac{x_k(\ln x_k-1)-n(\ln n-1)}{\ln x_k} $$ It turns out that in fact $1<m<\operatorname e\approx 2.7182818$. For $n=0.5$ it takes only a few iterations before one has $$ m\approx x_5=1.603016489916967074791... $$ and it can be verified that the digits listed above do not change in later iterations, so we already have $m$ to very high precision. So we have $$ \lim_{k\to\infty}\prod ^{2^k} _{x=0} \frac{(2^k - x)0.5+x\cdot 1.603016489916967074791...}{2^k}\\=\sqrt{0.5\cdot 1.603016489916967074791...}\approx 0.8952699285458456285 $$ But this is a very unstable result! My earlier computations showed that if $m$ is even slightly larger than the actual solution to $f(x)=0$, the product tends to infinity, and if it is even slightly smaller, it tends to zero.
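The whole procedure can be sketched in a few lines (my own illustration with hypothetical names): run the Newton-Raphson iteration above to find $m$, then evaluate $T_k$ through a sum of logarithms and compare with $\sqrt{nm}$.

```python
import numpy as np

def solve_m(n, x0=2.0, tol=1e-14):
    """Newton-Raphson on f(x) = x(ln x - 1) - n(ln n - 1), using
    f'(x) = ln x, starting from x0 = 2 as in the text."""
    target = n * (np.log(n) - 1)
    x = x0
    for _ in range(50):
        step = (x * (np.log(x) - 1) - target) / np.log(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def T(n, m, k):
    """T_k = S_k^(2^k/(m-n)): the plain product of the 2^k + 1 evenly
    spaced factors, computed via logs to avoid overflow."""
    N = 2 ** k
    x = np.arange(N + 1)
    return np.exp(np.log(((N - x) * n + x * m) / N).sum())

m = solve_m(0.5)
print(m)              # ≈ 1.6030164899169670...
print(T(0.5, m, 20))  # ≈ sqrt(0.5 * m) ≈ 0.89527
```

Note that `m` must be computed to (nearly) full machine precision before evaluating `T`; by the instability remark above, any appreciable error in `m` sends the product to $0$ or $\infty$ as $k$ grows.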
Some general remarks to conclude
The infinite symmetrical product you defined is very unstable in more than one respect. If we change the definition even slightly we may get an entirely different result: $$ \prod ^{2^k} _{x=1} \frac{(2^k - x)n+xm}{2^k} $$ removes the first factor $n$, whereas $$ \prod ^{2^k-1} _{x=0} \frac{(2^k - x)n+xm}{2^k} $$ removes the last factor $m$. While both of these would lead to the same values of $S_k$, removing only a negligible contribution at that level, they change the value of $T_k$ by a non-negligible factor. Also, if we distributed the factors in the interval $[n,m]$ slightly differently, the product $T_k$ could change a lot whereas $S_k$ would not. So overall $S_k$ is a much more stable value.

Moreover, $S_k$ tends to represent a genuinely uncountable product, whereas $T_k$ tends to something I would characterize as a product of a countable subset of the factors in question. This might be another reason it is so unstable: there are infinitely many other ways we could have defined $T_k$ that would yield totally different results, whereas all definitions of $S_k$ by partitioning $[n,m]$ into subintervals whose widths decrease toward zero as $k$ tends to infinity would point to the same limit value. This in a sense addresses your question 1, as an infinite product of the kind you are suggesting would or would not tend to zero depending on the distribution you use to select factors from $[n,m]$.
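This sensitivity is easy to demonstrate numerically (a sketch under my own naming, using the matched pair $n=0.5$, $m\approx1.60302$ computed earlier): dropping the single factor $n$ from $T_k$ shifts its limit from $\sqrt{nm}$ to $\sqrt{nm}/n$, a factor of $2$ here, even though the corresponding change in $S_k$ vanishes as $k\to\infty$.

```python
import numpy as np

n, m = 0.5, 1.6030164899169671  # matched pair: integral of ln over [n, m] is 0

def log_T(k, drop_first=False):
    """ln T_k, optionally with the first factor (which equals n) removed."""
    N = 2 ** k
    x = np.arange(1 if drop_first else 0, N + 1)
    return np.log(((N - x) * n + x * m) / N).sum()

full = np.exp(log_T(20))             # ≈ sqrt(n*m) ≈ 0.89527
trimmed = np.exp(log_T(20, True))    # ≈ sqrt(n*m)/n ≈ 1.79054: doubled
print(full, trimmed)
```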
There is no such thing as a product of uncountably many numbers (except when all but countably many of them are $=1$, or some are $=0$). Compare to sums: Even with countably many summands, we do not speak of sums, but of series (even though we suggestively use the same symbol $\sum$ for both). Those have very different properties from sums: A sum of rationals is always defined, is always rational, and does not depend on the order of summation. On the other hand, a series of rationals may fail to converge, or converge to an irrational number, or converge to different values if we change the order of the terms. So to repeat: Even a "sum" of countably many numbers is not really a sum. A "sum" of uncountably many (non-zero) summands is even more horrible: For any such beast there must exist some $\epsilon>0$ such that uncountably many terms are $>\epsilon$, or uncountably many terms are $<-\epsilon$; already their contribution is (positive or negative) infinite. It is not an easy task, especially for an arbitrary index set, to assign any meaning to this. The same argument holds for products (albeit with some extra considerations).
That being said, you gave a specific definition of an expression $$P(a,b)=\lim_{n\to\infty}P(a,b;2^n),$$ where $$P(a,b;N)=\prod_{j=0}^N\frac{(N-j)a+jb}{N}, $$ which we shall investigate (and better forget that we want to call this "product of all numbers in $[a,b]$"). Consider first the case $0<a<b$. $$\frac1N\ln P(a,b;N) = \sum_{j=0}^N\frac1N\ln \frac{(N-j)a+jb}{N},$$ which looks a lot like a Riemann sum. Indeed, as $\ln$ is strictly increasing, we see that $$ \frac1N\ln P(a,b;N)=\frac1{b-a}\int_a^b\ln x\,\mathrm dx+R_N$$ where $\frac 1N\ln a<R_N<\frac 1N\ln b$. As $x(\ln x-1)$ is an anti-derivative of the logarithm, we arrive at $$ P(a,b;N) = \Bigl(e^{\frac{b(\ln b-1)-a(\ln a-1)}{b-a}}\Bigr)^N\cdot c_N$$ with $a<c_N<b$. The limit as $N\to\infty$ can only exist if $$b(\ln b-1)=a(\ln a-1).$$ Such pairs $(a,b)$ are rare: The function $x\mapsto x(\ln x-1)$ decreases from $0$ to $-1$ on $(0,1]$, and then increases, reaching $0$ again at $x=e$. Hence the only suitable pairs $(a,b)$ are those with $0<a<1$ and the one matching $1<b<e$. By investigating $c_N$ closer (comparing with the trapezoidal rule), one arrives at $P(a,b)=\sqrt{ab}$ for such $a,b$.
Interestingly, if $P(a,b)$ exists, we also have $P(-b,-a)=-P(a,b)$. This is because the specific definition of $P(a,b)$ uses only $P(a,b;2^n)$, and for $n>0$ the $2^n+1$ factors of $P(-b,-a;2^n)$ are exactly the negatives of the factors of $P(a,b;2^n)$; since their number is odd, $P(-b,-a;2^n)=-P(a,b;2^n)$. (Though for me this fun result is more a hint that the definition of $P$ is not perfect.)
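The sign flip can be checked directly (an illustrative sketch; `P` here is just the finite product from the definition above):

```python
import numpy as np

def P(a, b, N):
    """The finite product P(a, b; N) = prod_{j=0}^N ((N-j)*a + j*b)/N."""
    j = np.arange(N + 1)
    return np.prod(((N - j) * a + j * b) / N)

# For N = 2^n with n > 0, the 2^n + 1 factors of P(-b, -a; N) are exactly
# the negatives of the factors of P(a, b; N), and their count is odd:
print(P(0.5, 1.6, 16), P(-1.6, -0.5, 16))  # same magnitude, opposite signs
```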
Consider the Riemann integral $$\int_a^bx\,dx=\lim_{n\to\infty}\sum_{k=1}^nx_k\cdot\Delta x_k$$ where the limit is taken in such a way that the maximum $\Delta x_k$ approaches $0$. You are adding up all real numbers between $a$ and $b$, but weighted by the infinitesimally small $dx$. You could try to do the same with a product: multiply all the numbers together, "weighted" appropriately. With a Riemann sum, you multiply each $x_i$ by $\Delta x_i$, where the sum of the $\Delta x_i$ accounts for the full difference from $a$ to $b$. I think the analogous thing would be to raise each $x_i$ to a power $Rx_i$, where the product of the $Rx_i$ accounts for the full ratio from $a$ to $b$. [$R$ stands for ratio; it would be like the ratio of two adjacent $x_i$.]
In the same way that $\int$ is a fancy "S" for sum, you could define $$\DeclareMathOperator*{\pint}{\mathcal{P}}\pint_a^b x^{Rx}=\lim_{n\to\infty}\prod_{k=1}^nx_k^{Rx_k}$$
But this is no good. In the limit, each $Rx_i$ would approach $1$, so you would be multiplying numbers that are too large together. You want the $Rx_i$ to approach $0$ so the product is of numbers close to $1$. You could consider using $\Delta x_i$ in the exponents, but unlike $Rx_i$, $\Delta x_i$ is not unitless. So it feels out of place in an exponent. I can't think of a theoretical justification, but the following seems worth exploring:
$$\DeclareMathOperator*{\pint}{\mathcal{P}}\pint_a^b x^{Rx}=\lim_{n\to\infty}\prod_{k=1}^nx_k^{\ln(Rx_k)}$$ where the limit is taken in such a way so that the $Rx_i$ approach $1$. For example, with $a=1$, $b=2$, $x_k=2^{k/n}$, $Rx_k=2^{1/n}$:
$$\begin{align} \DeclareMathOperator*{\pint}{\mathcal{P}}\pint_1^2 x^{Rx} &=\lim_{n\to\infty}\prod_{k=1}^n\left(2^{k/n}\right)^{\ln(2^{1/n})}\\ &=\lim_{n\to\infty}\prod_{k=1}^n2^{k\ln(2)/n^2}\\ &=\lim_{n\to\infty}2^{\sum_{k=1}^nk\ln(2)/n^2}\\ &=\lim_{n\to\infty}2^{\ln(2)\frac{n(n+1)}{2n^2}}\\ &=2^{\ln(2)/2}\\ &\approx1.2715\ldots \end{align}$$
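The worked example above can be checked numerically (a small sketch with my own function name), evaluating the partial product directly for large $n$:

```python
import math

def partial_product(n):
    """prod_{k=1}^n (2^(k/n))^(ln(2^(1/n))), computed via its exponent sum:
    2 raised to sum_k (k/n) * ln(2^(1/n))."""
    r = math.log(2 ** (1 / n))                 # ln(Rx_k) = ln(2)/n
    exponent = sum((k / n) * r for k in range(1, n + 1))
    return 2 ** exponent

print(partial_product(10**5))     # ≈ 1.27154
print(2 ** (math.log(2) / 2))     # claimed limit 2^(ln 2 / 2) ≈ 1.27154
```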
Note: this gives the same result if $\ln(Rx_i)$ is replaced by $Rx_i-1$.