How do we know that nonperturbative canonical quantum gravity is wrong?

The problem is that the perturbation series, even in the best behaved theories, is not a sufficient criterion for reconstructing the theory. In the case of QCD, you can reconstruct the non-perturbative theory by defining a path integral on a lattice, and taking the limit of a fine lattice with the coupling going logarithmically to zero as the lattice spacing gets smaller; this makes a consistent continuum limit which defines the non-perturbative path integral. This definition is computational and absolute--- it gives you an algorithm to compute all correlation functions in the theory.
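
To see why this limit exists, here is a schematic one-loop version (conventions vary): asymptotic freedom ties the bare coupling to the lattice spacing $a$ as

$$ g^2(a) \approx \frac{1}{2 b_0 \ln\!\big(1/(a\Lambda_{\rm QCD})\big)}, \qquad b_0 = \frac{1}{16\pi^2}\left(11 - \frac{2}{3}N_f\right) > 0, $$

so you know exactly how to tune $g \to 0$ logarithmically as $a \to 0$ while holding physical correlation lengths fixed.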

For quantum gravity, you can start with a flat metric and do a perturbation series, and get the graviton interactions. But there is no reason to believe that there is a non-perturbative theory you are approximating when you do this. The path integral for quantum gravity does not lattice-regularize well, because the lattice spacing is itself dynamical--- the metric tells you what the actual distance between lattice points is. When you take the limit of small lattice spacing, there is no guarantee that you end up with a well defined quantity.
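
Schematically (this is just an illustration of the point, not anyone's regulator): on a coordinate lattice of spacing $a$, the proper distance between neighboring sites is set by the fluctuating metric itself,

$$ \ell_{\rm phys} \sim a \sqrt{g_{\mu\nu}(x)\, n^\mu n^\nu}, $$

where $n^\mu$ is the coordinate step, so sending $a \to 0$ does not by itself make the physical spacing small and uniform inside the path integral $\int \mathcal{D}g \, e^{iS[g]}$.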

Further, the path integral might include sums over non-equivalent topologies. You could imagine a handle popping out of space-time and disappearing later. If this is so, and if the sum is over arbitrarily small space-time structure, then there is a serious problem, because manifolds in four and more dimensions are known to be non-classifiable, so that it is impossible to give an algorithm which will sum over each topology once and only once. You can give an algorithm on simplices which will sum over all topologies in a redundant way, by summing over all possible gluings of the simplices. But if you think the continuum object is well defined, then it seems that the simplex sum should reproduce the sum over topologies, which is a non-computable thing. This suggested to Penrose that the full theory of quantum gravity is capable of hyper-computation (stronger than Turing computation), but I am personally sure that hyper-computation of this type is an incoherent concept in a logical sense, since the logical properties of hypercomputation cannot be described in any finite way using axioms, even allowing the axiom system to increase in complexity with time.
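
Schematically, the problematic object is

$$ Z \sim \sum_{\text{topologies }\mathcal{M}} \int_{\mathcal{M}} \mathcal{D}g \; e^{iS_{\rm EH}[g]}, $$

where the index set of the sum cannot be algorithmically enumerated without repetition, because the homeomorphism problem for 4-manifolds encodes the word problem for groups, which is undecidable.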

Even if you just look at the perturbation series, and try to make sense of it, there is a serious problem when the scattering of particles is Planckian or above. If you collide two particles at Planck energies or more, you should produce an intermediate black hole state, and the sum over intermediate states should then run over the degrees of freedom of this intermediate black hole. But a black hole of radius R only has of order R^2 degrees of freedom, while a field theory in the same region has of order R^3. So the perturbation theory's scaling law for the maximum amount of information in a region large enough to contain a black hole is not consistent with gravitational holography.
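
In Planck units, the two counts scale differently with the size $R$ of the region:

$$ S_{\rm BH} = \frac{A}{4G\hbar} \sim R^2, \qquad S_{\rm field} \sim (\Lambda R)^3, $$

where $\Lambda$ is the field theory's short-distance cutoff, so for large $R$ the field theory volume law always overshoots the black hole area bound.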

Transitioning to an S-matrix picture resolves all these problems, because it gives string theory. In string theory, the perturbation series is on S-matrix particle states, not on field states, so the intermediate states are not localized at individual space-time points. The sum over intermediate states reproduces the fluctuations of an extended object, whose degree-of-freedom count is holographically consistent. The algebra of external operators is given by insertions on the string world sheet (or on a brane world-volume theory), and the number of degrees of freedom in the classical limit of large branes or black holes has the correct holographic scaling. This is not a surprise for gravity, but it is not possible with a naive field theory, because the field theory has many more degrees of freedom at short distances.
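
Schematically, an $n$-particle string amplitude is built from vertex-operator insertions summed over worldsheet topologies,

$$ A_n \sim \sum_{h} g_s^{2h-2} \int \prod_{i=1}^{n} d^2 z_i \;\Big\langle V_1(z_1)\cdots V_n(z_n)\Big\rangle_{\text{genus-}h}, $$

so the external data are asymptotic S-matrix states, never local field operators attached to space-time points.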

't Hooft's argument

The essence of the very wordy argument above can be explained in a short calculation by 't Hooft. He asked: given a Schwarzschild horizon, what is the entropy that you can store in the fields just outside the horizon? You have a fixed energy, you assume the black hole is enormous, and you ask how many different microstates you can fit in the region $r > 2M$.

The answer is easily seen to be divergent. A mode with energy E at infinity has its local energy blueshifted by the redshift factor $\sqrt{1-2M/r} \approx \sqrt{(r-2M)/2M}$ near the horizon. If you fix the total energy, the number of modes of energy less than E in a volume V has a field theory scaling law determined by doing Fourier transforms in a box, and this scaling gives $VE^3$ modes (with a total zero-point energy of order $VE^4$, the vacuum energy divergence, since it counts all the modes once and only once). Because the local energy blows up at the horizon, you get a divergent integral when you count modes just outside the black hole horizon, so that the number of states of energy less than E near a black hole horizon is divergent in any field theory.
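
Concretely, here is a schematic version of the brick-wall count: a mode with energy $E$ at infinity has local energy $E/\sqrt{1-2M/r}$, and the proper volume element is $4\pi r^2\, dr/\sqrt{1-2M/r}$, so the near-horizon contribution to the number of modes below $E$ scales as

$$ N(E) \sim \int_{2M+\epsilon} \frac{4\pi r^2\, dr}{\sqrt{1-2M/r}}\, \frac{E^3}{(1-2M/r)^{3/2}} \;\sim\; \frac{(2M)^4 E^3}{\epsilon} \;\to\; \infty \quad \text{as } \epsilon \to 0, $$

which is why 't Hooft had to cut the modes off at a small distance $\epsilon$ outside the horizon (the "brick wall").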

The resolution for this paradox is to adopt an S-matrix picture for black holes, and renounce most of these degrees of freedom as unphysical. This means that the space-time around a black hole is only a reconstruction from the much smaller number of degrees of freedom of the black hole itself. This is the origin of the principle of holography, and the principle is correct for string perturbation theory.

Within loop quantum gravity, the regulator is completely different, and might not be consistent--- I am not sure, because I do not understand it well enough. The regulated theory should reproduce an S-matrix when it has asymptotic states, but such states are not known in loop gravity. The knot representation, however, makes loops and cuts down the field theoretic degrees of freedom in a way that is reminiscent of holography, so it isn't ruled out automatically.

But just doing a path integral over space-time fields when the space-time includes black holes is plain impossible. Not because of renormalizability (you are right that this is not an issue--- it would be fixed by an ultraviolet fixed point, or asymptotic safety in Weinberg's terminology), but because the number of degrees of freedom in the exterior of a black hole is too large to be physical, leading to a divergent additive constant in the black hole entropy, which is physically ridiculous. A quantum field theory of gravity would, if it were consistent, have to be a remnant theory--- black holes would have to store an unbounded number of internal states--- and this is physically preposterous.

I am sorry that the above sounds more hand-waving than it is; this is more a limitation of my exposition style than of the content. The papers of 't Hooft are from the 1985--93 era in Nuclear Physics B, and the papers of Susskind on holography in string theory are well known classics.


Loop Quantum Gravity is an example of a non-perturbative approach to canonical quantum gravity. In fact it's mentioned in the Wiki article you linked to. Inevitably views about LQG vary, but no one has proven it wrong in the sense of it being experimentally disproven or mathematically inconsistent.


There is indeed nothing wrong with it; people (e.g. 't Hooft) have done perturbative calculations with canonical quantum gravity (or simply with an $L = R^2$ Lagrangian). In this sense it's seen as an effective field theory at lower energy scales. Google 'Wilsonian effective field theory' for more information (I couldn't find a Wiki page).
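
The effective field theory point of view organizes the Lagrangian as a derivative expansion, schematically

$$ \mathcal{L}_{\rm eff} = \frac{1}{16\pi G} R + c_1 R^2 + c_2 R_{\mu\nu}R^{\mu\nu} + O(R^3), $$

with the higher-curvature coefficients absorbing the loop divergences; the corrections are suppressed by powers of the Planck mass, which is what makes low-energy quantum gravity predictions possible despite non-renormalizability.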

However, we would like to know what happens at higher energy scales (regardless of whether they can ever be reached with earth-based experiments). There is a strong feeling that there should be some more fundamental theory which has GR (and the Standard Model, for that matter) as its low-energy effective description.