Faster arithmetic with finite continued fractions
There are many advantages to representing numbers in their continued fraction form. Calculations can be performed, using Gosper's algorithm, with as much accuracy as you'd like.
Heckman's LFT approach is the same as evaluating with Wallis' fundamental recurrence formula. The linear fractional transformation (LFT) representation can be easier to understand, but the results are identical. Since all the transforms involved here are affine, the full generality of the LFT machinery carries some extra overhead.
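To illustrate the equivalence, here's a minimal sketch (Python; the function names are my own, not from any particular package) that evaluates a finite continued fraction $[a_0; a_1, \dots, a_n]$ both ways: once with the fundamental recurrence for the convergents $h_n/k_n$, and once as a product of $2\times2$ LFT matrices.

```python
from fractions import Fraction

def via_recurrence(coeffs):
    """Evaluate [a0; a1, a2, ...] with Wallis' fundamental recurrence:
    h_n = a_n*h_{n-1} + h_{n-2},  k_n = a_n*k_{n-1} + k_{n-2}."""
    h_prev, h = 1, coeffs[0]
    k_prev, k = 0, 1
    for a in coeffs[1:]:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
    return Fraction(h, k)

def via_lft(coeffs):
    """Same value as a product of 2x2 LFT matrices: each term a
    contributes [[a, 1], [1, 0]]; the product's first column is (h_n, k_n)."""
    p, q, r, s = 1, 0, 0, 1          # identity transform
    for a in coeffs:
        p, q, r, s = p * a + q, p, r * a + s, r
    return Fraction(p, r)

cf = [1, 2, 2, 2]                    # a convergent of sqrt(2)
assert via_recurrence(cf) == via_lft(cf)   # both give 17/12
```

The matrix update in `via_lft` is exactly the recurrence written in matrix form, which is why the results must agree term by term.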
Many common mathematical constants, like $ \phi = 1 + {{1}\over{1+{{1}\over{1+\ddots}}}}$, $\pi = {4\over{1+{1\over{3+{4\over{5+\ddots}}}}}} $, and $e=2+{1\over{1+{1\over{2+{1\over{1+{1\over{1+{1\over{4+\ddots}}}}}}}}}}$, have exact representations as continued fractions. This means many values can be computed exactly, to as much precision as you'd like. In fact any quadratic surd (an irrational solution of a quadratic equation $ax^2+bx+c=0$ with integer coefficients) has an eventually repeating continued fraction, and conversely any repeating continued fraction represents a quadratic surd. So the solution of any such quadratic can be represented exactly.
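For example, here is a small sketch of the classical $P_k,Q_k$ recurrence (the function name is my own) that generates the repeating continued fraction of $\sqrt{n}$, making the periodicity visible directly:

```python
import math

def sqrt_cf(n, terms=12):
    """Continued fraction of sqrt(n) for non-square n, via the classical
    recurrence on x_k = (sqrt(n) + P_k) / Q_k with a_k = floor(x_k)."""
    a0 = math.isqrt(n)
    if a0 * a0 == n:
        return [a0]                  # perfect square: the CF is just [a0]
    coeffs, P, Q, a = [a0], 0, 1, a0
    for _ in range(terms - 1):
        P = a * Q - P
        Q = (n - P * P) // Q
        a = (a0 + P) // Q
        coeffs.append(a)
    return coeffs

print(sqrt_cf(2, 6))   # sqrt(2) = [1; 2, 2, 2, ...]
print(sqrt_cf(7, 6))   # sqrt(7) = [2; 1, 1, 1, 4, ...] (period 1, 1, 1, 4)
```

Because $P_k$ and $Q_k$ can only take finitely many values, the sequence must eventually cycle, which is exactly the repetition the theorem guarantees.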
I've been developing a computational package based on these algorithms for about the last 5 years. The same algorithms that allow Möbius and tensor transforms of continued fractions also work for polynomials, allowing manipulation of complicated functions. These Euler continued fractions have far superior convergence properties in the complex plane.
Note: the disadvantage you mentioned need not be true. Remember that with Gosper's algorithm you're 'emitting' values that reduce the size of the integers used in the calculation (analogous to the reductions in Euclid's algorithm). No such reduction in size occurs with ordinary fraction arithmetic.
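Here's a sketch of that emission step for a single homographic transform $x \mapsto {px+q\over rx+s}$ (the function name is mine, and it assumes a positive-valued transform of a positive continued fraction): whenever the integer parts of $p/r$ and $q/s$ agree, an output term is determined, and emitting it shrinks the state just like one step of Euclid's algorithm.

```python
def gosper_homographic(coeffs, p, q, r, s):
    """Apply x -> (p*x + q)/(r*x + s) to the finite continued fraction
    `coeffs`, emitting output terms as soon as they are determined.
    Assumes positive terms, so floor division behaves as intended."""
    out = []
    for a in coeffs:
        # absorb one input term: x -> a + 1/x
        p, q, r, s = p * a + q, p, r * a + s, r
        # emit while the integer parts of p/r and q/s agree
        while r != 0 and s != 0 and p // r == q // s:
            t = p // r
            out.append(t)
            # subtracting t and reciprocating keeps the integers small,
            # exactly like a reduction step in Euclid's algorithm
            p, q, r, s = r, s, p - t * r, q - t * s
    # input exhausted: the remaining value is p/r, expand it directly
    while r != 0:
        t = p // r
        out.append(t)
        p, r = r, p - t * r
    return out

# doubling 7/5 = [1; 2, 2] gives 14/5 = [2; 1, 4]
print(gosper_homographic([1, 2, 2], 2, 0, 0, 1))
```

The identity transform ($p,q,r,s = 1,0,0,1$) just echoes the input terms back, which is a handy sanity check on the bookkeeping.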