Difference between constructive Dedekind and Cauchy reals in computation
Once we look at computational content rather than constructive content, the question becomes easier to answer. At the level of individual reals, all the representations are equivalent: a real is computable in one sense if and only if it is computable in the others. However, some of the conversions are not uniform (i.e. there is not in general an algorithm that converts arbitrary reals from one representation to another). This lack of uniformity also presents issues in a constructive context.
A very thorough treatment of the uniformity issue was given by Jeffry L. Hirst, "Representations of reals in reverse mathematics", Bulletin of the Polish Academy of Sciences, Mathematics 55 (2007), 303–316 (PDF). Hirst summarized the known relationships and verified several relationships that had not been treated in the literature.
Here is a table from the paper. It shows the subsystem of second-order arithmetic necessary to (provably) convert a sequence of reals of one form to a sequence of reals of a second form. The use of sequences is a stand-in for uniformity, and has close connections to constructive analysis.
The representations are:
- $\rho$: rapidly converging Cauchy sequence
- $\delta$: decimal expansion
- $\lambda$: Dedekind cut
- $\sigma$: open Dedekind cut
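To make the non-uniformity concrete, here is the one conversion that is straightforwardly uniform: truncating a decimal expansion gives a rapidly converging Cauchy sequence, since the tail beyond the $n$-th digit contributes less than $10^{-n} \le 2^{-n}$. A minimal sketch in Python (the helper names are mine, and I restrict to nonnegative reals for simplicity):

```python
from fractions import Fraction

def decimal_to_cauchy(int_part, digit):
    """Convert a decimal expansion of a nonnegative real to a rapidly
    converging Cauchy sequence.  `digit(k)` is assumed to return the
    k-th decimal digit (k >= 1); `int_part` is the integer part.

    The truncations q(n) satisfy |q(n) - q(m)| < 10^(-n) <= 2^(-n)
    for all m >= n, which is the "rapid convergence" condition.
    """
    def q(n):
        return int_part + sum(Fraction(digit(k), 10**k)
                              for k in range(1, n + 1))
    return q

q = decimal_to_cauchy(0, lambda k: 3)   # x = 0.333... = 1/3
print(q(4))                             # 3333/10000
```

There is no uniform algorithm in the other direction: arbitrarily good rational approximations to a number such as $1$ never settle whether its expansion should begin $0.999\ldots$ or $1.000\ldots$.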
I suspect that similar kinds of results are in the literature on Weihrauch-style computable analysis, but I don't have a particular reference.
In line with what is actually asked in the bold part of the question, I have taken the liberty of changing the title from constructive mathematics to computation. As it stood, the question was essentially a duplicate of another one, which already contains a lot about constructive but non-computational mathematics, so that would be the appropriate place for further such discussion.
In fact Valery linked, at the end of his first paragraph, to @AndrejBauer's Efficient Computation with Dedekind Reals, which describes the ideas behind his Marshall calculator and is based on our Dedekind Reals in Abstract Stone Duality and my Lambda Calculus for Real Analysis.
The setting for this answer is my ASD programme. I refuse to carry the baggage of obsolete programming languages (Turing Machines and Gödel Codings), any form of set theory (CZF, which I think is the setting for Lubarsky and Rathjen's paper, or even a topos) or the two bolted together (computable analysis in Weihrauch's TTE).
Valery's definition of a Dedekind cut is correct, but (to underline my rejection of any form of set theory) I write $\delta d$ and $\upsilon u$ for the down and up predicates instead of his $q\in L$ and $q\in U$ for lower and upper sets.
The predicate $\delta d$ says that (we so far know that) $d\lt x$ where $x$ is the real number being defined and similarly $\upsilon u$ means that (we so far know that) $x\lt u$.
Where this knowledge is incomplete, the pair $(\delta,\upsilon)$ satisfies all but the last axiom (locatedness) and is equivalent to an interval.
In particular, Dedekind cuts are two-sided. Taking the complement would amount to deducing knowledge or termination from ignorance or divergence.
There are plenty of examples of limiting processes that define "uncomputable" numbers that are best thought of as one-sided cuts. For example, the domain of definition of the solution of a differential equation is something that is only known from the inside; it is remarkable if we can obtain some knowledge from the outside, not that the boundary is "uncomputable" in this sense.
In the case where the cut defines a rational number, that rational should occur on neither side (Dedekind's choice was wrong here).
Defining the usual stuff in elementary real analysis is much easier and more natural using Dedekind cuts than with Cauchy sequences. Consider Riemann integration: $\delta d$ holds if there is some polygon (piecewise linear function) that lies below the curve and has area more than $d$; deducing the various properties from this is easy. This is gratuitously difficult if you insist in advance that the polygon have a particular shape of resolution $2^{-n}$ to make a Cauchy sequence.
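As a hedged sketch of this (my own illustration, not ASD syntax): the lower predicate for $\int_a^b f$ is semi-decidable as soon as $f$ can be bounded below on rational intervals. Here `f_lower(lo, hi)` is an assumed helper returning a rational lower bound of $f$ on $[lo, hi]$; the search over ever finer uniform partitions is merely one convenient way of enumerating inscribed polygons, and $\delta d$ fires as soon as one of them has area more than $d$:

```python
from fractions import Fraction

def integral_lower_cut(f_lower, a, b):
    """Dedekind lower cut for the integral of f over [a, b].

    `f_lower(lo, hi)` is assumed to return a rational lower bound of f
    on [lo, hi] (e.g. from interval arithmetic).  delta(d) searches for
    a piecewise-constant function below the curve with area > d; it is
    a semi-decision: it returns True when a witness is found and
    diverges otherwise.
    """
    def delta(d):
        n = 1
        while True:
            w = (b - a) / n                        # width of each piece
            area = sum(f_lower(a + i * w, a + (i + 1) * w) * w
                       for i in range(n))
            if area > d:
                return True                        # witness polygon found
            n *= 2                                 # refine the partition
    return delta

# Example: the integral of x^2 over [0, 1] is 1/3; for lo >= 0 the
# minimum of x^2 on [lo, hi] is lo^2.
delta = integral_lower_cut(lambda lo, hi: lo * lo, Fraction(0), Fraction(1))
print(delta(Fraction(3, 10)))   # True: 3/10 < 1/3
```

For $d$ at or above the true integral the search diverges: the predicate only ever confirms, never refutes.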
The essential difference between a Dedekind cut and a Cauchy sequence as the representation of the solution of a mathematical problem is that Cauchy assumes that the problem has already been solved (to each degree of precision), whereas Dedekind merely combines all the formulae ("compiles" them into an "intermediate code", if you like), so that the problem remains to be solved.
This means that deriving a Cauchy sequence or decimal expansion from a Dedekind cut in general is just as difficult as solving a general real-valued mathematical problem. So I am under no illusion about the difficulty in implementing this proposal.
But it is a novel, and therefore valuable, way of looking at things.
The predicates define open subspaces of $\mathbb R$, so I would think that, in terms of recursion, they have to be $\Sigma^0_1$ or recursively enumerable. (I have thought a little bit about "descriptive set theory" (a name that needs to be changed) but haven't got very far with it.)
These predicates $\delta$ and $\upsilon$ are in the language of ASD and you will need to read the papers that I have linked to find out the details.
In the first instance, such predicates are built from $f(\vec x)\lt g(\vec x)$, where $f$ and $g$ are polynomials, using $\land$, $\lor$, $\exists$, recursion over $\mathbb N$ and universal quantification over closed bounded intervals.
We are given that $\delta d$ and $\upsilon u$ hold for certain rationals $d$ and $u$, and we want to find some $d\lt m\lt u$ that is nearer to the result, together with whether $\delta m$ or $\upsilon m$ holds.
In the case of $f(\vec x)\lt g(\vec x)$ we can use the (formal, interval) derivatives of $f$ and $g$ and the Newton–Raphson algorithm to do this. As is well known, this algorithm doubles the number of bits of precision at each step. This would justify the claim that the computation is efficient if we can extend it to more general formulae.
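As an illustration (a generic interval-Newton sketch in Python, not ASD's actual implementation; all names are mine): take the cut defining $\sqrt2$, where $\delta d \equiv (d \lt 0) \lor (d^2 \lt 2)$ and $\upsilon u \equiv (u \gt 0) \land (u^2 \gt 2)$, built from polynomial inequalities exactly as above. For the root of $h(x) = x^2 - 2$, the interval Newton step replaces an enclosure $[d, u]$ by its intersection with $m - h(m)/h'([d, u])$, where $m$ is the midpoint:

```python
from fractions import Fraction

def newton_step(h, hprime_range, lo, hi):
    """One interval Newton step for the root of h in [lo, hi].

    `hprime_range(lo, hi)` is assumed to return rationals (p, q) with
    p <= h'(x) <= q on [lo, hi] and p > 0, so that dividing by the
    derivative interval is safe.  Returns a new enclosure, obtained by
    intersecting [lo, hi] with m - h(m)/[p, q] for the midpoint m.
    """
    m = (lo + hi) / 2
    p, q = hprime_range(lo, hi)
    v = h(m)
    if v >= 0:                       # m is at or right of the root
        new_lo, new_hi = m - v / p, m - v / q
    else:                            # m is left of the root
        new_lo, new_hi = m - v / q, m - v / p
    return max(lo, new_lo), min(hi, new_hi)

# The cut for sqrt(2): h(x) = x^2 - 2, with h'(x) = 2x increasing,
# so its range on a positive interval [lo, hi] is [2*lo, 2*hi].
h = lambda x: x * x - 2
hp = lambda lo, hi: (2 * lo, 2 * hi)

lo, hi = Fraction(1), Fraction(2)
for _ in range(4):
    lo, hi = newton_step(h, hp, lo, hi)
    print(float(lo), float(hi))      # the width roughly squares each step
```

The quadratic shrinking of the width is what underlies the claim that each step doubles the number of correct bits.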
For more complicated logical expressions, Andrej developed a method of approximating them with simpler ones; see his paper for the details. This is closely analogous to methods of approximating real-valued functions by polynomials of lower degree, which have a very long history and have been developed for interval computation by Michal Konecny. Whilst Michal has used these computationally himself, they have not yet been incorporated into Andrej's calculator; I hope that this will be done in future.
Computations like this with so-called "exact real arithmetic" are necessarily non-deterministic with regard to the Cauchy sequences that they produce, which means that the existence of the completed sequence relies on Dependent Choice. This is entirely natural from a computational point of view and I don't see what use there is in discussing it as an issue of constructivity.
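As a toy illustration of where the non-determinism comes from (my own sketch; the predicates are modelled as Python generators that run until, if ever, their claim is verified): narrowing a cut means dovetailing the two semi-decisions $\delta m_1$ and $\upsilon m_2$ for some $d \lt m_1 \lt m_2 \lt u$. Which branch answers first depends on the running times of the two searches, not only on the real number itself, so two extensionally equal cuts can emit different, equally valid Cauchy sequences:

```python
def dovetail(proc_a, proc_b):
    """Interleave two semi-decision procedures one step at a time,
    returning 'a' or 'b' according to which verifies first.  If
    neither ever verifies, this loops forever, as it must.
    """
    while True:
        try:
            next(proc_a)             # one step of the first search
        except StopIteration:
            return 'a'
        try:
            next(proc_b)             # one step of the second search
        except StopIteration:
            return 'b'

def semi(steps):
    """A stand-in semi-decision that verifies after `steps` steps."""
    for _ in range(steps):
        yield                        # still searching
    # falling off the end means the claim has been verified

print(dovetail(semi(3), semi(5)))    # 'a': it happened to finish first
```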
As others have mentioned, there are many different variations in the exact notion of Cauchy reals and Dedekind reals which affect the answer.
I will choose variations so that I can offer a counterpoint to Paul Taylor's claim that Dedekind cuts represent a problem that "remains to be solved." If one uses the propositions-as-types correspondence to encode Dedekind cuts as "predicative subsets" $L, U : \mathbb{Q} \to \mathcal{U}$, where $\mathcal{U}$ is the universe of types, and also reinterprets the rules of Dedekind cuts using propositions-as-types, for instance
- inhabitedness: $\sum_{x : \mathbb{Q}} x \in L$ and $\sum_{x : \mathbb{Q}} x \in U$
- locatedness: $\prod_{q, r : \mathbb{Q}} q < r \to (q \in L) + (r \in U)$
then Dedekind cuts do represent problems which have already been solved as well. In particular, the inhabitedness and locatedness proofs above give the computational content. By inhabitedness, one can determine that the real number lies within some finite open interval. Then, by repeatedly using locatedness to cover this interval with several smaller ones, one can narrow down the interval where the real number must lie to an arbitrarily small width, as in the sketch below.
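Here is a minimal sketch of that narrowing process (in Python; the oracle name and interface are my own rendering of the locatedness rule, which for $q \lt r$ returns a witness of $(q \in L) + (r \in U)$):

```python
from fractions import Fraction

def narrow(located, lo, hi, eps):
    """Narrow an interval known to contain the real to width < eps.

    `located(q, r)` is assumed to realize locatedness: for q < r it
    returns 'L' (meaning q is in the lower cut) or 'U' (meaning r is
    in the upper cut); when both hold, either answer is acceptable.
    Each call shrinks the interval by a factor of 2/3.
    """
    while hi - lo >= eps:
        third = (hi - lo) / 3
        q, r = lo + third, hi - third        # lo < q < r < hi
        if located(q, r) == 'L':
            lo = q                           # the real lies in (q, hi)
        else:
            hi = r                           # the real lies in (lo, r)
    return lo, hi

# An illustrative located oracle for sqrt(2).
def located_sqrt2(q, r):
    return 'L' if q < 0 or q * q < 2 else 'U'

print(narrow(located_sqrt2, Fraction(1), Fraction(2), Fraction(1, 100)))
```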
Conversely, we can put the Cauchy definition on the same footing as the Dedekind definition by treating it as a "metric completion" in the sense of Steve Vickers's Localic completion of generalized metric spaces I. In this framework, a Cauchy real is a predicate $B : \mathbb{Q} \times \mathbb{Q}^+ \to \mathcal{U}$ on "formal balls", where $B(q, \varepsilon)$ holds if the real number is (strictly) within $\varepsilon$ from $q$. Then one of the rules which $B$ must satisfy is $$\prod_{\varepsilon : \mathbb{Q}^+} \sum_{q : \mathbb{Q}} B(q, \varepsilon),$$ which says topologically that for arbitrarily small $\varepsilon$, $\mathbb{R}$ is covered by balls with rational centers and radius $\varepsilon$, and computationally, that we can compute some $q$ within $\varepsilon$ of the real number.
The axioms defining valid Dedekind cuts as well as valid Cauchy predicates on formal balls provide a computational interface for computing with real numbers. The axioms also have a particular "geometric" form, which Vickers explains, and each suffices to define a topological space (or, more accurately, a space within the framework of locale theory or formal topology). Vickers proves (Theorem 26) that these two spaces, the order-theoretic (Dedekind) and metric (Cauchy), are homeomorphic, meaning that it is possible to use one computational interface to implement the other.
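Read computationally, one direction of that homeomorphism is short enough to sketch, reusing the hypothetical `narrow` and `located_sqrt2` from above: the inhabitedness proofs supply a starting interval, repeated locatedness narrows it, and the midpoint witnesses the metric rule $\prod_{\varepsilon : \mathbb{Q}^+} \sum_{q : \mathbb{Q}} B(q, \varepsilon)$:

```python
from fractions import Fraction

def dedekind_to_metric(start_lo, start_hi, located):
    """Implement the metric (formal-ball) interface from the Dedekind
    one.  `start_lo` and `start_hi` come from the inhabitedness
    proofs (start_lo in L, start_hi in U); `located` is as in
    `narrow` above.  The returned approx(eps) yields a rational q
    with |x - q| < eps, i.e. a witness for B(q, eps).
    """
    def approx(eps):
        lo, hi = narrow(located, start_lo, start_hi, eps)
        return (lo + hi) / 2     # within eps of x, since x is in (lo, hi)
    return approx

approx = dedekind_to_metric(Fraction(1), Fraction(2), located_sqrt2)
print(float(approx(Fraction(1, 10**6))))   # approximately 1.4142136
```

The other direction is equally mechanical: $B(q, \varepsilon)$ implies $q - \varepsilon \in L$ and $q + \varepsilon \in U$.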
In general, the points of spaces formulated in this way may not be sets (predicatively), since one can vary the universe level of the predicative subsets. But $\mathbb{R}$ is in fact small (see Palmgren's Predicativity problems in point-free topology), so the points of $\mathbb{R}$ do form a set.
In terms of efficiency, one would probably consider the metric API more "efficient", since approximating to a rational with a tolerance of $\varepsilon$ requires one "API call", whereas one needs a variable number of uses of the "locatedness" rule for Dedekind cuts, and each additional call provides only one more bit of information.