Is there any formal foundation to ultrafinitism?
Wikipedia also says that Troelstra said in 1988 that there were no satisfactory foundations for ultrafinitism. Is this still true? Even if so, are there any aspects of ultrafinitism that you can get your hands on coming from a purely classical perspective?
There are no foundations for ultrafinitism that are as satisfactory for it as, say, intuitionistic logic is for constructivism. The reason is that the question of which logic is appropriate for ultrafinitism is still open, for not one but several different reasons.
First, from a traditional perspective -- whether classical or intuitionistic -- classical logic is the appropriate logic for finite collections (though not for merely Kuratowski-finite ones). The idea is that a finite collection is surveyable: we can enumerate and look at each element of any finite collection in finite time. (For example, the elementary topos of finite sets is Boolean.) However, this is not faithful to the ultra-intuitionist idea that a sufficiently large set is impractical to survey.
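To make "surveyable" concrete, here is a minimal sketch in Haskell (the names are invented for the example): over a finite collection, universal and existential statements about a decidable property can themselves be decided by brute-force enumeration, which is the computational reason classical reasoning is unproblematic for finite sets.

```haskell
-- Surveyability: over a finite collection we can decide quantified
-- statements by exhaustive enumeration, so excluded middle holds "for free".
forAllFin :: (a -> Bool) -> [a] -> Bool
forAllFin p = all p

existsFin :: (a -> Bool) -> [a] -> Bool
existsFin p = any p

-- For any finite xs and decidable p, exactly one of
--   forAllFin p xs  and  existsFin (not . p) xs
-- is True.
main :: IO ()
main = do
  let xs = [1 .. 100] :: [Int]
  print (forAllFin even xs, existsFin (not . even) xs)  -- (False,True)
```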
So it shouldn't be surprising that more-or-less ultrafinitist logics arise from complexity theory, which identifies "practical" with "polynomial time". I know two strands of work on this. The first is Buss's work on the bounded arithmetic $S^1_2$, a weakening of Peano arithmetic whose induction is the weaker "polynomial induction" principle (for suitably restricted formulas $A$):
$$A(0) \land (\forall x.\; A(\lfloor x/2 \rfloor) \Rightarrow A(x)) \Rightarrow \forall x.\; A(x)$$
Then any proof of a forall-exists statement has to be realized by a polynomial-time computable function. There is also a line of work on bounded set theories based on Buss's logic, with which I am not very familiar.
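As a rough illustration of why this induction principle matches polynomial time (my analogy, not part of Buss's formalism): the induction hypothesis is invoked at $\lfloor x/2 \rfloor$, so it mirrors recursion on the binary representation of a number, and the recursion depth is the bit length of $x$ rather than $x$ itself.

```haskell
-- Recursion in the shape of polynomial induction: the recursive call is
-- at (x `div` 2), so the depth is the bit length of x, not x itself.
bitLength :: Integer -> Integer
bitLength 0 = 0
bitLength x = 1 + bitLength (x `div` 2)

main :: IO ()
main = print (bitLength (2 ^ 100))  -- 101, after ~101 recursive calls
```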
The second is a descendant of Bellantoni and Cook's work on programming languages for polynomial time, and of Girard's work on linear logic. The Curry-Howard correspondence maps functional languages to logical systems, with types corresponding to propositions, terms to proofs, and evaluation to proof normalization. So the complexity of a functional program corresponds, in some sense, to the practicality of cut-elimination for a logic.
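A tiny sketch of the correspondence (in Haskell, purely as an illustration): a type doubles as a proposition, a term of that type is a proof of it, and applying terms to one another is how proofs compose.

```haskell
-- Types as propositions, terms as proofs (Curry-Howard).
-- The proposition A -> (B -> A) is proved by the term below.
weaken :: a -> b -> a
weaken x _ = x

-- Modus ponens is just function application;
-- evaluating an application is proof normalization.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

main :: IO ()
main = print (modusPonens (+ 1) (weaken 41 "ignored"))  -- 42
```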
IIRC, Girard subsequently showed that for a suitable version of affine logic, cut-elimination can be shown to take polynomial time. Similarly, you can build set theories on top of affine logic. For example, Kazushige Terui has since described a set theory, Light Affine Set Theory, whose ambient logic is light affine logic (a variant of linear logic), and in which the provably total functions are exactly the polytime functions. (Note that this means that for Peano numerals, multiplication is total but exponentiation is not --- so Peano and binary numerals are not isomorphic!)
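To see why the parenthetical remark is plausible (my own gloss, not Terui's presentation): with unary (Peano) numerals the size of a numeral equals its value, so the output of multiplication has size about $m \cdot n$, polynomial in the sizes of the inputs, while the output of exponentiation has size about $n^m$, which no polynomial-time function can even write down.

```haskell
-- Unary (Peano) numerals: the size of a numeral equals its value.
data Nat = Z | S Nat

add :: Nat -> Nat -> Nat
add Z     n = n
add (S m) n = S (add m n)

-- Output size ~ m * n: still polynomial in the sizes of the inputs.
mul :: Nat -> Nat -> Nat
mul Z     _ = Z
mul (S m) n = add n (mul m n)

-- pow m n computes n^m; output size ~ n^m, exponential in the input
-- sizes, so a polytime-bounded system cannot prove it total on unary numerals.
pow :: Nat -> Nat -> Nat
pow Z     _ = S Z
pow (S m) n = mul n (pow m n)

toInt :: Nat -> Integer
toInt Z     = 0
toInt (S n) = 1 + toInt n

main :: IO ()
main = print (toInt (pow (S (S Z)) (S (S (S Z)))))  -- 3^2 = 9
```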
These proof-theoretic questions arise because part of what makes the ultra-intuitionist conception of the numerals coherent is precisely the denial of large proofs. If you deny that large integers exist, then a proof that they exist which is itself larger than the biggest number you accept doesn't count! I enjoyed Vladimir Sazonov's paper "On Feasible Numbers", which explicitly studies this connection.
I should add that I am not a specialist in this area, and what I've written is just the fruits of my interest in the subject -- I have almost certainly overlooked important work, for which I apologize.
I've been interested in this question for some time. I haven't put any serious thought into it, so all I can offer is a further question rather than an answer. (I'm interested in the answers that have already been given though.) My question is this. Is there a system of logic that will allow us to prove only statements that have physical meaning? I don't have a formal definition of "physically meaningful" so instead let me try to illustrate what I mean by an example or two.
Consider first the statement that the square root of 2 is irrational. What would be its physical meaning? A naive suggestion would be that if you drew an enormous grid of squares of side length one centimetre and then measured the distance between (0,0) and (n,n) for some n, then the result would never be an integer number of centimetres. But this isn't physically meaningful according to my nonexistent definition, because you can't measure to infinite accuracy. However, the more finitistic statement that the square root of 2 can't be well approximated by rationals has at least some meaning: it tells us that if n isn't too large then there will be an appreciable difference between the distance from (0,0) to (n,n) and the nearest integer.
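To make that concrete (my own gloss, using the standard bound $|\sqrt 2 - p/q| > 1/(3q^2)$ for all rationals $p/q$): the distance $n\sqrt 2$ misses the nearest integer by more than $1/(3n)$, which is an appreciable gap for feasible $n$. A quick numerical check:

```haskell
-- Distance from n*sqrt(2) to the nearest integer; it always exceeds 1/(3n),
-- so for feasible n the gap is measurable. (Floating point is fine here,
-- since this is only an illustration.)
gap :: Integer -> Double
gap n = let d = fromIntegral n * sqrt 2
        in abs (d - fromIntegral (round d :: Integer))

main :: IO ()
main = mapM_ check [5, 12, 29, 70, 169]  -- denominators of the best approximations
  where
    check n = print (n, gap n, 1 / (3 * fromIntegral n) :: Double)
```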
As a second example, take the statement that the sum of the first n positive integers is n(n+1)/2. If n is too huge, then there is no hope of arranging a huge triangular array and counting how many points are in it. So one can't check this result experimentally once n is above a certain threshold (though there might be ingenious ways of checking it that are better than the obvious method). This shows that we can't apply unconstrained induction, but there could be a principle that said something like, "If you keep on going for as long as is practical, then the result will always hold."
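Just to pin down the "obvious method" (a throwaway illustration): one can literally count and compare with the closed form, and the cost of doing so grows with n, which is exactly the practical limit in question.

```haskell
-- The "obvious method": literally sum 1 + 2 + ... + n and compare with
-- n(n+1)/2. Feasible only while n itself is feasible.
checkTriangle :: Integer -> Bool
checkTriangle n = sum [1 .. n] == n * (n + 1) `div` 2

main :: IO ()
main = print (all checkTriangle [0 .. 10000])
```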
One attitude one might take is that this would be to normal classical mathematics as the use of epsilons and deltas is to the mathematics of infinities and infinitesimals. One could try to argue that statements that appear to be about arbitrarily large integers or arbitrarily small real numbers (or indeed any real numbers to an arbitrary accuracy) are really idealizations that are a convenient way of talking about very large integers, very small real numbers and very accurate measurements.
If I try to develop this kind of idea I rapidly run into difficulties. For example, what is the status of the argument that proves that the sum of the first n integers is what it is because you can pair them off in a nice way? In general, if we have a classical proof that something will be the case for every n, what do we gain from saying (in some other system) that the conclusion of the proof holds only for every "feasible" n? Why not just say that the classical result is valid, and that this implies that all its "feasible manifestations" are valid?
Rather than continue with these amateur thoughts, I'd just like to ask whether similar ideas are out there in a better form. Incidentally, I'm not too fond of Zeilberger's proposal because he has a very arbitrary cutoff for the highest integer -- I'd prefer something that gets fuzzier as you get larger.
Edit: on looking at the Sazonov paper, I see that many of these thoughts are in the introduction, so that is probably a pretty good answer to my question. I'll see whether I find what he does satisfactory.
Here is an ultrafinitist manifesto I co-wrote a few years ago:
http://arxiv.org/pdf/cs/0611100v1
It is absolutely not a finished paper (there are a few inaccuracies, and many parts are sloppy), but it contains a brief history of ultrafinitistic ideas from the early Greeks all the way to the present, a number of references, and a sketch of a programme towards a model theory and a proof theory of ultrafinitistic mathematics.
You may also want to google the FOM list for "ultrafinitism"; there are a few posts by Podnieks, Sazonov, myself, and a few others, both pro and contra ultrafinitism.