First-order logic where constants play the role of variables

The obvious reason to distinguish constants from variables is that sometimes we really do want constants as part of our structure, and not just constants that are uniquely determined by the rest of the structure, like the identity of a group. For instance, a perfectly natural sort of mathematical structure to study is a commutative ring with a distinguished element called $x$ (these are also known as commutative $\mathbb{Z}[x]$-algebras). If you don't distinguish between constants and variables, you can't distinguish this structure from the structure of a commutative ring. The key reason that this distinguished constant $x$ is different from a variable is that to define a model of this theory, you need to define an interpretation for $x$, but you don't need to define an interpretation for the infinitely many variables you have (or "unused constant symbols", as you call them).

It gets even worse if you try to add axioms. For instance, you might also study commutative rings with a distinguished element $x$ such that $x^2=0$ (these are also known as commutative $\mathbb{Z}[x]/(x^2)$-algebras). If you add an axiom $x^2=0$ to express this, then by your $\forall$ introduction rule, your structure would need to satisfy $(\forall x)\,x^2=0$, which, together with the other ring axioms, implies that your ring has only one element. (This was based on a misinterpretation of what you meant.)
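To spell out why $(\forall x)\,x^2=0$ forces a one-element ring (assuming, as usual, that rings are unital):

$$1 = 1 \cdot 1 = 1^2 = 0, \qquad \text{so for every element } a: \quad a = a \cdot 1 = a \cdot 0 = 0.$$

So the only model of "commutative ring plus $(\forall x)\,x^2=0$" is the zero ring, which is certainly not what was intended.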

Your argument about "Axioms with existential statements vs. axioms with constant symbols" strikes me as kind of silly. Yes, sometimes the presence of constant symbols that aren't really necessary can make it slightly more complicated to compare two theories. But it is extremely common throughout logic for theories to have multiple different "equivalent" formulations which live over different languages, possibly in much more complicated ways than just inclusion of constant symbols. For instance, you can axiomatize groups using just a binary operation which sends $(x,y)$ to $xy^{-1}$. Eliminating the distinction between constants and variables is hardly going to eliminate the need to have a robust way of defining what this sort of "equivalence" really is.
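For concreteness, here is a sketch of how the usual group operations can be recovered from the single "division" operation $x/y := xy^{-1}$ (the exact axiom list varies between presentations, so take this as just the definitional translation; the axioms must in particular guarantee that $x/x$ does not depend on $x$):

$$e := x/x, \qquad y^{-1} := (x/x)/y, \qquad x \cdot y := x/\big((y/y)/y\big).$$

One checks that $(y/y)/y = e\,y^{-1} = y^{-1}$, so $x/\big((y/y)/y\big) = x\,(y^{-1})^{-1} = xy$, as desired. A theory in this one-operation language and the usual theory of groups are "equivalent" without either language including the other.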

It's also unclear to me what you're trying to do with $\exists$ elimination. Yes, you can go from $\exists x:x^2=-1$ to $x^2=-1$. But stating the axiom $\exists x:x^2=-1$ does not mean that $x^2=-1$ is true and that you can refer to $x$ in other axioms as though this were true. It is only true in the context of an argument in which you have derived $x^2=-1$ as an instantiation of $\exists x:x^2=-1$, in which case I don't see what you have to gain by thinking of $x$ as a "constant" instead of a "variable". Note that including an axiom $x^2=-1$ where $x$ is a constant symbol actually gives you a genuinely different mathematical structure than including an axiom $\exists x:x^2=-1$ does: in the first case, complex conjugation is not an automorphism of $\mathbb{C}$ (since it doesn't fix the distinguished constant $x$), but in the second case it is. If you're trying to say that whenever you have an axiom $\exists x:x^2=-1$ you should also be able to get $x^2=-1$ for free, then it sounds like you're saying you don't want to allow $\mathbb{C}$ as a structure in which conjugation is an automorphism.
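To make the automorphism point concrete: take $\mathbb{C}$ with the constant symbol $x$ interpreted as $i$, and let $\sigma(z)=\bar z$ be complex conjugation. Then

$$\sigma(x^{\mathbb{C}}) = \sigma(i) = -i \neq i = x^{\mathbb{C}},$$

so $\sigma$ is a ring automorphism of $\mathbb{C}$ but not an automorphism of the expanded structure $(\mathbb{C},+,\cdot,0,1,x^{\mathbb{C}})$. In the plain ring language, $\mathbb{C}$ still satisfies $\exists x:x^2=-1$, and conjugation remains an automorphism.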

That said, the idea of treating certain free variables as though they are new constant symbols is a very natural one that is used all the time in model theory. For instance, it is convenient to think of a formula with $n$ free variables as really being a sentence over an enlarged language which has $n$ new constant symbols (this is basically the idea of "types" in model theory).
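Concretely, the correspondence runs:

$$\varphi(x_1,\ldots,x_n) \text{ over } L \quad\longleftrightarrow\quad \varphi(c_1,\ldots,c_n) \text{ over } L \cup \{c_1,\ldots,c_n\},$$

and a model of the sentence on the right is exactly a model of the original theory together with a choice of $n$ elements satisfying $\varphi$, which is how realizations of types are usually phrased.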


On a fundamental level (that is, not relative to the theory under consideration), Bourbaki does not define constants as separate from variables. To him, a constant of a theory is just a variable that occurs (freely) in the axioms of the theory. Clearly, if you don't fundamentally distinguish constants from variables, then you need theories where a formula $A$ can be a theorem without $\forall{x}A$ being a theorem; thus the more fundamental question, perhaps, is whether such theories should be allowed. It does seem slightly useful to define variables so that they are the same in every language, reducing the bother of worrying about what the variables are, whereas $0$-ary function symbols (constant symbols) depend on the language; but this feels technically convenient rather than fundamentally useful.

Say you are trying to prove $A \rightarrow B$. Ideally you would assume $A$ by temporarily treating it as an axiom, then prove $B$. But technically, without allowing axioms whose generalizations don't hold, one can't assume $A$ this way. Instead, the modern way in practice is to temporarily pretend the free variables of $A$ are new constants and then use the theorem on constants at the end to replace the constants with variables. This is an inelegant nuisance which becomes unnecessary if one does things more like Bourbaki. True, you don't automatically get that $\forall{x}A$ is a theorem whenever $A$ is a theorem, but so what? Instead you have the equally useful rule that $\forall{x}A$ is a theorem whenever $A$ is a theorem and $x$ does not appear freely in an axiomatization of the theory (i.e., it is not a constant of the theory, in Bourbaki's terminology). True, when giving axioms, Bourbaki must be clear whether the statements themselves or their generalizations are intended as axioms, but that's just a mild nuisance.
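As a sketch, the standard "theorem on constants" workflow for proving $A(x) \rightarrow B(x)$ runs:

$$\begin{aligned}
&1.\ \text{Replace the free variable } x \text{ by a fresh constant symbol } c.\\
&2.\ \text{Assume } A(c) \text{ and derive } B(c).\\
&3.\ \text{Deduction theorem: } \vdash A(c) \rightarrow B(c).\\
&4.\ \text{Theorem on constants: since } c \text{ is fresh, } \vdash A(x) \rightarrow B(x).
\end{aligned}$$

In Bourbaki's setup, steps 1 and 4 disappear: one simply adds $A$ itself (with $x$ free) as a temporary axiom, which makes $x$ a constant of the enlarged theory and thereby blocks generalization on $x$, which is exactly what is needed.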

Your first argument, about logical implication for formulas, is spot on. Whether one feels it is more fundamental to interpret just the function symbols, or the function symbols and the variables simultaneously, it is definitely convenient to be able to do the latter in one fell swoop, as one might by simply interpreting everything into (say) a complete Henkin extension of the theory. This one can't do if by "complete" one means a consistent theory generated by sentences in which every sentence or its negation is in the theory; one needs instead the broader notion of a consistent theory, not necessarily generated by sentences, in which every formula or its negation is in the theory. True, the former standard notion of completeness may be more fundamental (or not!), but that's not the point--it's convenient to have both concepts available, with a name for each.