Is C++ context-free or context-sensitive?
First, you rightly observed that there are no context-sensitive rules in the grammar at the end of the C++ standard, so that grammar is context-free.
However, that grammar doesn't precisely describe the C++ language, because it produces non-C++ programs such as
int m() { m++; }
or
typedef static int int;
The C++ language defined as "the set of well-formed C++ programs" is not context-free (it's possible to show that merely demanding variables to be declared already makes it non-context-free). Given that you can theoretically write Turing-complete programs in templates and make a program ill-formed based on their result, it's not even context-sensitive.
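To make the second point concrete, here is a minimal sketch (my example, not part of the original argument; the name Fact is mine): whether the program below is well-formed depends on the result of a compile-time template computation.

template<int N> struct Fact { static const int value = N * Fact<N - 1>::value; };
template<>      struct Fact<0> { static const int value = 1; };

// Well-formed only because the metafunction evaluates to 120; change the
// expected value and the same program becomes ill-formed.
static_assert(Fact<5>::value == 120, "well-formedness decided by a compile-time computation");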
Now, (ignorant) people (usually not language theorists, but parser designers) typically use "not context-free" with one of the following meanings:
- ambiguous
- can't be parsed with Bison
- not LL(k), LR(k), LALR(k) or whatever parser-defined language class they chose
The grammar at the back of the standard doesn't fall into these categories (i.e. it is ambiguous, not LL(k), ...), so the C++ grammar is "not context-free" for them. And in a sense they're right: it is damn hard to produce a working C++ parser.
Note that the properties used here are only weakly connected to context-free languages - ambiguity doesn't have anything to do with context-sensitivity (in fact, context-sensitive rules typically help disambiguate productions), and the other two are merely subsets of context-free languages. And parsing context-free languages is not a linear-time process (although parsing deterministic ones is).
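As a concrete illustration of ambiguity in the standard's grammar (my example, the names are hypothetical): the declaration below matches two different productions, and the standard resolves the conflict with an explicit rule (anything that can be parsed as a declaration is a declaration), not with context-sensitivity. This is the well-known "most vexing parse".

struct Widget {};

// Grammatically this is either an object w initialized from a temporary, or a
// function w taking an unnamed parameter of function (pointer) type; the
// standard's disambiguation rule picks the function declaration.
Widget w(Widget());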
Yes. The following expression has a different order of operations depending on the context established by type resolution:
Edit: When the actual order of operations varies, it makes it incredibly difficult to use a "regular" compiler that parses to an undecorated AST before decorating it (propagating type information). Other context-sensitive things mentioned are "rather easy" compared to this (not that template evaluation is at all easy).
#if FIRST_MEANING
template<bool B>
class foo
{ };
#else
static const int foo = 0;
static const int bar = 15;
#endif
Followed by:
static int foobar( foo < 2 ? 1 < 1 : 0 > & bar );
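To spell out the two readings (my paraphrase, not part of the original example):

// If FIRST_MEANING is set, foo names a class template, so the line is a function
// declaration: the template argument is the conditional expression 2 ? 1 < 1 : 0,
// and the parameter bar is a reference to foo<false>.
//
// Otherwise foo and bar are integer constants, and the same tokens form an
// initializer expression grouped as (foo < 2) ? (1 < 1) : (0 > &bar), where <, >
// and & now mean less-than, greater-than and address-of.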
Below is my (current) favorite demonstration of why parsing C++ is (probably) Turing-complete, since it shows a program which is syntactically correct if and only if a given integer is prime.
So I assert that C++ is neither context-free nor context-sensitive.
If you allow arbitrary symbol sequences on both sides of any production, you produce a Type-0 grammar ("unrestricted") in the Chomsky hierarchy, which is more powerful than a context-sensitive grammar; unrestricted grammars are Turing-complete. A context-sensitive (Type-1) grammar allows multiple symbols of context on the left-hand side of a production, but the same context must appear on the right-hand side of the production (hence the name "context-sensitive"). [1] Context-sensitive grammars are equivalent to linear-bounded Turing machines.
In the example program, the prime computation could be performed by a linear-bounded Turing machine, so it does not quite prove Turing equivalence, but the important part is that the parser needs to perform the computation in order to perform syntactic analysis. It could have been any computation expressible as a template instantiation, and there is every reason to believe that C++ template instantiation is Turing-complete. See, for example, Todd L. Veldhuizen's 2003 paper, "C++ Templates are Turing Complete".
Regardless, C++ can be parsed by a computer, so it could certainly be parsed by a Turing machine. Consequently, an unrestricted grammar could recognize it. Actually writing such a grammar would be impractical, which is why the standard doesn't try to do so. (See below.)
The issue with "ambiguity" of certain expressions is mostly a red herring. To start with, ambiguity is a feature of a particular grammar, not a language. Even if a language can be proven to have no unambiguous grammars, if it can be recognized by a context-free grammar, it's context-free. Similarly, if it cannot be recognized by a context-free grammar but it can be recognized by a context-sensitive grammar, it's context-sensitive. Ambiguity is not relevant.
But in any event, like the line in main() below marked // Syntax error if not prime (i.e. auto b = foo<IsPrime<234799>>::typen<1>();), the expressions are not ambiguous at all; they are simply parsed differently depending on context. In the simplest expression of the issue, the syntactic category of certain identifiers is dependent on how they have been declared (types and functions, for example), which means that the formal language would have to recognize the fact that two arbitrary-length strings in the same program are identical (declaration and use). This can be modelled by the "copy" grammar, the grammar which recognizes two consecutive exact copies of the same word. It's easy to prove with the pumping lemma that this language is not context-free. A context-sensitive grammar for this language is possible, and a Type-0 grammar is provided in the answer to this question: https://math.stackexchange.com/questions/163830/context-sensitive-grammar-for-the-copy-language .
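A minimal self-contained illustration (the identifiers are mine): the statement T * a; consists of exactly the same tokens in both functions below, but it is a declaration in one and an expression in the other, depending solely on how T was declared.

namespace type_context {
  struct T {};
  void f() { T * a; }   // T names a type: declares a as a pointer to T
}
namespace value_context {
  int T = 3, a = 4;
  void f() { T * a; }   // T names a variable: multiplies T by a and discards the result
}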
If one were to attempt to write a context-sensitive (or unrestricted) grammar to parse C++, it would quite possibly fill the universe with scribblings. Writing a Turing machine to parse C++ would be an equally impossible undertaking. Even writing a C++ parser as an ordinary program is difficult, and as far as I know none has been proven correct. This is why the standard does not attempt to provide a complete formal grammar, and why it chooses to write some of the parsing rules in technical English.
What looks like a formal grammar in the C++ standard is not the complete formal definition of the syntax of the C++ language. It's not even the complete formal definition of the language after preprocessing, which might be easier to formalize. (That wouldn't be the language, though: the C++ language as defined by the standard includes the preprocessor, and the operation of the preprocessor is described algorithmically, since it would be extremely hard to describe in any grammatical formalism. That is also the section of the standard where lexical decomposition is described, including the rules under which it must be applied more than once.)
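One concrete case of lexical analysis being applied more than once (my example, not the standard's wording): after the ## operator pastes two preprocessing tokens together, the resulting spelling has to be re-scanned as a new preprocessing token before normal parsing can proceed.

#define PASTE(a, b) a##b

// The paste produces the spelling "counter", which must be re-lexed into a
// single identifier token; the line is equivalent to: int counter = 0;
int PASTE(coun, ter) = 0;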
The various grammars (two overlapping grammars for lexical analysis, one applied before preprocessing and the other, if necessary, afterwards, plus the "syntactic" grammar) are collected in Appendix A, with this important note:
This summary of C++ syntax is intended to be an aid to comprehension. It is not an exact statement of the language. In particular, the grammar described here accepts a superset of valid C++ constructs. Disambiguation rules (6.8, 7.1, 10.2) must be applied to distinguish expressions from declarations. Further, access control, ambiguity, and type rules must be used to weed out syntactically valid but meaningless constructs.
Finally, here's the promised program. The line in main() marked // Syntax error if not prime is syntactically correct if and only if the N in IsPrime<N> is prime. Otherwise, typen is an integer, not a template, so typen<1>() is parsed as (typen<1)>(), which is syntactically incorrect because () on its own is not a syntactically valid expression.
template<bool V> struct answer { answer(int) {} bool operator()() { return V; } };

template<bool no, bool yes, int f, int p> struct IsPrimeHelper
  : IsPrimeHelper<p % f == 0, f * f >= p, f + 2, p> {};
template<bool yes, int f, int p> struct IsPrimeHelper<true, yes, f, p> { using type = answer<false>; };
template<int f, int p> struct IsPrimeHelper<false, true, f, p> { using type = answer<true>; };
template<int I> using IsPrime = typename IsPrimeHelper<!(I&1), false, 3, I>::type;

template<int I>
struct X { static const int i = I; int a[i]; };

template<typename A> struct foo;
template<> struct foo<answer<true>> {
  template<int I> using typen = X<I>;
};
template<> struct foo<answer<false>> {
  static const int typen = 0;
};

int main() {
  auto b = foo<IsPrime<234799>>::typen<1>(); // Syntax error if not prime
  return 0;
}
[1] To put it more technically, every production in a context-sensitive grammar must be of the form:
αAβ → αγβ
where A is a non-terminal, α and β are possibly empty sequences of grammar symbols, and γ is a non-empty sequence. (Grammar symbols may be either terminals or non-terminals.)
This can be read as A → γ only in the context [α, β]. In a context-free (Type 2) grammar, α and β must be empty.
It turns out that you can also restrict grammars with the "monotonic" restriction, where every production must be of the form:
α → β
where 0 < |α| ≤ |β| (|α| means "the length of α").
It's possible to prove that the set of languages recognized by monotonic grammars is exactly the same as the set of languages recognized by context-sensitive grammars, and it's often the case that it's easier to base proofs on monotonic grammars. Consequently, it's pretty common to see "context-sensitive" used as though it meant "monotonic".
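For example, here is the standard textbook monotonic grammar for { aⁿbⁿcⁿ : n ≥ 1 }, a language which is not context-free. No production shortens the sentential form, but the production CB → BC is not in the strict αAβ → αγβ shape:

S → aSBC | aBC
CB → BC
aB → ab
bB → bb
bC → bc
cC → cc

A sample derivation: S → aSBC → aaBCBC → aaBBCC → aabBCC → aabbCC → aabbcC → aabbcc.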