Compile-time computation (C++ v. C)
Only one thing is certain: compile-time computation necessarily makes C++ compilers more complicated and compilation necessarily slower, because a compiler is required to perform the computation at compile time. See for example
constexpr int factorial(int n) {
return n <= 1 ? 1 : (n * factorial(n - 1));
}
int main(void) {
static_assert(factorial(10) == 3628800, "factorial(10) should be 3628800");
static_assert(factorial(3) == 42, "factorial(3) should be 42");
}
This must fail to compile because of the second static_assert (factorial(3) is 6, not 42), but not because of the first.
A C compiler does not require such complexity, because the C standard has no requirement that a compiler be able to evaluate a recursive function at compile time. A simple C compiler can very well assemble each statement to machine code separately, without having to remember what the previous statements did.
But that is not to say that no C compiler would do such computation at compile time. See this example:
#include <stdio.h>
int factorial(int n) {
return n <= 1 ? 1 : (n * factorial(n - 1));
}
int main(void) {
printf("%d\n", factorial(10));
}
When compiled as a C program with GCC 10.2 at -O3, thanks to the as-if rule, the program becomes
factorial:
mov eax, 1
cmp edi, 1
jle .L4
.L3:
mov edx, edi
sub edi, 1
imul eax, edx
cmp edi, 1
jne .L3
ret
.L4:
ret
.LC0:
.string "%d\n"
main:
sub rsp, 8
mov esi, 3628800
mov edi, OFFSET FLAT:.LC0
xor eax, eax
call printf
xor eax, eax
add rsp, 8
ret
which more directly corresponds to
#include <stdio.h>
unsigned factorial(unsigned n) {
unsigned i = 1;
while (n > 1) {
i *= n;
n --;
}
return i;
}
int main(void) {
printf("%d\n", 3628800);
}
i.e. the compiler not only flattened the recursion into a simple while
loop, but also evaluated the factorial of the constant, all without any special keywords.
Can compile-time computation be considered a key advantage of C++ vs. C?
Actually, what is important is not compilation, but software building.
Refer to the Wikipedia page on build automation.
Then, be aware that many software projects (including many open source projects on github or gitlab) generate C (or even C++) code from something more abstract (e.g. using software tools). A typical example is obviously parser generators (a.k.a. compiler-compilers) like GNU bison or ANTLR. Other examples are glue code generators like rpcgen
or SWIG. And GNU autoconf adapts your build to the software packages available on your computer. Notice that both Chicken-Scheme and Bigloo generate C code from Scheme source code. See of course this. In some cases huge C files are produced from tiny input (see also the XBM format). Maple is able to generate large C files, and there are cases where generating a lot of C code (e.g. half a million lines) makes sense, as explained in Pitrat's book Artificial Beings: the Conscience of a Conscious Machine and his blog.
Finally, whole-program optimization exists (see the -flto
flag in recent GCC for link-time optimization; in practice you would compile and link with gcc -Wall -O2 -flto
) and requires some compiler support at link time.
In some situations, the compile time is not that important (think e.g. of compiling Firefox, the Linux kernel, LibreOffice, Gnome, or GTK from its source code base), but the build time can last hours, or certainly dozens of minutes (because a lot of different translation units - concretely *.c
or *.cc
files - have to be compiled and then linked).
Google is rumored to consume hours of computer time internally to build most of their internal software.
Notice that the first C++ compilers (e.g. Cfront) were implemented as C code generators, and that a large piece of software such as the GCC compiler contains dozens of specialized C or C++ code generators. Try to build, on your laptop and from the available source code, a GCC cross-compiler targeting your Raspberry Pi board (which is too small and underpowered to compile GCC directly on it). The build instructions on LinuxFromScratch are relevant there.
For an example of a C program generating C code, see my manydl.c code for Linux, or my Bismon program described in this draft report. Past versions of the obsolete GCC MELT project generated a million lines of C or C++ code. manydl.c
is able to generate and then compile C code for days, and illustrates that dlopen(3) can be used many times. For an example of C++ software generating C++ on Linux, see my RefPerSys project. Look also into tunes.org for discussions related to metaprogramming and the generation of C or C++ code.
Consider also cross-compilation situations,
e.g. compiling C code for an Arduino or C++ code for your Raspberry Pi on your laptop, perhaps with GCC, or compiling code on your PC for a distant top500 supercomputer.
regarding C++ versus C
My understanding of the C++ standard n3337 is that compile-time computation is not specified there (but I don't claim to be a C++ expert). In particular, nothing forbids you from writing your own C++ interpreter (you could code that in C, in C++, in OCaml, in Java, etc.). Consider that idea as an interesting programming exercise (but read the Dragon book before trying).
My opinion is that a classroom of students learning C++ could be considered a C++ implementation, as specified in that C++ standard. A good way of teaching C++ is to ask the classroom about the semantics of several C++ programs, and that can be taught with pencil and paper or a whiteboard. I actually taught a course on operational semantics that way (at Paris 6 University); the board was black, and I used chalk of various colors.
Look also into software tools like Frama-C or Clang static analyzer. Both are open source, so you could study their source.
Is this one manner in which a C++ program can achieve better performance (e.g. speed) versus an equivalent C program?
That is your opinion, and I disagree. What makes you think that the runtime of OCaml or of SBCL would be faster (you should download and study their source code) if it had been written in C++? An interesting exercise could be to recode the tinyCC compiler in C++ (it compiles C, targets x86 32-bit and x86-64 on Linux, and is itself coded in C), and benchmark any improvement. That simple but clever compiler compiles C code very quickly, but performs too few optimizations.