Determine optimization level in preprocessor?
Some system-specific preprocessor macros exist, depending on your target. For example, the Microchip-specific XC16 variant of gcc (currently based on gcc 4.5.1) has the preprocessor macro __OPTIMIZATION_LEVEL__, which takes on the values 0, 1, 2, s, or 3.
Note that overriding the optimization level for a specific routine, e.g. with __attribute__((optimize(0))), does not change the value of __OPTIMIZE__ or __OPTIMIZATION_LEVEL__ within that routine.
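As a minimal sketch, a translation unit like the following reports which of these macros your toolchain defines; __OPTIMIZATION_LEVEL__ should only be expected on XC16, while plain gcc gives you __OPTIMIZE__ (and __OPTIMIZE_SIZE__ for -Os):

/* Sketch: report the optimization-related macros at run time. */
#include <stdio.h>

#define STR_(x) #x
#define STR(x)  STR_(x)

int main(void)
{
#ifdef __OPTIMIZE__
    puts("__OPTIMIZE__ is defined: built with optimization");
#else
    puts("__OPTIMIZE__ is not defined: built without optimization");
#endif

#ifdef __OPTIMIZATION_LEVEL__
    /* XC16 only; stringified so the value s (from -Os) prints as well as 0..3. */
    puts("__OPTIMIZATION_LEVEL__ = " STR(__OPTIMIZATION_LEVEL__));
#endif
    return 0;
}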
I believe it is not possible to know directly the optimization level used to compile the software, as it is not in the list of defined preprocessor symbols.
You could rely on -DNDEBUG (no debug), which is conventionally used to disable assertions in release code, and enable your "debug" code path when it is not defined.
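A minimal sketch of that pattern, with a hypothetical process() function used only for illustration:

#include <assert.h>
#include <stdio.h>

void process(int value)
{
    assert(value >= 0);                                 /* compiled out when NDEBUG is defined */
#ifndef NDEBUG
    fprintf(stderr, "debug: processing %d\n", value);   /* "debug" code path */
#endif
    /* ... real work ... */
}

int main(void)
{
    process(42);
    return 0;
}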
However, I believe a better approach is to have a project-wide set of symbols local to your project and let the user choose what to use explicitly:
MYPROJECT_DNDEBUG
MYPROJECT_OPTIMIZE
MYPROJECT_OPTIMIZE_AGGRESSIVELY
This makes debugging the differences in behavior between release and debug builds much easier, as you can turn the different behaviors on and off incrementally.
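A minimal sketch of how these switches might be wired up; only the three MYPROJECT_* symbol names come from the list above, and MYPROJECT_TRACE and MYPROJECT_CACHE_SIZE are hypothetical helpers. A typical invocation would be something like gcc -O2 -DMYPROJECT_OPTIMIZE -DMYPROJECT_DNDEBUG -c foo.c:

#include <stdio.h>

#ifdef MYPROJECT_DNDEBUG
#  define MYPROJECT_TRACE(msg) ((void)0)                    /* release: trace compiled out */
#else
#  define MYPROJECT_TRACE(msg) fprintf(stderr, "trace: %s\n", (msg))
#endif

#if defined(MYPROJECT_OPTIMIZE_AGGRESSIVELY)
enum { MYPROJECT_CACHE_SIZE = 4096 };                       /* trade memory for speed */
#elif defined(MYPROJECT_OPTIMIZE)
enum { MYPROJECT_CACHE_SIZE = 256 };
#else
enum { MYPROJECT_CACHE_SIZE = 16 };                         /* small, easy to inspect in a debugger */
#endif

int main(void)
{
    MYPROJECT_TRACE("starting up");
    printf("cache size: %d\n", MYPROJECT_CACHE_SIZE);
    return 0;
}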
I don't know if this is a clever hack, but it is a hack.
$ gcc -Xpreprocessor -dM -E - < /dev/null > 1
$ gcc -Xpreprocessor -dM -O -E - < /dev/null > 2
$ diff 1 2
53a54
> #define __OPTIMIZE__ 1
68a70
> #define _FORTIFY_SOURCE 2
154d155
< #define __NO_INLINE__ 1
clang didn't produce the _FORTIFY_SOURCE line.
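Based on that diff, one rough sketch is to key code off the macros that appear or disappear with -O: __OPTIMIZE__ shows up and __NO_INLINE__ goes away. Treat this as a heuristic only, since -fno-inline also defines __NO_INLINE__ and the exact set varies by compiler and version:

#include <stdio.h>

int main(void)
{
#if defined(__OPTIMIZE__) && !defined(__NO_INLINE__)
    puts("looks like an optimized build");
#else
    puts("looks like an unoptimized (or -fno-inline) build");
#endif
    return 0;
}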