benchmarking, code reordering, volatile
A colleague pointed out that I should declare the start and stop variables as volatile to avoid code reordering.
Sorry, but your colleague is wrong.
The compiler does not reorder calls to functions whose definitions are not available at compile time. Simply imagine the hilarity that would ensue if the compiler reordered calls such as fork and exec, or moved other code around them.
In other words, any call to a function with no visible definition is a compile-time memory barrier: the compiler does not move subsequent statements before the call or prior statements after it.
In your code, calls to std::clock end up calling a function whose definition is not available.
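To make the point concrete, here is a minimal sketch of that pattern. The workload function `work` and the helper `time_work` are placeholders of my own, not from the question; the point is only that `std::clock` has no definition visible in this translation unit, so the compiler cannot move the timed work across either call.

```cpp
#include <ctime>

// Hypothetical workload; the name and body are placeholders.
unsigned long work(unsigned long n) {
    unsigned long acc = 0;
    for (unsigned long i = 0; i < n; ++i)
        acc += i * i;
    return acc;
}

// std::clock's definition is not available here, so each call acts as a
// compile-time barrier: work(n) cannot be hoisted above start or sunk
// below stop.
double time_work(unsigned long n, unsigned long* out) {
    std::clock_t start = std::clock();  // opaque call: compile-time barrier
    *out = work(n);
    std::clock_t stop = std::clock();   // opaque call: compile-time barrier
    return double(stop - start) / CLOCKS_PER_SEC;
}
```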
I cannot recommend enough watching atomic<> Weapons: The C++ Memory Model and Modern Hardware because it discusses misconceptions about (compile-time) memory barriers and volatile, among many other useful things.
Nevertheless, I added volatile and found that not only did the benchmark take significantly longer, it also was wildly inconsistent from run to run. Without volatile (and getting lucky to ensure the code wasn't reordered), the benchmark consistently took 600-700 ms. With volatile, it often took 1200 ms and sometimes more than 5000 ms.
Not sure if volatile is to blame here.
The reported run-time depends on how the benchmark is run. Make sure you disable CPU frequency scaling so that the CPU does not turn on turbo mode or switch frequencies in the middle of the run. Also, micro-benchmarks should be run as real-time priority processes to avoid scheduling noise. It could be that during another run some background file indexer starts competing with your benchmark for CPU time. See this for more details.
A good practice is to measure the time it takes to execute the function a number of times and report min/avg/median/max/stdev/total time numbers. A high standard deviation may indicate that the above preparations were not performed. The first run is often the longest because the CPU cache may be cold, incurring many cache misses and page faults, and because dynamic symbols from shared libraries are resolved on the first call (lazy symbol resolution is the default run-time linking mode on Linux, for example); subsequent calls execute with much less overhead.
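A minimal sketch of that reporting practice, assuming std::chrono; the helper names (`time_runs`, `median`) are my own, not a standard API:

```cpp
#include <algorithm>
#include <chrono>
#include <vector>

// Median of a sample; takes the vector by value so it can sort a copy.
double median(std::vector<double> v) {
    std::sort(v.begin(), v.end());
    std::size_t n = v.size();
    return n % 2 ? v[n / 2] : (v[n / 2 - 1] + v[n / 2]) / 2.0;
}

// Time `runs` executions of f and return each duration in milliseconds.
template <class F>
std::vector<double> time_runs(F f, int runs) {
    std::vector<double> ms;
    for (int i = 0; i < runs; ++i) {
        auto t0 = std::chrono::steady_clock::now();
        f();                                  // function under test
        auto t1 = std::chrono::steady_clock::now();
        ms.push_back(std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
    return ms;  // compute min/avg/median/max/stdev/total from this sample
}
```

From the returned vector you can report min (the most robust single number), median, and the spread; a large spread suggests the run was noisy in the ways described above.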
The usual way to prevent reordering is a compiler barrier, i.e. asm volatile ("":::"memory"); (with gcc). This is an asm instruction which does nothing, but we tell the compiler it will clobber memory, so the compiler is not permitted to reorder code across it. The cost of this is only the actual cost of removing the reorder, which is obviously not the case for changing the optimisation level etc. as suggested elsewhere.
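A small sketch of that barrier in use; the function names are mine, and the empty asm emits no instructions at all, so the only effect is on what the compiler is allowed to move:

```cpp
#include <cstdint>

// gcc/clang compile-time barrier: the asm body is empty (zero instructions),
// but the "memory" clobber forbids the compiler from moving memory accesses
// across it.
inline void compiler_barrier() {
    asm volatile("" ::: "memory");
}

uint64_t timed_sum(const uint64_t* a, int n) {
    compiler_barrier();          // memory ops above stay above this point
    uint64_t s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    compiler_barrier();          // memory ops below stay below this point
    return s;
}
```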
I believe _ReadWriteBarrier is the equivalent for Microsoft compilers.
Per Maxim Yegorushkin's answer, reordering is unlikely to be the cause of your issues though.
Related problem: how to stop the compiler from hoisting a tiny repeated calculation out of a loop
I couldn't find this anywhere - so adding my own answer 11 years after the question was asked ;).
Using volatile on variables is not what you want for that. It causes the compiler to load and store those variables from and to RAM every single time (assuming there is a side effect that must be preserved: i.e., good for I/O registers). When you are benchmarking you are not interested in measuring how long it takes to get something from memory or write it there. Often you just want your variable to be in CPU registers.
volatile is usable if you assign to it once outside a loop that doesn't get optimized away (like summing an array), as an alternative to printing the result (like the long-running function in the question). But not inside a tiny loop; that will introduce store/reload instructions and store-forwarding latency.
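A sketch of that "assign once, outside the loop" pattern; the names (`sum_array`, `sink`, `run_benchmark`) are placeholders of my own:

```cpp
#include <cstdint>

// The loop body runs entirely in registers; nothing volatile inside it.
uint64_t sum_array(const uint64_t* a, int n) {
    uint64_t s = 0;
    for (int i = 0; i < n; ++i)
        s += a[i];
    return s;
}

volatile uint64_t sink;          // one volatile store, not one per iteration

void run_benchmark(const uint64_t* a, int n) {
    sink = sum_array(a, n);      // single store keeps the work from being
                                 // optimized away, like printing the result
}
```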
I think the ONLY way to stop your compiler from optimizing your benchmark code to hell is by using asm. This allows you to fool the compiler into thinking it doesn't know anything about your variables' content or usage, so it has to do everything every single time, as often as your loop asks it to.
For example, if I wanted to benchmark m & -m where m is some uint64_t, I could try:
uint64_t const m = 0x0000080e70100000UL;
for (int i = 0; i < loopsize; ++i)
{
uint64_t result = m & -m;
}
The compiler would obviously say: I'm not even going to calculate that, since you're not using the result. That is, it would actually do:
for (int i = 0; i < loopsize; ++i)
{
}
Then you can try:
uint64_t const m = 0x0000080e70100000UL;
static uint64_t volatile result;
for (int i = 0; i < loopsize; ++i)
{
result = m & -m;
}
and the compiler says: ok, so you want me to write to result every time, and does
uint64_t const m = 0x0000080e70100000UL;
uint64_t tmp = m & -m;
static uint64_t volatile result;
for (int i = 0; i < loopsize; ++i)
{
result = tmp;
}
spending a lot of time writing to the memory address of result loopsize times, just as you asked.
Finally you could also make m volatile, but the result would look like this in assembly:
507b: ba e8 03 00 00 mov $0x3e8,%edx
# top of loop
5080: 48 8b 05 89 ef 20 00 mov 0x20ef89(%rip),%rax # 214010 <m_test>
5087: 48 8b 0d 82 ef 20 00 mov 0x20ef82(%rip),%rcx # 214010 <m_test>
508e: 48 f7 d8 neg %rax
5091: 48 21 c8 and %rcx,%rax
5094: 48 89 44 24 28 mov %rax,0x28(%rsp)
5099: 83 ea 01 sub $0x1,%edx
509c: 75 e2 jne 5080 <main+0x120>
Reading from memory twice and writing to it once, besides the requested calculation with registers.
The correct way to do this is therefore:
for (int i = 0; i < loopsize; ++i)
{
uint64_t result = m & -m;
asm volatile ("" : "+r" (m) : "r" (result));
}
which results in the assembly code (from gcc8.2 on the Godbolt compiler explorer):
# gcc8.2 -O3 -fverbose-asm
movabsq $8858102661120, %rax #, m
movl $1000, %ecx #, ivtmp_9 # induction variable tmp_9
.L2:
mov %rax, %rdx # m, tmp91
neg %rdx # tmp91
and %rax, %rdx # m, result
# asm statement here, m=%rax result=%rdx
subl $1, %ecx #, ivtmp_9
jne .L2
ret
Doing exactly the three requested assembly instructions inside the loop, plus a sub and jne for the loop overhead.
The trick here is that by using the asm volatile statement (see footnote 1) we tell the compiler:

"r" input operand: it uses the value of result as input, so the compiler has to materialize it in a register.

"+r" input/output operand: m stays in the same register but is (potentially) modified.

volatile: it has some mysterious side effect and/or is not a pure function of the inputs; the compiler must execute it as many times as the source does.

This forces the compiler to leave your test snippet alone and inside the loop. See the gcc manual's Extended Asm #Volatile section.
footnote 1: The volatile is required here, or the compiler will turn this into an empty loop. Non-volatile asm (with any output operands) is considered a pure function of its inputs that can be optimized away if the result is unused, or CSEd to run only once if used multiple times with the same input.
Everything below is not mine-- and I do not necessarily agree with it. --Carlo Wood
If you had used asm volatile ("" : "=r" (m) : "r" (result)); (with an "=r" write-only output), the compiler might choose the same register for m and result, creating a loop-carried dependency chain that tests the latency, not throughput, of the calculation.
From that, you'd get this asm:
5077: ba e8 03 00 00 mov $0x3e8,%edx
507c: 0f 1f 40 00 nopl 0x0(%rax) # alignment padding
# top of loop
5080: 48 89 e8 mov %rbp,%rax # copy m
5083: 48 f7 d8 neg %rax # -m
5086: 48 21 c5 and %rax,%rbp # m &= -m instead of using the tmp as the destination.
5089: 83 ea 01 sub $0x1,%edx
508c: 75 f2 jne 5080 <main+0x120>
This will run at 1 iteration per 2 or 3 cycles (depending on whether your CPU has mov-elimination or not.) The version without a loop-carried dependency can run at 1 per clock cycle on Haswell and later, and Ryzen. Those CPUs have the ALU throughput to run at least 4 uops per clock cycle.
This asm corresponds to C++ that looks like this:
for (int i = 0; i < loopsize; ++i)
{
m = m & -m;
}
By misleading the compiler with a write-only output constraint, we've created asm that doesn't look like the source (which looked like it was computing a new result from a constant every iteration, not using result as an input to the next iteration).
You might want to microbenchmark latency, so you can more easily detect the benefit of compiling with -mbmi or -march=haswell to let the compiler use blsi %rax, %rax and calculate m &= -m in one instruction. But it's easier to keep track of what you're doing if the C++ source has the same dependency as the asm, instead of fooling the compiler into introducing a new dependency.
Volatile ensures one thing, and one thing only: reads from a volatile variable will be read from memory every time -- the compiler won't assume that the value can be cached in a register. And likewise, writes will be written through to memory. The compiler won't keep it around in a register "for a while, before writing it out to memory".
In order to prevent compiler reordering you may use so called compiler fences. MSVC includes 3 compiler fences:
_ReadWriteBarrier() - full fence
_ReadBarrier() - two-sided fence for loads
_WriteBarrier() - two-sided fence for stores
ICC includes __memory_barrier() full fence.
Full fences are usually the best choice because there is no need for finer granularity at this level (compiler fences are basically costless at run-time).
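A sketch of wrapping these in one portable macro, assuming only the MSVC intrinsic named above and its gcc/clang equivalent (note that newer MSVC documentation steers towards <atomic> instead of _ReadWriteBarrier, but it still illustrates the idea):

```cpp
// Portable full compiler fence: no instructions are emitted on either
// toolchain; only compiler reordering across the fence is forbidden.
#if defined(_MSC_VER)
#include <intrin.h>
#define COMPILER_FENCE() _ReadWriteBarrier()
#else
#define COMPILER_FENCE() asm volatile("" ::: "memory")
#endif

int fenced_add(int a, int b) {
    COMPILER_FENCE();   // no memory operation may be moved across this
    int r = a + b;
    COMPILER_FENCE();
    return r;
}
```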
Statement reordering (which most compilers do when optimization is enabled) is also the main reason why certain programs fail to operate correctly when compiled with compiler optimizations.
I suggest reading http://preshing.com/20120625/memory-ordering-at-compile-time to see the potential issues we can face with compiler reordering.
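As a portable, standard alternative to the compiler-specific fences above (my addition, not from the linked article's text), C++11 provides std::atomic_signal_fence, which constrains compiler reordering only and emits no hardware fence instruction:

```cpp
#include <atomic>

int g_data;   // zero-initialized globals, for illustration only
int g_ready;

void publish(int v) {
    g_data = v;
    // Compiler-only fence: the store to g_data may not be reordered
    // (by the compiler) after the store to g_ready. No instruction
    // is emitted for it, unlike std::atomic_thread_fence.
    std::atomic_signal_fence(std::memory_order_release);
    g_ready = 1;
}
```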