Is volatile bool for thread control considered wrong?
There are three major problems you are facing when multithreading:
1) Synchronization and thread safety. Variables that are shared between several threads must be protected from being written to by several threads at once, and from being read during a non-atomic write. Synchronization can only be done through a dedicated semaphore/mutex object which is itself guaranteed to be atomic. The volatile keyword does not help. (A minimal mutex sketch follows at the end of this answer.)
2) Instruction reordering. A CPU may change the order in which some instructions are executed to make code run faster. In a multi-CPU environment where one thread is executed per CPU, the CPUs reorder instructions without knowing that another CPU in the system is doing the same. Protection against such reordering is provided by memory barriers; these are explained well on Wikipedia. Memory barriers may be implemented either through dedicated memory barrier objects or through the system's semaphore/mutex object. A compiler could possibly choose to emit a memory barrier where the volatile keyword is used, but that would be a rather special exception and not the norm. I would never assume that the volatile keyword does this without verifying it in the compiler manual.
3) Compiler unawareness of callback functions. Just as with hardware interrupts, some compilers may not be aware that a callback function has been executed and has updated a value in the middle of code execution. You can end up with code like this:
// main
x = true;
while (something)
{
    if (x == true)
    {
        do_something();
    }
    else
    {
        do_something_else();
        /* The code may never get here: the compiler doesn't realize that x
           was changed by the callback. Or worse, the compiler's optimizer
           could decide to remove this branch from the program entirely, as
           it thinks that x could never be false when the program reaches
           this point. */
    }
}

// thread callback function:
void thread (void)
{
    x = false;
}
Note that this problem only appears on some compilers, depending on their optimizer settings. This particular problem is solved by the volatile keyword.
So the answer to the question is: in a multi-threaded program, the volatile keyword does not help with thread synchronization/safety, it most likely does not act as a memory barrier, but it can prevent dangerous assumptions by the compiler's optimizer.
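To make the first two points concrete, here is a minimal sketch (my own, assuming C++11 and a hypothetical x_mutex) of how the shared flag from the example above could be protected with a mutex; locking and unlocking a std::mutex provides both mutual exclusion and the necessary memory barriers, and the compiler cannot cache x across the lock:

#include <mutex>

bool x = true;          // the shared flag from the example above
std::mutex x_mutex;     // hypothetical mutex guarding x

// Reader side (the while loop in main):
bool read_x()
{
    std::lock_guard<std::mutex> lock(x_mutex);   // lock/unlock act as memory barriers
    return x;
}

// Writer side (the thread callback):
void write_x(bool value)
{
    std::lock_guard<std::mutex> lock(x_mutex);
    x = value;
}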
Using volatile is enough only on single cores, where all threads use the same cache. On multi-cores, if stop() is called on one core and run() is executing on another, it might take some time for the CPU caches to synchronize, which means two cores might see two different views of isRunning_. This means run() will keep running for a while after it has been stopped.
If you use synchronization mechanisms, they will ensure all caches get the same values, at the price of stalling the program for a while. Whether performance or correctness is more important to you depends on your actual needs.
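As a rough sketch (my own, reusing the names isRunning_, run() and stop() from above, assuming C++11), replacing the volatile flag with std::atomic<bool> gives you the visibility guarantee without a full mutex:

#include <atomic>

class Worker
{
    std::atomic<bool> isRunning_{true};   // instead of "volatile bool isRunning_"
public:
    void run()
    {
        while (isRunning_.load(std::memory_order_acquire))
        {
            // ... do one unit of work ...
        }
    }
    void stop()
    {
        isRunning_.store(false, std::memory_order_release);   // becomes visible to the core executing run()
    }
};

Both the mutex and the atomic are synchronization mechanisms in the sense above; for a single flag, the atomic is simply the cheaper of the two.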
volatile can be used for such purposes. However, this is an extension to standard C++ by Microsoft:
Microsoft Specific
Objects declared as volatile are (...)
- A write to a volatile object (volatile write) has Release semantics; (...)
- A read of a volatile object (volatile read) has Acquire semantics; (...)
This allows volatile objects to be used for memory locks and releases in multithreaded applications. (emphasis added)
That is, as far as I understand, when you use the Visual C++ compiler, a volatile bool is for most practical purposes an atomic<bool>.
It should be noted that newer VS versions add a /volatile switch that controls this behavior, so this only holds if /volatile:ms is active.
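If you cannot rely on /volatile:ms, the portable ISO C++ way to express the same acquire/release semantics is an explicit atomic. A minimal sketch of my reading of the quoted guarantee, with a hypothetical flag:

#include <atomic>

std::atomic<bool> flag{false};   // hypothetical flag

void writer()
{
    flag.store(true, std::memory_order_release);         // what a "volatile write" gives you under /volatile:ms
}

void reader()
{
    bool value = flag.load(std::memory_order_acquire);   // what a "volatile read" gives you under /volatile:ms
    (void)value;
}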
You don't need a synchronized variable, but rather an atomic variable. Luckily, you can just use std::atomic<bool>.
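For example, a minimal stop-flag sketch (assuming C++11, with hypothetical names quit and worker):

#include <atomic>
#include <thread>

std::atomic<bool> quit{false};    // hypothetical shared flag

void worker()
{
    while (!quit.load())          // atomic read, sequentially consistent by default
    {
        // ... do work ...
    }
}

int main()
{
    std::thread t(worker);
    // ... run for a while ...
    quit.store(true);             // atomic write, guaranteed to become visible to worker()
    t.join();
}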
The key issue is that if more than one thread accesses the same memory simultaneously and at least one of them writes, then unless the access is atomic, your entire program ceases to be in a well-defined state. Perhaps you're lucky with a bool, which may happen to be updated atomically anyway, but the only way to be certain that you're doing it right is to use atomic variables.
"Seeing codebases you work in" is probably not a very good measure when it comes to learning concurrent programming. Concurrent programming is fiendishly difficult and very few people understand it fully, and I'm willing to bet that the vast majority of homebrew code (i.e. not using dedicated concurrent libraries throughout) is incorrect in some way. The problem is that those errors may be extremely hard to observe or reproduce, so you might never know.
Edit: You aren't saying in your question how the bool is getting updated, so I am assuming the worst. If you wrap your entire update operation in a global lock, for instance, then of course there's no concurrent memory access.