DMB instructions in an interrupt-safe FIFO
TL;DR: yes, LL/SC (STREX/LDREX) can be good for interrupt latency compared to disabling interrupts, by making an atomic RMW interruptible with a retry.
This may come at the cost of throughput, because apparently disabling / re-enabling interrupts on ARMv7 is very cheap (maybe 1 or 2 cycles each for cpsid if / cpsie if), especially if you can unconditionally enable interrupts instead of saving and restoring the old state. (Temporarily disable interrupts on ARM).
The extra throughput costs are: LDREX/STREX, if they are any slower than LDR / STR on Cortex-M4; a cmp/bne (not-taken in the successful case); and the whole loop body running again any time the loop has to retry. (Retry should be very rare; it only happens if an interrupt actually arrives while another interrupt handler is in the middle of an LL/SC.)
C11 compilers like gcc don't have a special-case mode for uniprocessor systems or single-threaded code, unfortunately. So they don't know how to do code-gen that takes advantage of the fact that anything running on the same core will see all our operations in program order up to a certain point, even without any barriers.
(The cardinal rule of out-of-order execution and memory reordering is that it preserves the illusion of a single thread or single core running instructions in program order.)
The back-to-back dmb instructions separated only by a couple of ALU instructions are redundant even on a multi-core system for multi-threaded code. This is a gcc missed-optimization, because current compilers do basically no optimization on atomics. (Better to be safe and slowish than to risk ever being too weak: it's hard enough to reason about, test, and debug lockless code without worrying about possible compiler bugs.)
Atomics on a single-core CPU
You can vastly simplify it in this case by masking after an atomic_fetch_add, instead of simulating an atomic add with earlier rollover using CAS. (Readers must then mask as well, but that's very cheap.)
And you can use memory_order_relaxed. If you want reordering guarantees against an interrupt handler, use atomic_signal_fence to enforce compile-time ordering, without any asm barrier instructions against runtime reordering. User-space POSIX signals are asynchronous within the same thread in exactly the same way that interrupts are asynchronous within the same core.
#include <stdatomic.h>
#include <stdint.h>

// Assumed definitions to make this self-contained (FIFO_LEN = 1024 matches
// the ubfx #0, #10 in the asm below):
#define FIFO_LEN 1024
static _Atomic int32_t _head;

// readers must also mask _head & (FIFO_LEN - 1) before use
// Uniprocessor but with an atomic RMW:
int32_t acquire_head_atomicRMW_UP(void)
{
atomic_signal_fence(memory_order_seq_cst); // zero asm instructions, just compile-time
int32_t old_h = atomic_fetch_add_explicit(&_head, 1, memory_order_relaxed);
atomic_signal_fence(memory_order_seq_cst);
int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);
return new_h;
}
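On the reader side, the masking mentioned in that first comment is a single AND of the raw counter. A sketch (my assumption of a typical consumer; how the index is then used against the FIFO buffer is up to the rest of the code):

// Reader-side sketch: load the raw counter and mask before using it as
// a buffer index. Works because FIFO_LEN is a power of 2, so the modulo
// arithmetic stays correct even across uint32_t wraparound of _head.
int32_t head_idx = atomic_load_explicit(&_head, memory_order_relaxed)
                   & (FIFO_LEN - 1);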
On the Godbolt compiler explorer:
@@ gcc8.2 -O3 with your same options.
acquire_head_atomicRMW_UP:
ldr r3, .L4 @@ load the static address from a nearby literal pool
.L2:
ldrex r0, [r3]
adds r2, r0, #1
strex r1, r2, [r3]
cmp r1, #0
bne .L2 @@ LL/SC retry loop, not load + inc + CAS-with-LL/SC
adds r0, r0, #1 @@ add again: missed optimization to not reuse r2
ubfx r0, r0, #0, #10 @@ mask with FIFO_LEN - 1: keep only the low 10 bits
bx lr
.L4:
.word _head
Unfortunately there's no way I know of in C11 or C++11 to express an LL/SC atomic RMW that contains an arbitrary set of operations, like add and mask, so we could get the ubfx inside the loop and make the masked value part of what gets stored to _head. There are compiler-specific intrinsics for LDREX/STREX, though: Critical sections in ARM.
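For example, with the CMSIS __LDREXW / __STREXW intrinsics (my assumptions: CMSIS headers are available, normally pulled in via the device header, and _head can be treated as a volatile uint32_t), a sketch that keeps the mask inside the retry loop could look like this:

#include <stdint.h>
#include "cmsis_gcc.h"   // assumption: CMSIS providing __LDREXW/__STREXW

// Sketch, not tested: the masked value is computed inside the LL/SC loop,
// so the wrapped index is what actually gets stored back.
int32_t acquire_head_ldrex_strex(volatile uint32_t *head)
{
    uint32_t new_h;
    do {
        uint32_t old_h = __LDREXW(head);        // LL: sets the exclusive monitor
        new_h = (old_h + 1) & (FIFO_LEN - 1);   // add and mask inside the loop
    } while (__STREXW(new_h, head) != 0);       // SC: returns 0 on success, else retry
    return (int32_t)new_h;
}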
This is safe because _Atomic integer types are guaranteed to be 2's complement with well-defined overflow = wraparound behaviour. (int32_t is already guaranteed to be 2's complement because it's one of the fixed-width types, but the no-UB wraparound is only for _Atomic.) I'd have used uint32_t, but we get the same asm.
Safely using STREX/LDREX from inside an interrupt handler:
ARM® Synchronization Primitives (from 2009) has some details about the ISA rules that govern LDREX/STREX. Running an LDREX initializes the "exclusive monitor" to detect modification by other cores (or by other non-CPU things in the system? I don't know). Cortex-M4 is a single-core system.
You can have a global monitor for memory shared between multiple CPUs, and local monitors for memory that's marked non-shareable. That documentation says "If a region configured as Shareable is not associated with a global monitor, Store-Exclusive operations to that region always fail, returning 0 in the destination register." So if STREX seems to always fail (so you get stuck in a retry loop) when you test your code, that might be the problem.
An interrupt does not abort a transaction started by an LDREX. If you were context-switching to another context and resuming something that might have stopped right before a STREX, you could have a problem. ARMv6K introduced clrex for this; before that, older ARM would use a dummy STREX to a dummy location.
See When is CLREX actually needed on ARM Cortex M7?, which makes the same point I'm about to make: that CLREX is often not needed in an interrupt situation, when not context-switching between threads.
(Fun fact: a more recent answer on that linked question points out that Cortex-M7 (or Cortex-M in general?) automatically clears the monitor on interrupt, meaning clrex is never necessary in interrupt handlers. The reasoning below can still apply to older single-core ARM CPUs with a monitor that doesn't track addresses, unlike in multi-core CPUs.)
But for this problem, the thing you're switching to is always the start of an interrupt handler. You're not doing pre-emptive multi-tasking. So you can never switch from the middle of one LL/SC retry loop to the middle of another. As long as STREX fails the first time in the lower-priority interrupt when you return to it, that's fine.
That will be the case here because a higher-priority interrupt will only return after it does a successful STREX (or didn't do any atomic RMWs at all).
So I think you're OK even without using clrex from inline asm, or from an interrupt handler before dispatching to C functions. The manual says a Data Abort exception leaves the monitors architecturally undefined, so make sure you CLREX in that handler at least.
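A minimal sketch of that (my assumptions: a GNU-style inline-asm toolchain and the handler name; this isn't from the question's code):

// Hypothetical Data Abort handler: the monitor state is architecturally
// undefined after this exception, so clear it before returning.
void DataAbort_Handler(void)
{
    __asm__ volatile ("clrex" ::: "memory");
    // ... actual abort handling ...
}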
If an interrupt comes in while you're between an LDREX and STREX, the LL has loaded the old data in a register (and maybe computed a new value), but hasn't stored anything back to memory yet because STREX hadn't run.
The higher-priority code will LDREX, getting the same old_h value, then do a successful STREX of old_h + 1. (Unless it is also interrupted, but this reasoning works recursively.) This might possibly fail the first time through the loop, but I don't think so; even if it does, I don't think there can be a correctness problem, based on the ARM doc I linked. That doc mentions that the local monitor can be as simple as a state machine that just tracks LDREX and STREX instructions, letting a STREX succeed even if the previous exclusive load was for a different address. Assuming Cortex-M4's implementation is that simplistic, this is perfect for our case.
Running another LDREX for the same address while the CPU is already monitoring from a previous LDREX looks like it should have no effect. Performing an exclusive load to a different address would reset the monitor to open state, but for this it's always going to be the same address (unless you have other atomics in other code?)
Then (after doing some other stuff), the interrupt handler will return, restoring registers and jumping back to the middle of the lower-priority interrupt's LL/SC loop.
Back in the lower-priority interrupt, STREX will fail because the STREX in the higher-priority interrupt reset the monitor state. That's good: we need it to fail, because it would have stored the same value as the higher-priority interrupt that took its spot in the FIFO. The cmp / bne detects the failure and runs the whole loop again. This time it succeeds (unless interrupted again), reading the value stored by the higher-priority interrupt and storing & returning that + 1.
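As a concrete timeline of that scenario (a sketch of the interleaving, not a capture from real hardware):

@@ lower-priority handler             higher-priority handler
ldrex  r0, [r3]       @@ monitor set
adds   r2, r0, #1
  @@ --- interrupt taken here ---
@@                                    ldrex r0, [r3]     @@ same old value
@@                                    adds  r2, r0, #1
@@                                    strex r1, r2, [r3] @@ succeeds, clears monitor
@@                                    ...runs to completion, returns...
strex  r1, r2, [r3]   @@ fails: r1 = 1, monitor is open
bne    .L2            @@ retry: reload the updated _head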
So I think we can get away without a CLREX anywhere, because interrupt handlers always run to completion before returning to the middle of something they interrupted. And they always begin at the beginning.
Single-writer version
Or, if nothing else can be modifying that variable, you don't need an atomic RMW at all, just a pure atomic load, then a pure atomic store of the new value. (_Atomic for the benefit of any readers.)
Or if no other thread or interrupt touches that variable at all, it doesn't need to be _Atomic.
// If we're the only writer, and other threads can only observe:
// again using uniprocessor memory order: relaxed + signal_fence
int32_t acquire_head_separate_RW_UP(void) {
atomic_signal_fence(memory_order_seq_cst);
int32_t old_h = atomic_load_explicit(&_head, memory_order_relaxed);
int32_t new_h = (old_h + 1) & (FIFO_LEN - 1);
atomic_store_explicit(&_head, new_h, memory_order_relaxed);
atomic_signal_fence(memory_order_seq_cst);
return new_h;
}
acquire_head_separate_RW_UP:
ldr r3, .L7
ldr r0, [r3] @@ Plain atomic load
adds r0, r0, #1
ubfx r0, r0, #0, #10 @@ zero-extend low 10 bits
str r0, [r3] @@ Plain atomic store
bx lr
This is the same asm we'd get for non-atomic head.
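For comparison, here's the plain version (a sketch; _head_plain is a hypothetical non-atomic counterpart of _head, and this is only valid under the no-other-observers assumption above):

// Non-atomic version: compiles to the same ldr/adds/ubfx/str sequence,
// but is only safe if nothing else ever touches the variable.
static int32_t _head_plain;

int32_t acquire_head_plain(void)
{
    int32_t new_h = (_head_plain + 1) & (FIFO_LEN - 1);
    _head_plain = new_h;
    return new_h;
}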
Your code is written in a very non-"bare metal" way. Those generic atomic functions do not know whether the value read or stored is in internal memory, or is a hardware register located far from the core, connected via buses and sometimes write/read buffers.
That is the reason why the generic atomic functions have to place so many DMB instructions. Because you read or write an internal memory location, they are not needed at all (and the M4 has no internal cache, so this kind of strong precaution isn't needed either).
IMO it is enough to disable interrupts when you want to access the memory location atomically.
PS: stdatomic sees very rare use in bare-metal uC development.
The fastest way to guarantee exclusive access on an M4 uC is to disable and enable interrupts.
__disable_irq();
x++;
__enable_irq();
__ASM volatile ("cpsid i" : : : "memory");
 080053e8:   cpsid   i
x++;
 080053ea:   ldr     r2, [pc, #160]  ; (0x800548c <main+168>)
 080053ec:   ldrb    r3, [r2, #0]
 080053ee:   adds    r3, #1
 080053f0:   strb    r3, [r2, #0]
__ASM volatile ("cpsie i" : : : "memory");
This costs only 2 or 4 additional clocks for the two instructions together.
It guarantees atomicity and adds no unnecessary overhead.
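One caveat (my addition, using standard CMSIS intrinsics): __enable_irq() unconditionally re-enables interrupts, which is wrong if this code can itself run with interrupts already disabled, e.g. nested inside another critical section. A save/restore variant costs a little more:

// Nest-safe critical section: restore the previous interrupt mask
// instead of unconditionally re-enabling.
uint32_t primask = __get_PRIMASK();   // save current interrupt mask state
__disable_irq();
x++;                                  // the protected access
__set_PRIMASK(primask);               // re-enables only if enabled before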