What setup does REP do?
The quote that you have given only applies to the Nehalem microarchitecture (Intel Core i5, i7 and Xeon processors released in 2009 and 2010), and Intel is explicit about it.
Before Nehalem, REP MOVSB was even slower. Intel is silent on what happened in subsequent microarchitectures, but with the Ivy Bridge microarchitecture (processors released in 2012 and 2013) Intel introduced Enhanced REP MOVSB (we still need to check the corresponding CPUID bit), which allows memory to be copied fast.
The cheapest versions of later processors - the Kaby Lake "Celeron" and "Pentium", released in 2017 - don't have AVX, which could otherwise have been used for fast memory copy, but they still have Enhanced REP MOVSB. That's why REP MOVSB is very beneficial on processors released since 2013.
Surprisingly, Nehalem processors had a quite fast REP MOVSD/MOVSQ implementation (but not REP MOVSW/MOVSB) for very large blocks - just 4 cycles to copy each subsequent 64 bytes of data (if the data is aligned to cache-line boundaries) after a startup cost of 40 cycles - which is excellent when copying 256 bytes or more, and you don't need to use XMM registers!
Thus, on the Nehalem microarchitecture, REP MOVSB/MOVSW is almost useless, but REP MOVSD/MOVSQ is excellent when we need to copy more than 256 bytes of data and the data is aligned to cache-line boundaries.
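For illustration only (not part of the original answer), here is a minimal sketch of how such a REP MOVSQ copy could be invoked from C with GNU extended inline asm; the helper name is mine, and it assumes the byte count is a multiple of 8:

    #include <stddef.h>

    /* Hypothetical helper: copy n_qwords * 8 bytes with REP MOVSQ.
       RDI = destination, RSI = source, RCX = qword count. */
    static inline void copy_rep_movsq(void *dst, const void *src, size_t n_qwords)
    {
        __asm__ volatile("rep movsq"
                         : "+D"(dst), "+S"(src), "+c"(n_qwords)
                         :
                         : "memory");
    }

The "+D"/"+S"/"+c" constraints pin the destination, source and count to RDI/RSI/RCX, which is what the REP string instructions expect.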
On previous Intel microarchitectures (before 2008) the startup costs are even higher. Intel x86 processors have had "fast strings" since the Pentium Pro (P6) in 1996. The P6 fast strings took REP MOVSB and larger, and implemented them with 64 bit microcode loads and stores and a non-RFO (Read For Ownership) cache protocol. They did not violate memory ordering, unlike ERMSB in Ivy Bridge.
The Ice Lake microarchitecture, launched in September 2019, introduced Fast Short REP MOV (FSRM). This feature can be tested via a CPUID bit. It was intended to make strings of 128 bytes and less also quick, but, in fact, strings shorter than 64 bytes are still slower with REP MOVSB than with, for example, a simple 64-bit register copy. Besides that, FSRM is only implemented in 64-bit mode, not in 32-bit mode. At least on my i7-1065G7 CPU, REP MOVSB is only quick for small strings in 64-bit mode; in 32-bit mode, strings have to be at least 4 KB for REP MOVSB to start outperforming other methods.
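Both ERMSB and FSRM are announced via CPUID leaf 7, subleaf 0 (ERMSB in EBX bit 9, FSRM in EDX bit 4). A small sketch of how one might check them with GCC/Clang's <cpuid.h>; the helper name is mine, not from the original answer:

    #include <cpuid.h>
    #include <stdbool.h>

    /* Hypothetical helper: query CPUID leaf 7, subleaf 0 for the
       string-move feature bits: ERMSB = EBX bit 9, FSRM = EDX bit 4. */
    static void check_string_features(bool *has_ermsb, bool *has_fsrm)
    {
        unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

        *has_ermsb = false;
        *has_fsrm = false;
        if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
            *has_ermsb = (ebx >> 9) & 1;
            *has_fsrm = (edx >> 4) & 1;
        }
    }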
Here are test results for REP MOVS* when the source and destination were in the L1 cache, using blocks large enough not to be seriously affected by startup costs, but not so large as to exceed the L1 cache size. Source: http://users.atw.hu/instlatx64/
Yonah (2006-2008)
REP MOVSB 10.91 B/c
REP MOVSW 10.85 B/c
REP MOVSD 11.05 B/c
Nehalem (2009-2010)
REP MOVSB 25.32 B/c
REP MOVSW 19.72 B/c
REP MOVSD 27.56 B/c
REP MOVSQ 27.54 B/c
Westmere (2010-2011)
REP MOVSB 21.14 B/c
REP MOVSW 19.11 B/c
REP MOVSD 24.27 B/c
Ivy Bridge (2012-2013) - with Enhanced REP MOVSB
REP MOVSB 28.72 B/c
REP MOVSW 19.40 B/c
REP MOVSD 27.96 B/c
REP MOVSQ 27.89 B/c
SkyLake (2015-2016) - with Enhanced REP MOVSB
REP MOVSB 57.59 B/c
REP MOVSW 58.20 B/c
REP MOVSD 58.10 B/c
REP MOVSQ 57.59 B/c
Kaby Lake (2016-2017) - with Enhanced REP MOVSB
REP MOVSB 58.00 B/c
REP MOVSW 57.69 B/c
REP MOVSD 58.00 B/c
REP MOVSQ 57.89 B/c
As you see, the implementation of REP MOVS differs significantly from one microarchitecture to another.
According to Intel, on Nehalem, REP MOVSB startup costs for strings larger than 9 bytes are 50 cycles, while for REP MOVSW/MOVSD/MOVSQ they are 35 to 40 cycles - so REP MOVSB has larger startup costs; still, the tests show that the overall performance is worst for REP MOVSW, not REP MOVSB, on Nehalem and Westmere.
On Ivy Bridge, SkyLake and Kaby Lake, the results are the opposite for these instructions: REP MOVSB is faster than REP MOVSW/MOVSD/MOVSQ, albeit just slightly. On Ivy Bridge REP MOVSW is still a laggard, but on SkyLake and Kaby Lake REP MOVSW isn't worse than REP MOVSD/MOVSQ.
Please note that I have presented test results for both SkyLake and Kaby Lake, taken from the instlatx64 site, just for the sake of confirmation - these microarchitectures have essentially the same bytes-per-cycle figures.
Conclusion: you may use REP MOVSD/MOVSQ for very large memory blocks, since it produces good results on all Intel microarchitectures from Yonah to Kaby Lake. On Yonah and earlier, SSE copy may produce better results than REP MOVSD, but, for the sake of universality, REP MOVSD is preferred. Besides that, REP MOVS* may internally use cache optimizations that are not available to normal instructions.
As for REP MOVSB for very small strings (less than 9 bytes, or less than 4 bytes) - I would not even recommend it. On Kaby Lake, a single MOVSB (even without REP) takes 4 cycles; on Yonah it takes 5 cycles. Depending on context, you can do better with just normal MOVs.
The startup cost does not increase with size, as you have written. What increases is the latency of the overall instruction to complete the whole sequence of bytes - which is quite obvious: the more bytes you need to copy, the more cycles it takes, i.e. the overall latency grows, not the startup cost. Intel did not disclose the startup cost for small strings; it only specified it for strings of 76 bytes and more, for Nehalem. For example, take this data about Nehalem:
- The latency for REP MOVSB is 9 cycles if ECX < 4. So it takes exactly 9 cycles to copy any string of 1, 2 or 3 bytes. This is not that bad – for example, if you need to copy a tail and you don't want to use overlapping stores. Just 9 cycles to determine the size (between 1 and 3) and actually copy the data – it is hard to achieve this with normal instructions and all the branching involved – and for a 3-byte copy, if you didn't copy previous data, you will have to use 2 loads and 2 stores (word + byte), and since we have at most one store unit, we won't do that much faster with normal MOV instructions (see the sketch after this list).
- Intel is silent on what latency REP MOVSB has if ECX is between 4 and 9.
- Short strings (ECX <= 12): the latency of REP MOVSW/MOVSD/MOVSQ is about 20 cycles to copy the whole string – not just a startup cost of 20 cycles. So it takes about 20 cycles to copy a whole string of <= 12 bytes, thus we get a higher throughput per byte than with REP MOVSB with ECX < 4.
- ECX >= 76 with REP MOVSD/MOVSQ – yes, here we DO have a startup cost of 40 cycles, but this is more than reasonable, since we later copy each 64 bytes of data at just 4 cycles. I'm not an Intel engineer authorized to say WHY there is a startup cost, but I suppose it is because for these strings, REP MOVS* uses (according to Andy Glew's comments on an answer to Why are complicated memcpy/memset superior?, quoted in Peter Cordes' answer) a cache protocol feature that is not available to regular code. The same quote offers an explanation: "The large overhead for choosing and setting up the right method is mainly due to the lack of microcode branch prediction". There is also an interesting note that the Pentium Pro (P6) in 1996 implemented REP MOVS* with 64-bit microcode loads and stores and a no-RFO cache protocol - they did not violate memory ordering, unlike ERMSB in Ivy Bridge.
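Here is a minimal sketch (my own illustration, not from the original comparison) of the two tail-copy options from the first bullet, for a remaining count of 1 to 3 bytes; the helper names are hypothetical:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Option 1: let REP MOVSB handle the 1..3-byte tail; the size check is
       implicit in RCX (about 9 cycles on Nehalem per the bullet above). */
    static void copy_tail_rep_movsb(unsigned char *dst, const unsigned char *src,
                                    size_t n /* 1..3 */)
    {
        __asm__ volatile("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");
    }

    /* Option 2: branchy copy with plain loads/stores - at most a 2-byte chunk
       plus one byte, i.e. up to 2 loads and 2 stores as described above. */
    static void copy_tail_plain(unsigned char *dst, const unsigned char *src,
                                size_t n /* 1..3 */)
    {
        if (n & 2) {
            uint16_t w;
            memcpy(&w, src, 2);
            memcpy(dst, &w, 2);
            src += 2;
            dst += 2;
        }
        if (n & 1)
            *dst = *src;
    }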
Note that only rep movs and rep stos are fast. repe/ne cmps and scas on current CPUs only loop 1 element at a time. (https://agner.org/optimize/ has some perf numbers, like 2 cycles per RCX count for repe cmpsb). They still have some microcode startup overhead, though.
The rep movs microcode has several strategies to choose from. If the src and dest don't overlap closely, the microcoded loop can transfer in chunks of 64 bits or larger. (This is the so-called "fast strings" feature introduced with P6 and occasionally re-tuned for later CPUs that support wider loads/stores). But if dest is only one byte from src, rep movs has to produce the exact same result you'd get from that many separate movs instructions.
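As a tiny self-contained illustration (mine, not from the answer) of that byte-exact requirement: with dest one byte past src, rep movsb must behave like separate byte copies, so it propagates the first byte through the destination range.

    #include <stdio.h>
    #include <stddef.h>

    int main(void)
    {
        unsigned char buf[16] = "ABCDEFGHIJKLMNO";
        unsigned char *dst = buf + 1;        /* dest is one byte past src */
        const unsigned char *src = buf;
        size_t n = 14;

        __asm__ volatile("rep movsb"
                         : "+D"(dst), "+S"(src), "+c"(n)
                         :
                         : "memory");

        /* Each movsb reads the byte the previous one just wrote,
           so this prints "AAAAAAAAAAAAAAA". */
        printf("%s\n", buf);
        return 0;
    }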
So the microcode has to check for overlap, and probably for alignment (of src and dest separately, or relative alignment). It probably also chooses something based on small/medium/large counter values.
According to Andy Glew's comments on an answer to Why are complicated memcpy/memset superior?, conditional branches in microcode aren't subject to branch-prediction. So there's a significant penalty in startup cycles if the default not-taken path isn't the one actually taken, even for a loop that uses the same rep movs with the same alignment and size.
He supervised the initial rep string implementation in P6, so he should know. :)
REP MOVS uses a cache protocol feature that is not available to regular code. Basically like SSE streaming stores, but in a manner that is compatible with normal memory ordering rules, etc. // The "large overhead for choosing and setting up the right method" is mainly due to the lack of microcode branch prediction. I have long wished that I had implemented REP MOVS using a hardware state machine rather than microcode, which could have completely eliminated the overhead.
By the way, I have long said that one of the things that hardware can do better/faster than software is complex multiway branches.
Intel x86 have had "fast strings" since the Pentium Pro (P6) in 1996, which I supervised. The P6 fast strings took REP MOVSB and larger, and implemented them with 64 bit microcode loads and stores and a no-RFO cache protocol. They did not violate memory ordering, unlike ERMSB in iVB.
The big weakness of doing fast strings in microcode was (a) microcode branch mispredictions, and (b) the microcode fell out of tune with every generation, getting slower and slower until somebody got around to fixing it. Just like a library memcpy falls out of tune. I suppose that it is possible that one of the missed opportunities was to use 128-bit loads and stores when they became available, and so on.
In retrospect, I should have written a self-tuning infrastructure, to get reasonably good microcode on every generation. But that would not have helped use new, wider, loads and stores, when they became available. // The Linux kernel seems to have such an autotuning infrastructure, that is run on boot. // Overall, however, I advocate hardware state machines that can smoothly transition between modes, without incurring branch mispredictions. // It is debatable whether good microcode branch prediction would obviate this.
Based on this, my best guess at a specific answer is: the fast-path through the microcode (as many branches as possible actually take the default not-taken path) is the 15-cycle startup case, for intermediate lengths.
Since Intel doesn't publish the full details, black-box measurements of cycle counts for various sizes and alignments are the best we can do. Fortunately, that's all we need to make good choices. Intel's manual, and http://agner.org/optimize/, have good info on how to use rep movs.
Fun fact: without ERMSB (new in IvB), rep movsb is optimized for small-ish copies. It takes longer to start up than rep movsd or rep movsq for large (more than a couple hundred bytes, I think) copies, and even after that may not achieve the same throughput.
The optimal sequence for large aligned copies without ERMSB and without SSE/AVX (e.g. in kernel code) may be rep movsq and then clean-up with something like an unaligned mov that copies the last 8 bytes of the buffer, possibly overlapping with the last aligned chunk of what rep movsq did (basically glibc's small-copy memcpy strategy). But if the size might be smaller than 8 bytes, you need to branch unless it's safe to copy more bytes than needed. Or rep movsb is an option for cleanup if small code-size matters more than performance (rep will copy 0 bytes if RCX = 0).
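A minimal sketch of that rep movsq plus overlapping-tail idea (my reconstruction of the strategy described above, not glibc's actual code), assuming n >= 8 so the overlapping 8-byte tail store is always safe:

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Hypothetical: copy n bytes (n >= 8) with REP MOVSQ for the bulk, then
       one possibly-overlapping unaligned 8-byte copy for the tail. */
    static void copy_movsq_tail(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        size_t qwords = n / 8;
        uint64_t tail;

        __asm__ volatile("rep movsq"
                         : "+D"(d), "+S"(s), "+c"(qwords)
                         :
                         : "memory");

        /* Last 8 bytes; may overlap what rep movsq already copied,
           which is harmless for a memcpy-style (non-overlapping) copy. */
        memcpy(&tail, (const unsigned char *)src + n - 8, 8);
        memcpy((unsigned char *)dst + n - 8, &tail, 8);
    }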
A SIMD vector loop is often at least slightly faster than rep movsb even on CPUs with Enhanced Rep Move/Stos B, especially if alignment isn't guaranteed. (See Enhanced REP MOVSB for memcpy, and also Intel's optimization manual; links in the x86 tag wiki.)
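For comparison, a minimal sketch of the kind of unaligned SIMD copy loop meant here, using AVX intrinsics (my illustration; it assumes AVX is available and n >= 32, and uses an overlapping 32-byte tail copy):

    #include <immintrin.h>
    #include <stddef.h>

    /* Hypothetical AVX copy: unaligned 32-byte loads/stores for the bulk,
       plus one overlapping 32-byte copy for the tail (requires n >= 32). */
    static void copy_avx_unaligned(void *dst, const void *src, size_t n)
    {
        unsigned char *d = dst;
        const unsigned char *s = src;
        size_t i;

        for (i = 0; i + 32 <= n; i += 32) {
            __m256i v = _mm256_loadu_si256((const __m256i *)(s + i));
            _mm256_storeu_si256((__m256i *)(d + i), v);
        }
        if (i < n) {
            __m256i v = _mm256_loadu_si256((const __m256i *)(s + n - 32));
            _mm256_storeu_si256((__m256i *)(d + n - 32), v);
        }
    }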
Further details: I think there's some discussion somewhere on SO about testing how rep movsb affects out-of-order exec of surrounding instructions, how soon uops from later instructions can get into the pipeline. I think we found some info in an Intel patent that shed some light on the mechanism.
Microcode can use a kind of predicated load and store uop that lets it issue a bunch of uops without initially knowing the value of RCX. If it turns out RCX was a small value, some of those uops choose not to do anything.
I've done some testing of rep movsb on Skylake. It seems consistent with that initial-burst mechanism: below a certain threshold of size, like 96 bytes or something, IIRC performance was nearly constant for any size (with small aligned buffers hot in L1d cache). I had rep movs in a loop with an independent imul dependency chain, testing that it can overlap execution.
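A rough sketch of the shape of such a test (my reconstruction, not the original benchmark and not calibrated): a fixed-size rep movsb interleaved with an independent imul latency chain, so per-iteration cycle counts can show whether the copy overlaps with the chain. One would wrap calls to this in a cycle counter (rdtsc or perf) and vary n to see where the flat region ends.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical test kernel: one rep movsb plus a 3-imul dependency chain
       that is independent of the copy, repeated 'iters' times. */
    static uint64_t test_overlap(void *dst, const void *src, size_t n,
                                 long iters)
    {
        uint64_t x = 1;

        for (long i = 0; i < iters; i++) {
            void *d = dst;
            const void *s = src;
            size_t c = n;

            __asm__ volatile(
                "rep movsb\n\t"
                "imul $3, %[x], %[x]\n\t"
                "imul $3, %[x], %[x]\n\t"
                "imul $3, %[x], %[x]\n\t"
                : "+D"(d), "+S"(s), "+c"(c), [x] "+r"(x)
                :
                : "memory");
        }
        return x;   /* keep the imul chain live so it isn't optimized away */
    }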
But then there was a significant dropoff beyond that size, presumably when the microcode sequencer finds out that it needs to emit more copy uops. So I think when the rep movsb microcoded-uop reaches the front of the IDQ, it gets the microcode sequencer to emit enough load + store uops for some fixed size, and a check to see if that was sufficient or if more are needed.
This is all from memory, I didn't re-test while updating this answer. If this doesn't match reality for anyone else, let me know and I'll check again.