Why are memcpy() and memmove() faster than pointer increments?
memcpy can copy more than one byte at once depending on the computer's architecture. Most modern computers can work with 32 bits or more in a single processor instruction.
From one example implementation:
 * For speedy copying, optimize the common case where both pointers
 * and the length are word-aligned, and copy word-at-a-time instead
 * of byte-at-a-time. Otherwise, copy by bytes.
Because memcpy uses word pointers instead of byte pointers, it moves several bytes per instruction. In addition, memcpy implementations are often written with SIMD instructions, which make it possible to shuffle 128 bits at a time.
SIMD instructions are assembly instructions that can perform the same operation on each element in a vector up to 16 bytes long. That includes load and store instructions.
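For example, on x86 with SSE2 (an assumption about the target; other architectures have their own vector extensions), the inner loop of a copy can move 16 bytes per iteration using intrinsics. This is only a sketch of the idea, not how any particular memcpy is written:

#include <emmintrin.h> // SSE2 intrinsics
#include <stddef.h>

// Sketch: copy 16 bytes per iteration. Assumes n is a multiple of 16 and
// ignores the alignment and overlap handling a real memcpy must do.
static void sse2_copy(void* dst, const void* src, size_t n)
{
    const __m128i* s = (const __m128i*)src;
    __m128i* d = (__m128i*)dst;
    for (size_t i = 0; i < n / 16; ++i)
    {
        __m128i v = _mm_loadu_si128(&s[i]); // unaligned 128-bit load
        _mm_storeu_si128(&d[i], v);         // unaligned 128-bit store
    }
}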
Memory copy routines can be far more complicated and faster than a simple memory copy via pointers such as:
void simple_memory_copy(void* dst, void* src, unsigned int bytes)
{
    unsigned char* b_dst = (unsigned char*)dst;
    unsigned char* b_src = (unsigned char*)src;

    // Copy one byte per iteration
    for (unsigned int i = 0; i < bytes; ++i)
        *b_dst++ = *b_src++;
}
Improvements
The first improvement one can make is to align one of the pointers on a word boundary (by word I mean native integer size, usually 32 bits/4 bytes, but it can be 64 bits/8 bytes on newer architectures) and use word-sized move/copy instructions. This requires a byte-by-byte copy until a pointer is aligned.
#include <stdint.h> // for uintptr_t

void aligned_memory_copy(void* dst, void* src, unsigned int bytes)
{
    unsigned char* b_dst = (unsigned char*)dst;
    unsigned char* b_src = (unsigned char*)src;

    // Copy bytes until the source pointer is word-aligned
    while (bytes > 0 && ((uintptr_t)b_src & 0x3) != 0)
    {
        *b_dst++ = *b_src++;
        bytes--;
    }

    // Copy a word at a time
    unsigned int* w_dst = (unsigned int*)b_dst;
    unsigned int* w_src = (unsigned int*)b_src;
    while (bytes >= 4)
    {
        *w_dst++ = *w_src++;
        bytes -= 4;
    }

    // Copy trailing bytes
    if (bytes > 0)
    {
        b_dst = (unsigned char*)w_dst;
        b_src = (unsigned char*)w_src;
        while (bytes > 0)
        {
            *b_dst++ = *b_src++;
            bytes--;
        }
    }
}
Different architectures will perform differently based on whether the source or the destination pointer is appropriately aligned. For instance, on an XScale processor I got better performance by aligning the destination pointer rather than the source pointer.
To further improve performance some loop unrolling can be done, so that more of the processor's registers are loaded with data; that way the load/store instructions can be interleaved and their latency hidden by additional instructions (such as loop counting, etc.). The benefit this brings varies quite a bit by processor, since load/store instruction latencies can be quite different.
At this stage the code ends up being written in Assembly rather than C (or C++) since you need to manually place the load and store instructions to get maximum benefit of latency hiding and throughput.
Generally a whole cache line of data should be copied in one iteration of the unrolled loop.
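As a rough C sketch of the idea (assuming 4-byte words and a 32-byte cache line; as noted above, a real version would be tuned per processor and written in assembly):

// Sketch of an unrolled copy: 8 words (32 bytes, one assumed cache line) per
// iteration. Loading into several variables first gives the compiler/CPU room
// to overlap the loads and stores. Assumes word-aligned pointers and that
// bytes is a multiple of 32.
void unrolled_memory_copy(void* dst, void* src, unsigned int bytes)
{
    unsigned int* w_dst = (unsigned int*)dst;
    unsigned int* w_src = (unsigned int*)src;
    while (bytes >= 32)
    {
        unsigned int w0 = w_src[0], w1 = w_src[1], w2 = w_src[2], w3 = w_src[3];
        unsigned int w4 = w_src[4], w5 = w_src[5], w6 = w_src[6], w7 = w_src[7];
        w_dst[0] = w0; w_dst[1] = w1; w_dst[2] = w2; w_dst[3] = w3;
        w_dst[4] = w4; w_dst[5] = w5; w_dst[6] = w6; w_dst[7] = w7;
        w_src += 8;
        w_dst += 8;
        bytes -= 32;
    }
}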
Which brings me to the next improvement: adding prefetching. These are special instructions that tell the processor's cache system to load specific parts of memory into its cache. Since there is a delay between issuing the instruction and having the cache line filled, the instructions need to be placed so that the data is available just as it is about to be copied, no sooner and no later.
This means putting prefetch instructions at the start of the function as well as inside the main copy loop, with the prefetch instructions in the middle of the copy loop fetching data that will be copied several iterations later.
I can't remember, but it may also be beneficial to prefetch the destination addresses as well as the source ones.
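As a sketch of the placement (assuming GCC/Clang, whose __builtin_prefetch maps to the target's prefetch instruction where one exists; the 128-byte prefetch distance here is a guess that would need tuning per CPU):

// Sketch: the unrolled copy above with prefetches added, both before the loop
// and inside it. Same assumptions as unrolled_memory_copy; whether prefetching
// the destination helps is CPU-dependent.
void prefetching_memory_copy(void* dst, void* src, unsigned int bytes)
{
    unsigned int* w_dst = (unsigned int*)dst;
    unsigned int* w_src = (unsigned int*)src;

    __builtin_prefetch(w_src, 0, 0);           // warm up the first line(s)
    while (bytes >= 32)
    {
        __builtin_prefetch(w_src + 32, 0, 0);  // source, 128 bytes ahead
        __builtin_prefetch(w_dst + 32, 1, 0);  // destination, write hint
        unsigned int w0 = w_src[0], w1 = w_src[1], w2 = w_src[2], w3 = w_src[3];
        unsigned int w4 = w_src[4], w5 = w_src[5], w6 = w_src[6], w7 = w_src[7];
        w_dst[0] = w0; w_dst[1] = w1; w_dst[2] = w2; w_dst[3] = w3;
        w_dst[4] = w4; w_dst[5] = w5; w_dst[6] = w6; w_dst[7] = w7;
        w_src += 8;
        w_dst += 8;
        bytes -= 32;
    }
}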
Factors
The main factors that affect how fast memory can be copied are:
- The latency between the processor, its caches, and main memory.
- The size and structure of the processor's cache lines.
- The processor's memory move/copy instructions (latency, throughput, register size, etc).
So if you want to write an efficient and fast memory copy routine you'll need to know quite a lot about the processor and architecture you are writing for. Suffice it to say, unless you're writing on some embedded platform it would be much easier to just use the built-in memory copy routines.
You can implement memcpy() using any of the following techniques, some dependent on your architecture for performance gains, and they will all be much faster than your code:
Use larger units, such as 32-bit words instead of bytes. You can also (or may have to) deal with alignment here as well. On some platforms you can't read/write a 32-bit word at an odd memory address at all, and on other platforms you pay a massive performance penalty. To fix this, the address has to be divisible by 4. You can take this up to 64 bits for 64-bit CPUs, or even higher using SIMD (single instruction, multiple data) instructions (MMX, SSE, etc.).
You can use special CPU instructions that your compiler may not be able to generate from C. For example, on an 80386 you can use the "rep" prefix + "movsb" instruction to move N bytes, dictated by placing N in the count register. Good compilers will just do this for you, but you may be on a platform that lacks a good compiler. Note that this example tends to be a bad demonstration of speed, but combined with alignment and larger-unit instructions, it can be faster than almost everything else on certain CPUs. (A sketch follows this list.)
Loop unrolling -- branches can be quite expensive on some CPUs, so unrolling the loops can lower the number of branches. This is also a good technique for combining with SIMD instructions and very large sized units.
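As a sketch of the rep/movsb technique mentioned above (assuming GCC-style inline assembly on x86; this is an illustration only, not how any particular library implements memcpy):

#include <stddef.h>

static void rep_movsb_copy(void* dst, const void* src, size_t bytes)
{
#if defined(__x86_64__) || defined(__i386__)
    // rep movsb copies (E/R)CX bytes from [(E/R)SI] to [(E/R)DI].
    __asm__ volatile ("rep movsb"
                      : "+D" (dst), "+S" (src), "+c" (bytes)
                      :
                      : "memory");
#else
    // Fallback for other architectures: plain byte copy.
    unsigned char* d = (unsigned char*)dst;
    const unsigned char* s = (const unsigned char*)src;
    while (bytes--)
        *d++ = *s++;
#endif
}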
For example, http://www.agner.org/optimize/#asmlib has a memcpy implementation that beats most out there (by a very tiny amount). If you read the source code, it will be full of tons of inlined assembly code that pulls off all of the above three techniques, choosing among them based on what CPU you are running on.
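That kind of runtime dispatch might look roughly like this (a sketch assuming GCC/Clang on x86; the per-CPU routines here are hypothetical stand-ins, not asmlib's actual code):

#include <stddef.h>
#include <string.h>

// Hypothetical per-CPU copy routines; trivial stand-ins so the sketch compiles.
static void* copy_avx2(void* d, const void* s, size_t n)  { return memcpy(d, s, n); }
static void* copy_sse2(void* d, const void* s, size_t n)  { return memcpy(d, s, n); }
static void* copy_plain(void* d, const void* s, size_t n) { return memcpy(d, s, n); }

typedef void* (*copy_fn)(void*, const void*, size_t);

// Pick the best routine once, based on what the running CPU supports.
static copy_fn pick_copy_routine(void)
{
    if (__builtin_cpu_supports("avx2"))
        return copy_avx2;
    if (__builtin_cpu_supports("sse2"))
        return copy_sse2;
    return copy_plain;
}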
Note, there are similar optimizations that can be made for finding bytes in a buffer too. strchr() and friends will often be faster than your hand-rolled equivalent. This is especially true for .NET and Java. For example, in .NET, the built-in String.IndexOf() is much faster than even a Boyer–Moore string search, because it uses the above optimization techniques.