Why does glibc's strlen need to be so complicated to run quickly?
You don't need it, and you should never write code like that yourself, especially if you're not a C compiler / standard-library vendor. It is code used to implement `strlen` with some very questionable speed hacks and assumptions (that are not tested with assertions or mentioned in the comments):
- `unsigned long` is either 4 or 8 bytes
- bytes are 8 bits
- a pointer can be cast to `unsigned long long` and not `uintptr_t`
- one can align the pointer simply by checking that the 2 or 3 lowest-order bits are zero
- one can access a string as `unsigned long`s
- one can read past the end of an array without any ill effects.
What is more, a good compiler could even replace code written as

```c
size_t stupid_strlen(const char s[]) {
    size_t i;
    for (i = 0; s[i] != '\0'; i++)
        ;
    return i;
}
```

(notice that the counter has to be a type compatible with `size_t`) with an inlined version of the compiler built-in `strlen`, or vectorize the code; but a compiler would be unlikely to be able to optimize the complex version.
The `strlen` function is described by C11 7.24.6.3 as:

Description
- The `strlen` function computes the length of the string pointed to by `s`.

Returns
- The `strlen` function returns the number of characters that precede the terminating null character.
Now, if the string pointed to by `s` was in an array of characters just long enough to contain the string and the terminating NUL, the behaviour will be undefined if we access the string past the null terminator, for example in

```c
char *str = "hello world";  // or
char array[] = "hello world";
```
So really the only way in fully portable / standards compliant C to implement this correctly is the way it is written in your question, except for trivial transformations - you can pretend to be faster by unrolling the loop etc, but it still needs to be done one byte at a time.
(As commenters have pointed out, when strict portability is too much of a burden, taking advantage of reasonable or known-safe assumptions is not always a bad thing. Especially in code that's part of one specific C implementation. But you have to understand the rules before knowing how/when you can bend them.)
The linked `strlen` implementation first checks the bytes individually until the pointer is pointing to the natural 4- or 8-byte alignment boundary of `unsigned long`. The C standard says that accessing a pointer that is not properly aligned has undefined behaviour, so this absolutely has to be done for the next dirty trick to be even dirtier. (In practice, on some CPU architectures other than x86, a misaligned word or doubleword load will fault. C is not a portable assembly language, but this code is using it that way.) It's also what makes it possible to read past the end of an object without risk of faulting on implementations where memory protection works in aligned blocks (e.g. 4 kiB virtual-memory pages).
Now comes the dirty part: the code breaks the promise and reads 4 or 8 bytes at a time (a `long int`), and uses a bit trick with unsigned addition to quickly figure out whether there were any zero bytes within those 4 or 8 bytes: it uses a specially crafted number that would cause the carry bit to change bits that are caught by a bit mask. In essence this figures out whether any of the 4 or 8 bytes in the word are zero, supposedly faster than looping through each of these bytes would be. Finally there is a loop at the end to figure out which byte was the first zero, if any, and to return the result.
The biggest problem is that in `sizeof(unsigned long) - 1` cases out of `sizeof(unsigned long)` it will read past the end of the string; only if the null byte is in the last accessed byte (i.e. in little-endian the most significant, and in big-endian the least significant) does it not access the array out of bounds!
The code, even though used to implement `strlen` in a C standard library, is bad code. It has several implementation-defined and undefined aspects in it and it should not be used anywhere instead of the system-provided `strlen`. I renamed the function to `the_strlen` here and added the following `main`:
```c
int main(void) {
    char buf[12];
    printf("%zu\n", the_strlen(fgets(buf, 12, stdin)));
}
```
The buffer is carefully sized so that it can hold exactly the `hello world` string and the terminator. However, on my 64-bit processor `unsigned long` is 8 bytes, so the access to the latter part would exceed this buffer.
If I now compile with `-fsanitize=undefined` and `-fsanitize=address` and run the resulting program, I get:
% ./a.out
hello world
=================================================================
==8355==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffffe63a3f8 at pc 0x55fbec46ab6c bp 0x7ffffe63a350 sp 0x7ffffe63a340
READ of size 8 at 0x7ffffe63a3f8 thread T0
#0 0x55fbec46ab6b in the_strlen (.../a.out+0x1b6b)
#1 0x55fbec46b139 in main (.../a.out+0x2139)
#2 0x7f4f0848fb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#3 0x55fbec46a949 in _start (.../a.out+0x1949)
Address 0x7ffffe63a3f8 is located in stack of thread T0 at offset 40 in frame
#0 0x55fbec46b07c in main (.../a.out+0x207c)
This frame has 1 object(s):
[32, 44) 'buf' <== Memory access at offset 40 partially overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow (.../a.out+0x1b6b) in the_strlen
Shadow bytes around the buggy address:
0x10007fcbf420: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf430: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf440: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf450: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf460: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fcbf470: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00[04]
0x10007fcbf480: f2 f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==8355==ABORTING
i.e. bad things happened.
There have been a lot of (slightly or entirely) wrong guesses in comments about some details / background for this.
You're looking at glibc's optimized C fallback implementation (for ISAs that don't have a hand-written asm implementation), or an old version of that code, which is still in the glibc source tree. https://code.woboq.org/userspace/glibc/string/strlen.c.html is a code browser based on the current glibc git tree. Apparently it is still used by a few mainstream glibc targets, including MIPS. (Thanks @zwol.)
On popular ISAs like x86 and ARM, glibc uses hand-written asm, so the incentive to change anything about this code is lower than you might think.
This bithack code (https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord) isn't what actually runs on your server/desktop/laptop/smartphone. It's better than a naive byte-at-a-time loop, but even this bithack is pretty bad compared to efficient asm for modern CPUs (especially x86 where AVX2 SIMD allows checking 32 bytes with a couple instructions, allowing 32 to 64 bytes per clock cycle in the main loop if data is hot in L1d cache on modern CPUs with 2/clock vector load and ALU throughput. i.e. for medium-sized strings where startup overhead doesn't dominate.)
glibc uses dynamic-linking tricks to resolve `strlen` to an optimal version for your CPU, so even within x86 there's an SSE2 version (16-byte vectors, baseline for x86-64) and an AVX2 version (32-byte vectors).
x86 has efficient data transfer between vector and general-purpose registers, which makes it uniquely(?) good for using SIMD to speed up functions on implicit-length strings where the loop control is data-dependent. `pcmpeqb` / `pmovmskb` make it possible to test 16 separate bytes at a time.
glibc has an AArch64 version like that using AdvSIMD, and a version for AArch64 CPUs where vector->GP register transfers stall the pipeline, so it does actually use this bithack. But it uses count-leading-zeros to find the byte within the register once it gets a hit, and takes advantage of AArch64's efficient unaligned accesses after checking for page-crossing.
Also related: Why is this code 6.5x slower with optimizations enabled? has some more details about what's fast vs. slow in x86 asm for `strlen` with a large buffer, and a simple asm implementation that might be good for gcc to know how to inline. (Some gcc versions unwisely inline `rep scasb`, which is very slow, or a 4-byte-at-a-time bithack like this. So GCC's inline-strlen recipe needs updating or disabling.)
Asm doesn't have C-style "undefined behaviour"; it's safe to access bytes in memory however you like, and an aligned load that includes any valid bytes can't fault. Memory protection happens with aligned-page granularity; aligned accesses narrower than that can't cross a page boundary. Is it safe to read past the end of a buffer within the same page on x86 and x64? The same reasoning applies to the machine-code that this C hack gets compilers to create for a stand-alone non-inline implementation of this function.
When a compiler emits code to call an unknown non-inline function, it has to assume that the function modifies any/all global variables and any memory it might possibly have a pointer to, i.e. everything except locals whose address hasn't escaped has to be in sync in memory across the call. This applies to functions written in asm, obviously, but also to library functions. If you don't enable link-time optimization, it even applies to separate translation units (source files).
Why this is safe as part of glibc but not otherwise.
The most important factor is that this `strlen` can't inline into anything else. It's not safe for that; it contains strict-aliasing UB (reading `char` data through an `unsigned long*`). `char*` is allowed to alias anything else, but the reverse is not true.
This is a library function for an ahead-of-time compiled library (glibc). It won't get inlined with link-time optimization into callers. This means it just has to compile to safe machine code for a stand-alone version of `strlen`. It doesn't have to be portable / safe C.
The GNU C library only has to compile with GCC. Apparently it's not supported to compile it with clang or ICC, even though they support GNU extensions. GCC is an ahead-of-time compiler that turns a C source file into an object file of machine code. It's not an interpreter, so unless it inlines at compile time, bytes in memory are just bytes in memory; i.e. strict-aliasing UB isn't dangerous when the accesses with different types happen in different functions that don't inline into each other.
Remember that `strlen`'s behaviour is defined by the ISO C standard. That function name specifically is part of the implementation. Compilers like GCC even treat the name as a built-in function unless you use `-fno-builtin-strlen`, so `strlen("foo")` can be a compile-time constant `3`. The definition in the library is only used when gcc decides to actually emit a call to it instead of inlining its own recipe or something.
When UB isn't visible to the compiler at compile time, you get sane machine code. The machine code has to work for the no-UB case, and even if you wanted to, there's no way for the asm to detect what types the caller used to put data into the pointed-to memory.
Glibc is compiled to a stand-alone static or dynamic library that can't inline with link-time optimization. glibc's build scripts don't create "fat" static libraries containing machine code + GCC GIMPLE internal representation for link-time optimization when inlining into a program (i.e. `libc.a` won't participate in `-flto` link-time optimization into the main program). Building glibc that way would be potentially unsafe on targets that actually use this `.c` file.
In fact, as @zwol comments, LTO can't be used when building glibc itself, because of "brittle" code like this, which could break if inlining between glibc source files were possible. (There are some internal uses of `strlen`, e.g. maybe as part of the `printf` implementation.)
This `strlen` makes some assumptions:

- `CHAR_BIT` is a multiple of 8. True on all GNU systems. POSIX 2001 even guarantees `CHAR_BIT == 8`. (This looks safe for systems with `CHAR_BIT` = 16 or 32, like some DSPs; the unaligned-prologue loop will always run 0 iterations if `sizeof(long) = sizeof(char) = 1`, because every pointer is always aligned and `p & sizeof(long)-1` is always zero.) But if you had a non-ASCII character set where chars are 9 or 12 bits wide, `0x8080...` is the wrong pattern.
- (maybe) `unsigned long` is 4 or 8 bytes. Or maybe it would actually work for any size of `unsigned long` up to 8, and it uses an `assert()` to check for that.
Those two aren't possible UB, they're just non-portability to some C implementations. This code is (or was) part of the C implementation on platforms where it does work, so that's fine.
The next assumption is potential C UB:

- An aligned load that contains any valid bytes can't fault, and is safe as long as you ignore the bytes outside the object you actually want. (True in asm on every GNU system, and on all normal CPUs, because memory protection happens with aligned-page granularity. Is it safe to read past the end of a buffer within the same page on x86 and x64? It's safe in C when the UB isn't visible at compile time. Without inlining, this is the case here. The compiler can't prove that reading past the first `0` is UB; it could be a C `char[]` array containing `{1,2,0,3}`, for example.)
That last point is what makes it safe to read past the end of a C object here. That is pretty much safe even when inlining with current compilers, because I think they don't currently treat that as implying that a path of execution is unreachable. But anyway, strict aliasing is already a showstopper if you ever let this inline.
Then you'd have problems like the Linux kernel's old unsafe `memcpy` CPP macro that used pointer casting to `unsigned long` (gcc, strict-aliasing, and horror stories). (Modern Linux compiles with `-fno-strict-aliasing` instead of being careful with `may_alias` attributes.)
This `strlen` dates back to the era when you could get away with stuff like that in general; it used to be pretty much safe before GCC 3, even without an "only when not inlining" caveat.
UB that's only visible when looking across call/ret boundaries can't hurt us (e.g. calling this on a `char buf[]` instead of on an array of `unsigned long[]` cast to a `const char*`). Once the machine code is set in stone, it's just dealing with bytes in memory. A non-inline function call has to assume that the callee reads any/all memory.
Writing this safely, without strict-aliasing UB
The GCC type attribute `may_alias` gives a type the same alias-anything treatment as `char*` (suggested by @KonradBorowsk). GCC headers currently use it for x86 SIMD vector types like `__m128i`, so you can always safely do `_mm_loadu_si128( (__m128i*)foo )`. (See Is `reinterpret_cast`ing between hardware SIMD vector pointer and the corresponding type an undefined behavior? for more details about what this does and doesn't mean.)
```c
size_t strlen(const char *char_ptr)
{
    typedef unsigned long __attribute__((may_alias)) aliasing_ulong;

    // handle unaligned startup somehow, e.g. check for page crossing then
    // check an unaligned word, else check single bytes until an alignment boundary.
    aliasing_ulong *longword_ptr = (aliasing_ulong *)char_ptr;

    for (;;) {
        // alignment still required, but can safely alias anything including a char[]
        unsigned long ulong = *longword_ptr++;
        ...
    }
}
```
You can use `aligned(1)` to express a type with `alignof(T) = 1`:

```c
typedef unsigned long __attribute__((may_alias, aligned(1))) unaligned_aliasing_ulong;
```

This could be useful for the unaligned-startup part of strlen, if you don't just do char-at-a-time until the first alignment boundary. (The main loop needs to be aligned so you don't fault if the terminator is right before an unmapped page.)
A portable way to express an aliasing load in ISO C is with `memcpy`, which modern compilers do know how to inline as a single load instruction, e.g.

```c
unsigned long longword;
memcpy(&longword, char_ptr, sizeof(longword));
char_ptr += sizeof(longword);
```

This also works for unaligned loads, because `memcpy` works as if by `char`-at-a-time access. But in practice modern compilers understand `memcpy` very well.
The danger here is that if GCC doesn't know for sure that `char_ptr` is word-aligned, it won't inline it on some platforms that might not support unaligned loads in asm, e.g. MIPS before MIPS64r6, or older ARM. If you got an actual function call to `memcpy` just to load a word (and leave it in other memory), that would be a disaster. GCC can sometimes see when code aligns a pointer. Or, after the char-at-a-time loop that reaches a ulong boundary, you could use

```c
p = __builtin_assume_aligned(p, sizeof(unsigned long));
```
This doesn't avoid the read-past-the-object possible UB, but with current GCC that's not dangerous in practice.
Why hand-optimized C source is necessary: current compilers aren't good enough
Hand-optimized asm can be even better when you want every last drop of performance for a widely used standard-library function, especially for something like `memcpy`, but also `strlen`. In this case it wouldn't be much easier to use C with x86 intrinsics to take advantage of SSE2.
But here we're just talking about a naive vs. bithack C version without any ISA-specific features.
(I think we can take it as a given that `strlen` is widely enough used that making it run as fast as possible is important. So the question becomes whether we can get efficient machine code from simpler source. No, we can't.)
Current GCC and clang are not capable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. (e.g. it has to be possible to check if the loop will run at least 16 iterations before running the first iteration.) e.g. autovectorizing memcpy is possible (explicit-length buffer) but not strcpy or strlen (implicit-length string), given current compilers.
That includes search loops, or any other loop with a data-dependent `if () break` as well as a counter.
ICC (Intel's compiler for x86) can auto-vectorize some search loops, but it still only makes naive byte-at-a-time asm for a simple / naive C `strlen` like the one OpenBSD's libc uses (Godbolt). (From @Peske's answer.)
A hand-optimized libc `strlen` is necessary for performance with current compilers. Going 1 byte at a time (with unrolling, maybe 2 bytes per cycle on wide superscalar CPUs) is pathetic when main memory can keep up with about 8 bytes per cycle and L1d cache can deliver 16 to 64 per cycle (2x 32-byte loads per cycle on modern mainstream x86 CPUs since Haswell and Ryzen; not counting AVX-512, which can reduce clock speeds just for using 512-bit vectors, and which is why glibc probably isn't in a hurry to add an AVX-512 version. Although with 256-bit vectors, AVX512VL + BW masked compare into a mask and `ktest` or `kortest` could make `strlen` more hyperthreading-friendly by reducing its uops / iteration.)
I'm including non-x86 here; that's the "16 bytes". E.g. most AArch64 CPUs can do at least that, I think, and some certainly more. And some have enough execution throughput for `strlen` to keep up with that load bandwidth.
Of course programs that work with large strings should usually keep track of lengths to avoid having to redo finding the length of implicit-length C strings very often. But short to medium length performance still benefits from hand-written implementations, and I'm sure some programs do end up using strlen on medium-length strings.
It is explained in the comments in the file you linked:
27 /* Return the length of the null-terminated string STR. Scan for
28 the null terminator quickly by testing four bytes at a time. */
and:
73 /* Instead of the traditional loop which tests each character,
74 we will test a longword at a time. The tricky part is testing
75 if *any of the four* bytes in the longword in question are zero. */
In C, it is possible to reason in detail about the efficiency.
It is less efficient to iterate through individual characters looking for a null than it is to test more than one byte at a time, as this code does.
The additional complexity comes from needing to ensure that the string under test is aligned in the right place to start testing more than one byte at a time (along a longword boundary, as described in the comments), and from needing to ensure that the assumptions about the sizes of the datatypes are not violated when the code is used.
In most (but not all) modern software development, this attention to efficiency detail is not necessary, or not worth the cost of extra code complexity.
One place where it does make sense to pay attention to efficiency like this is in standard libraries, like the example you linked.
If you want to read more about word boundaries, see this question, and this excellent Wikipedia page.
I also think that this answer above is a much clearer and more detailed discussion.