Why does glibc's strlen need to be so complicated to run fast?
I was looking through the strlen code here and I was wondering if the optimizations used in the code are really needed. For example, why wouldn't something like the following work equally well or better?



unsigned long strlen(char s[])
{
    unsigned long i;
    for (i = 0; s[i] != '\0'; i++)
        continue;
    return i;
}



Isn't simpler code better and/or easier for the compiler to optimize?



The code of strlen on the page behind the link looks like this:




/* Copyright (C) 1991, 1993, 1997, 2000, 2003 Free Software Foundation, Inc.
   This file is part of the GNU C Library.
   Written by Torbjorn Granlund (tege@sics.se),
   with help from Dan Sahlin (dan@sics.se);
   commentary by Jim Blandy (jimb@ai.mit.edu).

   The GNU C Library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   The GNU C Library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public
   License along with the GNU C Library; if not, write to the Free
   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
   02111-1307 USA.  */

#include <string.h>
#include <stdlib.h>

#undef strlen

/* Return the length of the null-terminated string STR.  Scan for
   the null terminator quickly by testing four bytes at a time.  */
size_t
strlen (str)
     const char *str;
{
  const char *char_ptr;
  const unsigned long int *longword_ptr;
  unsigned long int longword, magic_bits, himagic, lomagic;

  /* Handle the first few characters by reading one character at a time.
     Do this until CHAR_PTR is aligned on a longword boundary.  */
  for (char_ptr = str; ((unsigned long int) char_ptr
                        & (sizeof (longword) - 1)) != 0;
       ++char_ptr)
    if (*char_ptr == '\0')
      return char_ptr - str;

  /* All these elucidatory comments refer to 4-byte longwords,
     but the theory applies equally well to 8-byte longwords.  */

  longword_ptr = (unsigned long int *) char_ptr;

  /* Bits 31, 24, 16, and 8 of this number are zero.  Call these bits
     the "holes."  Note that there is a hole just to the left of
     each byte, with an extra at the end:

     bits:  01111110 11111110 11111110 11111111
     bytes: AAAAAAAA BBBBBBBB CCCCCCCC DDDDDDDD

     The 1-bits make sure that carries propagate to the next 0-bit.
     The 0-bits provide holes for carries to fall into.  */
  magic_bits = 0x7efefeffL;
  himagic = 0x80808080L;
  lomagic = 0x01010101L;
  if (sizeof (longword) > 4)
    {
      /* 64-bit version of the magic.  */
      /* Do the shift in two steps to avoid a warning if long has 32 bits.  */
      magic_bits = ((0x7efefefeL << 16) << 16) | 0xfefefeffL;
      himagic = ((himagic << 16) << 16) | himagic;
      lomagic = ((lomagic << 16) << 16) | lomagic;
    }
  if (sizeof (longword) > 8)
    abort ();

  /* Instead of the traditional loop which tests each character,
     we will test a longword at a time.  The tricky part is testing
     if *any of the four* bytes in the longword in question are zero.  */
  for (;;)
    {
      /* We tentatively exit the loop if adding MAGIC_BITS to
         LONGWORD fails to change any of the hole bits of LONGWORD.

         1) Is this safe?  Will it catch all the zero bytes?
         Suppose there is a byte with all zeros.  Any carry bits
         propagating from its left will fall into the hole at its
         least significant bit and stop.  Since there will be no
         carry from its most significant bit, the LSB of the
         byte to the left will be unchanged, and the zero will be
         detected.

         2) Is this worthwhile?  Will it ignore everything except
         zero bytes?  Suppose every byte of LONGWORD has a bit set
         somewhere.  There will be a carry into bit 8.  If bit 8
         is set, this will carry into bit 16.  If bit 8 is clear,
         one of bits 9-15 must be set, so there will be a carry
         into bit 16.  Similarly, there will be a carry into bit
         24.  If one of bits 24-30 is set, there will be a carry
         into bit 31, so all of the hole bits will be changed.

         The one misfire occurs when bits 24-30 are clear and bit
         31 is set; in this case, the hole at bit 31 is not
         changed.  If we had access to the processor carry flag,
         we could close this loophole by putting the fourth hole
         at bit 32!

         So it ignores everything except 128's, when they're aligned
         properly.  */

      longword = *longword_ptr++;

      if (
#if 0
          /* Add MAGIC_BITS to LONGWORD.  */
          (((longword + magic_bits)

            /* Set those bits that were unchanged by the addition.  */
            ^ ~longword)

           /* Look at only the hole bits.  If any of the hole bits
              are unchanged, most likely one of the bytes was a
              zero.  */
           & ~magic_bits)
#else
          ((longword - lomagic) & himagic)
#endif
          != 0)
        {
          /* Which of the bytes was the zero?  If none of them were, it was
             a misfire; continue the search.  */

          const char *cp = (const char *) (longword_ptr - 1);

          if (cp[0] == 0)
            return cp - str;
          if (cp[1] == 0)
            return cp - str + 1;
          if (cp[2] == 0)
            return cp - str + 2;
          if (cp[3] == 0)
            return cp - str + 3;
          if (sizeof (longword) > 4)
            {
              if (cp[4] == 0)
                return cp - str + 4;
              if (cp[5] == 0)
                return cp - str + 5;
              if (cp[6] == 0)
                return cp - str + 6;
              if (cp[7] == 0)
                return cp - str + 7;
            }
        }
    }
}
libc_hidden_builtin_def (strlen)



Why does this version run fast?



Isn't it doing a lot of work? (Editor's note: yes, more work per iteration, but it's amortized over the 4 or 8 bytes checked per iteration.)










Tags: c, optimization, glibc, portability, strlen






Comments:

  • Comments are not for extended discussion; this conversation has been moved to chat. – Samuel Liew, 15 hours ago

  • For future reference, the official source repository for GNU libc is at <sourceware.org/git/?p=glibc.git>. <sourceware.org/git/?p=glibc.git;a=blob;f=string/…> does indeed show code similar to the above; however, a hand-written assembly-language implementation from the sysdeps directory will be used instead on most of glibc's supported architectures (the most commonly used architecture that doesn't have a replacement is MIPS). – zwol, 14 hours ago

  • Voting to close this as primarily opinion-based; "Are xxx really needed in xxx?" is subjective to people's opinions. – JL2210, 5 hours ago

  • @JL2210: Good point; I fixed the title to capture the spirit of the question, so it doesn't sound like it's asking whether performance is needed, just why these optimizations are needed to get that performance. – Peter Cordes, 40 mins ago












9 Answers
Answer (159 votes)















TL;DR: you don't need to, and you should never write code like that - especially if you're not a C compiler / standard library vendor. It is code used to implement strlen with some very questionable speed hacks and assumptions:




  • unsigned long is either 4 or 8 bytes

  • bytes are 8 bits

  • a pointer can be cast to unsigned long rather than uintptr_t

  • one can align the pointer by anding the lower-order bits

  • one can break strict aliasing by addressing the string as unsigned longs

  • one can read past the end of an array without any ill effects.

What is more, a good compiler could even replace code written as



size_t stupid_strlen(const char s[])
{
    size_t i;
    for (i = 0; s[i] != '\0'; i++)
        ;
    return i;
}



(notice that it has to be a type compatible with size_t) with a simple call to the compiler's builtin strlen - but a compiler would be unlikely to notice that the longer code above does the same thing.




The strlen function is described by C11 7.24.6.3 as:




Description



  1. The strlen function computes the length of the string pointed to by s.

Returns



  1. The strlen function returns the number of characters that precede the terminating null character.



Now, if the string pointed to by s was in an array of characters just long enough to contain the string and the terminating NUL, the behaviour will be undefined if we access the string past the null terminator, for example in



char *str = "hello world"; // or
char array[] = "hello world";


So really the only way in C to implement this correctly is the way it is written in your question, except for trivial transformations - you can try to be faster by unrolling the loop, etc., but it still needs to be done one byte at a time.




The linked strlen implementation first checks the bytes individually until the pointer is pointing to the natural 4 or 8 byte alignment boundary of the unsigned long. The C standard says that accessing a pointer that is not properly aligned has undefined behaviour, so this absolutely has to be done for the next dirty trick to be even dirtier.



Now comes the dirty part: the code breaks the promise and reads 4 or 8 8-bit bytes at a time (an unsigned long int), and uses a bit trick with unsigned addition to quickly figure out if there were any zero bytes within those 4 or 8 bytes - it uses a specially crafted number that would cause the carry bit to change bits that are caught by a bit mask. In essence this would then figure out if any of the 4 or 8 bytes in the mask are zeroes, supposedly faster than looping through each of these bytes would be. Finally there is a loop at the end to figure out which byte was the first zero, if any, and to return the result.
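As a rough illustration (a minimal sketch, not the glibc code itself), here is the exact variant of the zero-byte detection trick for a 4-byte word. The extra `& ~v` makes it free of false positives; glibc instead uses the cheaper `(v - lomagic) & himagic` test, which can misfire on bytes with the high bit set and therefore rechecks byte by byte afterwards:

#include <stdint.h>
#include <stdio.h>

/* Returns nonzero iff some byte of v is 0x00 (exact form of the bithack). */
static int has_zero_byte(uint32_t v)
{
    return ((v - 0x01010101u) & ~v & 0x80808080u) != 0;
}

int main(void)
{
    uint32_t with_zero = 0x41004242u;   /* one byte is 0x00 */
    uint32_t without   = 0x41424344u;   /* no zero byte      */
    printf("%d %d\n", has_zero_byte(with_zero), has_zero_byte(without)); /* prints: 1 0 */
    return 0;
}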



The biggest problem is that in (sizeof (unsigned long) - 1) out of sizeof (unsigned long) cases it will read past the end of the string - only if the null byte is in the last accessed byte (i.e. in little-endian the most significant, and in big-endian the least significant) does it not access the array out of bounds!




The code, even though it is used to implement strlen in a C standard library, is bad code. It has several implementation-defined and undefined aspects and should not be used anywhere instead of the system-provided strlen - I renamed the function to the_strlen here and added the following main:



int main(void)
{
    char buf[12];
    printf("%zu\n", the_strlen(fgets(buf, 12, stdin)));
}



The buffer is carefully sized so that it can hold exactly the hello world string and the terminator. However on my 64-bit processor the unsigned long is 8 bytes, so the access to the latter part would exceed this buffer.



If I now compile with -fsanitize=undefined and -fsanitize=address and run the resulting program, I get:



% ./a.out
hello world
=================================================================
==8355==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffffe63a3f8 at pc 0x55fbec46ab6c bp 0x7ffffe63a350 sp 0x7ffffe63a340
READ of size 8 at 0x7ffffe63a3f8 thread T0
#0 0x55fbec46ab6b in the_strlen (.../a.out+0x1b6b)
#1 0x55fbec46b139 in main (.../a.out+0x2139)
#2 0x7f4f0848fb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
#3 0x55fbec46a949 in _start (.../a.out+0x1949)

Address 0x7ffffe63a3f8 is located in stack of thread T0 at offset 40 in frame
#0 0x55fbec46b07c in main (.../a.out+0x207c)

This frame has 1 object(s):
[32, 44) 'buf' <== Memory access at offset 40 partially overflows this variable
HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
(longjmp and C++ exceptions *are* supported)
SUMMARY: AddressSanitizer: stack-buffer-overflow (.../a.out+0x1b6b) in the_strlen
Shadow bytes around the buggy address:
0x10007fcbf420: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf430: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf440: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf450: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf460: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
=>0x10007fcbf470: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00[04]
0x10007fcbf480: f2 f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x10007fcbf4c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb
==8355==ABORTING


i.e. bad things happened.







Comments:

  • Re: "very questionable speed hacks and assumptions" - that is, very questionable in portable code. The standard library is written for a particular compiler/hardware combination, with knowledge of the actual behavior of things that the language definition leaves as undefined. Yes, most people should not be writing code like that, but in the context of implementing the standard library non-portable is not inherently bad. – Pete Becker, 17 hours ago

  • It is worth noting that it doesn't actually check if one of the bytes is definitely 0, merely that one is likely to be zero. The masking system used only properly checks roughly 7 out of the 8 bits per byte. The detailed scan that is used to determine which byte was the 0 (if any) handles the false-positive case by falling back to the surrounding for(;;). – Edward KMETT, 16 hours ago

  • Agree, never write things like this yourself. Or almost never. Premature optimization is the source of all evil. (In this case it could actually be motivated, though.) If you end up doing a lot of strlen() calls on the same very long string, your application could perhaps be written differently. You might, for example, save the string length in a variable when the string is created, and not need to call strlen() at all. – ghellquist, 16 hours ago

  • @ghellquist that's what he said – Antti Haapala, 16 hours ago

  • @ghellquist: Optimizing a frequently-used library call is hardly "premature optimization". – jamesqf, 12 hours ago


















Answer (62 votes)















There have been a lot of (slightly or entirely) wrong guesses in comments about some details / background for this.



You're looking at glibc's optimized C fallback implementation. (For ISAs that don't have a hand-written asm implementation.) Or an old version of that code, which is still in the glibc source tree. https://code.woboq.org/userspace/glibc/string/strlen.c.html is a code browser based on the current glibc git tree. Apparently it is still used by a few mainstream glibc targets, including MIPS. (Thanks @zwol.)



On popular ISAs like x86 and ARM, glibc uses hand-written asm



So the incentive to change anything about this code is lower than you might think.



This bithack code (https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord) isn't what actually runs on your server/desktop/laptop/smartphone. It's better than a naive byte-at-a-time loop, but even this bithack is pretty bad compared to efficient asm for modern CPUs (especially x86 where AVX2 SIMD allows checking 32 bytes with a couple instructions, allowing 32 to 64 bytes per clock cycle in the main loop if data is hot in L1d cache on modern CPUs with 2/clock vector load and ALU throughput. i.e. for medium-sized strings where startup overhead doesn't dominate.)



glibc uses dynamic linking tricks to resolve strlen to an optimal version for your CPU, so even within x86 there's an SSE2 version (16-byte vectors, baseline for x86-64) and an AVX2 version (32-byte vectors).
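To make the dispatch idea concrete, here is a minimal sketch (not glibc's internal code; the function names are made up, and the AVX2 version is just a stand-in) of GCC's ifunc mechanism on x86/ELF, where a resolver runs once at symbol-binding time and picks an implementation based on CPU features:

#include <stddef.h>

/* Stand-in "per-CPU" implementations; the real glibc ones are hand-written asm. */
static size_t strlen_portable(const char *s)
{
    size_t i = 0;
    while (s[i] != '\0')
        i++;
    return i;
}
static size_t strlen_avx2_stub(const char *s)
{
    return strlen_portable(s);   /* placeholder for a real AVX2 version */
}

typedef size_t strlen_fn(const char *);

/* Resolver: runs once, when the dynamic linker binds my_strlen. */
static strlen_fn *resolve_strlen(void)
{
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx2"))
        return strlen_avx2_stub;
    return strlen_portable;
}

/* my_strlen dispatches to whichever function the resolver returned. */
size_t my_strlen(const char *s) __attribute__((ifunc("resolve_strlen")));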



x86 has efficient data transfer between vector and general-purpose registers, which makes it uniquely(?) good for using SIMD to speed up functions on implicit-length strings where the loop control is data dependent. pcmpeqb / pmovmskb make it possible to test 16 separate bytes at a time.
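For illustration (a minimal sketch with intrinsics, not glibc's hand-written asm), the core of such a loop looks roughly like this; it assumes the pointer has already been brought to 16-byte alignment by a scalar prologue, so each aligned load stays within one page:

#include <immintrin.h>
#include <stddef.h>

/* Sketch of an SSE2 strlen inner loop. Assumes s is already 16-byte aligned,
   e.g. after a byte-at-a-time prologue, so the aligned loads cannot cross
   into an unmapped page. */
static size_t strlen_sse2_aligned(const char *s)
{
    const __m128i zero = _mm_setzero_si128();
    const char *p = s;
    for (;;) {
        __m128i v  = _mm_load_si128((const __m128i *)p);     /* movdqa   */
        __m128i eq = _mm_cmpeq_epi8(v, zero);                 /* pcmpeqb  */
        unsigned mask = (unsigned)_mm_movemask_epi8(eq);      /* pmovmskb */
        if (mask)   /* lowest set bit = position of the first zero byte */
            return (size_t)(p - s) + (size_t)__builtin_ctz(mask);
        p += 16;
    }
}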



glibc has an AArch64 version like that using AdvSIMD, and a version for AArch64 CPUs where vector->GP registers stalls the pipeline, so it does actually use this bithack. But uses count-leading-zeros to find the byte-within-register once it gets a hit, and takes advantage of AArch64's efficient unaligned accesses after checking for page-crossing.



Also related: Why is this code 6.5x slower with optimizations enabled? has some more details about what's fast vs. slow in x86 asm for strlen with a large buffer and a simple asm implementation that might be good for gcc to know how to inline. (Some gcc versions unwisely inline rep scasb which is very slow, or a 4-byte-at-a-time bithack like this. So GCC's inline-strlen recipe needs updating or disabling.)



Asm doesn't have C-style "undefined behaviour"; it's safe to access bytes in memory however you like, and an aligned load that includes any valid bytes can't fault. Memory protection happens with aligned-page granularity; aligned accesses narrower than that can't cross a page boundary. Is it safe to read past the end of a buffer within the same page on x86 and x64? The same reasoning applies to the machine-code that this C hack gets compilers to create for a stand-alone non-inline implementation of this function.



When a compiler emits code to call an unknown non-inline function, it has to assume that function modifies any/all global variables and any memory it might possibly have a pointer to. i.e. everything except locals that haven't had their address escape have to be in sync in memory across the call. This applies to functions written in asm, obviously, but also to library functions. If you don't enable link-time optimization, it even applies to separate translation units (source files).




Why this is safe as part of glibc but not otherwise.



The most important factor is that this strlen can't inline into anything else. It's not safe for that; it contains strict-aliasing UB (reading char data through an unsigned long*). char* is allowed to alias anything else but the reverse is not true.



This is a library function for an ahead-of-time compiled library (glibc). It won't get inlined with link-time-optimization into callers. This means it just has to compile to safe machine code for a stand-alone version of strlen. It doesn't have to be portable / safe C.



The GNU C library only has to compile with GCC. Apparently it's not supported to compile it with clang or ICC, even though they support GNU extensions. GCC is an ahead-of-time compiler that turns a C source file into an object file of machine code. Not an interpreter, so unless it inlines at compile time, bytes in memory are just bytes in memory. i.e. strict-aliasing UB isn't dangerous when the accesses with different types happen in different functions that don't inline into each other.



Remember that strlen's behaviour is defined by the ISO C standard. That function name specifically is part of the implementation. Compilers like GCC even treat the name as a built-in function unless you use -fno-builtin-strlen, so strlen("foo") can be a compile-time constant 3. The definition in the library is only used when gcc decides to actually emit a call to it instead of inlining its own recipe or something.
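For example (a small illustration, not taken from the answer), with optimization enabled GCC folds the call below at compile time because strlen is a recognized built-in; compiling with -fno-builtin-strlen forces a real library call instead:

#include <string.h>

/* With -O1 or higher, GCC compiles this to "return 3" with no call at all;
   with -fno-builtin-strlen it emits an actual call to the library strlen. */
size_t three(void)
{
    return strlen("foo");
}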



When UB isn't visible to the compiler at compile time, you get sane machine code. The machine code has to work for the no-UB case, and even if you wanted to, there's no way for the asm to detect what types the caller used to put data into the pointed-to memory.



Glibc is compiled to a stand-alone static or dynamic library that can't inline with link-time optimization. glibc's build scripts don't create "fat" static libraries containing machine code + gcc GIMPLE internal representation for link-time optimization when inlining into a program. (i.e. libc.a won't participate in -flto link-time optimization into the main program.) Building glibc that way would be potentially unsafe on targets that actually use this .c.



In fact as @zwol comments, LTO can't be used when building glibc itself, because of "brittle" code like this which could break if inlining between glibc source files was possible. (There are some internal uses of strlen, e.g. maybe as part of the printf implementation)




This strlen makes some assumptions:




  • CHAR_BIT is a multiple of 8. True on all GNU systems. POSIX 2001 even guarantees CHAR_BIT == 8. (This looks safe for systems with CHAR_BIT= 16 or 32, like some DSPs; the unaligned-prologue loop will always run 0 iterations if sizeof(long) = sizeof(char) = 1 because every pointer is always aligned and p & sizeof(long)-1 is always zero.) But if you had a non-ASCII character set where chars are 7 bits wide, 0x8080... is the wrong pattern.

  • (maybe) unsigned long is 4 or 8 bytes. Or maybe it would actually work for any size of unsigned long up to 8, and it uses an assert() to check for that.

Those two aren't possible UB, they're just non-portability to some C implementations. This code is (or was) part of the C implementation on platforms where it does work, so that's fine.



The next assumption is potential C UB:




  • An aligned load that contains any valid bytes can't fault, and is safe as long as you ignore the bytes outside the object you actually want. (True in asm on every GNU systems, and on all normal CPUs because memory protection happens with aligned-page granularity. Is it safe to read past the end of a buffer within the same page on x86 and x64? safe in C when the UB isn't visible at compile time. Without inlining, this is the case here. The compiler can't prove that reading past the first 0 is UB; it could be a C char[] array containing 1,2,0,3 for example)

That last point is what makes it safe to read past the end of a C object here. That is pretty much safe even when inlining with current compilers, because I think they don't currently treat that as implying that a path of execution is unreachable. But anyway, the strict aliasing is already a showstopper if you ever let this inline.



Then you'd have problems like the Linux kernel's old unsafe memcpy CPP macro that used pointer-casting to unsigned long (gcc, strict-aliasing, and horror stories).



This strlen dates back to the era when you could get away with stuff like that in general; it used to be pretty much safe without the "only when not inlining" caveat before GCC3.




UB that's only visible when looking across call/ret boundaries can't hurt us. (e.g. calling this on a char buf[] instead of on an array of unsigned long[] cast to a const char*). Once the machine code is set in stone, it's just dealing with bytes in memory. A non-inline function call has to assume that the callee reads any/all memory.




Writing this safely, without strict-aliasing UB



The GCC type attribute may_alias gives a type the same alias-anything treatment as char*. (Suggested by @KonradBorowski.) GCC headers currently use it for x86 SIMD vector types like __m128i so you can always safely do _mm_loadu_si128( (__m128i*)foo ). (See Is `reinterpret_cast`ing between hardware vector pointer and the corresponding type an undefined behavior? for more details about what this does and doesn't mean.)



size_t strlen(const char *char_ptr)
{
    typedef unsigned long __attribute__((may_alias)) aliasing_ulong;

    aliasing_ulong *longword_ptr = (aliasing_ulong *)char_ptr;
    for (;;) {
        unsigned long ulong = *longword_ptr++;   // can safely alias anything
        ...
    }
}




You could also use aligned(1) to express a type with alignof(T) = 1.
typedef unsigned long __attribute__((may_alias, aligned(1))) unaligned_aliasing_ulong;



A portable way to express an aliasing load in ISO is with memcpy, which modern compilers do know how to inline as a single load instruction. e.g.



 unsigned long longword;
memcpy(&longword, char_ptr, sizeof(longword));
char_ptr += sizeof(longword);


This also works for unaligned loads because memcpy works as-if by char-at-a-time access. But in practice modern compilers understand memcpy very well.



The danger here is that if GCC doesn't know for sure that char_ptr is word-aligned, it won't inline it on some platforms that might not support unaligned loads in asm. e.g. MIPS before MIPS64r6, or older ARM. If you got an actual function call to memcpy just to load a word (and leave it in other memory), that would be a disaster. GCC can sometimes see when code aligns a pointer. Or after the char-at-a-time loop that reaches a ulong boundary you could use
p = __builtin_assume_aligned(p, sizeof(unsigned long));



This doesn't avoid the read-past-the-object possible UB, but with current GCC that's not dangerous in practice.




Why hand-optimized C source is necessary: current compilers aren't good enough



Hand-optimized asm can be even better when you want every last drop of performance for a widely-used standard library function. Especially for something like memcpy, but also strlen. In this case it wouldn't be much easier to use C with x86 intrinsics to take advantage of SSE2.



But here we're just talking about a naive vs. bithack C version without any ISA-specific features.



(I think we can take it as a given that strlen is widely enough used that making it run as fast as possible is important. So the question becomes whether we can get efficient machine code from simpler source. No, we can't.)



Current GCC and clang are not capable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. (e.g. it has to be possible to check if the loop will run at least 16 iterations before running the first iteration.) e.g. autovectorizing memcpy is possible (explicit-length buffer) but not strcpy or strlen (implicit-length string), given current compilers.



That includes search loops, or any other loop with a data-dependent if()break as well as a counter.
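To make the distinction concrete (a small illustrative pair, not taken from the answer), the first loop below has a trip count known before entry and gets auto-vectorized by current GCC/clang at -O3, while the second has a data-dependent exit and currently does not:

#include <stddef.h>

/* Counted loop: the trip count n is known before the first iteration,
   so the compiler can vectorize it. */
void add_one(unsigned char *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        a[i] += 1;
}

/* Search loop: the exit condition depends on the data just loaded, so
   current compilers emit scalar byte-at-a-time code. */
size_t naive_strlen(const char *s)
{
    size_t i = 0;
    while (s[i] != '\0')
        i++;
    return i;
}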



ICC (Intel's compiler for x86) can auto-vectorize some search loops, but it still only makes naive byte-at-a-time asm for a simple / naive C strlen like the one OpenBSD's libc uses (Godbolt). (From @Peschke's answer.)



A hand-optimized libc strlen is necessary for performance with current compilers. Going 1 byte at a time (with unrolling maybe 2 bytes per cycle on wide superscalar CPUs) is pathetic when main memory can keep up with about 8 bytes per cycle, and L1d cache can deliver 16 to 64 per cycle. (2x 32-byte loads per cycle on modern mainstream x86 CPUs since Haswell and Ryzen. Not counting AVX512 which can reduce clock speeds just for using 512-bit vectors; which is why glibc probably isn't in a hurry to add an AVX512 version. Although with 256-bit vectors, AVX512VL + BW masked compare into a mask and ktest or kortest could make strlen more hyperthreading friendly by reducing its uops / iteration.)



I'm including non-x86 here, that's the "16 bytes". e.g. most AArch64 CPUs can do at least that, I think, and some certainly more. And some have enough execution throughput for strlen to keep up with that load bandwidth.



Of course programs that work with large strings should usually keep track of lengths to avoid having to redo finding the length of implicit-length C strings very often. But short to medium length performance still benefits from hand-written implementations, and I'm sure some programs do end up using strlen on medium-length strings.







Comments:

  • A few notes: (1) It is not currently possible to compile glibc itself with any compiler other than GCC. (2) It is not currently possible to compile glibc itself with link-time optimizations enabled, because of precisely these sorts of cases, where the compiler will see UB if inlining is allowed to happen. (3) CHAR_BIT == 8 is a POSIX requirement (as of the -2001 rev; see here). (4) The C fallback implementation of strlen is used for some supported CPUs; I believe the most common one is MIPS. – zwol, 14 hours ago

  • @PeterCordes thanks so much for this answer, this really helps me understand some of the things in this question. Hopefully it's helpful to others as well! – Shared, 9 hours ago

  • Interestingly, the strict-aliasing UB could be fixed by making use of the __attribute__((__may_alias__)) attribute (this is non-portable, but it should be fine for glibc). – Konrad Borowski, 7 hours ago

  • @KonradBorowski: oh, great point, I hadn't thought of using that attribute without also vector_size(16), the way __m128i does. The other way to express it is memcpy(&my_long, src, sizeof(my_long)), which is also safe for unaligned loads. GCC does know how to inline that as a single load instruction. – Peter Cordes, 2 hours ago

  • @zwol: Konrad's comment may show us a way to make functions like this safe-ish for inlining with may_alias type attributes. Updated my answer with a section on that. Thanks for the fact-checks :) – Peter Cordes, 57 mins ago



















Answer (54 votes)















It is explained in the comments in the file you linked:



/* Return the length of the null-terminated string STR.  Scan for
   the null terminator quickly by testing four bytes at a time.  */

and:



/* Instead of the traditional loop which tests each character,
   we will test a longword at a time.  The tricky part is testing
   if *any of the four* bytes in the longword in question are zero.  */


In C, it is possible to reason in detail about the efficiency.



It is less efficient to iterate through individual characters looking for a null than it is to test more than one byte at a time, as this code does.



The additional complexity comes from needing to ensure that the string under test is aligned in the right place to start testing more than one byte at a time (along a longword boundary, as described in the comments), and from needing to ensure that the assumptions about the sizes of the datatypes are not violated when the code is used.
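For instance (a small illustrative sketch, not the library's exact code), the alignment handling boils down to advancing one byte at a time until the address has none of the low bits set, after which word-sized loads are naturally aligned:

#include <stdint.h>

/* Advance p one byte at a time until it is aligned for unsigned long
   accesses and return the aligned pointer. A real strlen must also check
   each skipped byte for '\0' along the way, as the glibc code does. */
static const char *align_to_word(const char *p)
{
    while (((uintptr_t)p & (sizeof(unsigned long) - 1)) != 0)
        ++p;
    return p;
}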



In most (but not all) modern software development, this attention to efficiency detail is not necessary, or not worth the cost of extra code complexity.



One place where it does make sense to pay attention to efficiency like this is in standard libraries, like the example you linked.




If you want to read more about word boundaries, see this question and this excellent Wikipedia page.








































    29















    In addition to the great answers here, I want to point out that the code linked in the question is for GNU's implementation of strlen.



    The OpenBSD implementation of strlen is very similar to the code proposed in the question. The complexity of an implementation is determined by the author.



    ...
    #include <string.h>

    size_t
    strlen(const char *str)
    {
        const char *s;

        for (s = str; *s; ++s)
            ;
        return (s - str);
    }

    DEF_STRONG(strlen);



    EDIT: The OpenBSD code I linked above looks to be a fallback implementation for ISAs that don't have their own asm implementation. There are different implementations of strlen depending on architecture. The code for amd64 strlen, for example, is asm. This is similar to Peter Cordes' comments/answer pointing out that the non-fallback GNU implementations are asm as well.
































    • 4





      That makes a very nice illustration of the different values being optimized in OpenBSD vs GNU tools.

      – Jason
      yesterday






    • 10





      It's glibc's portable fallback implementation. All the major ISAs have hand-written asm implementations in glibc, using SIMD when it helps (e.g. on x86). See code.woboq.org/userspace/glibc/sysdeps/x86_64/multiarch/… and code.woboq.org/userspace/glibc/sysdeps/aarch64/multiarch/…

      – Peter Cordes
      yesterday






    • 4





      Even the OpenBSD version has a flaw that the original avoids! The behaviour of s - str is undefined if the result is not representable in ptrdiff_t.

      – Antti Haapala
      yesterday






    • 1





      @AnttiHaapala: In GNU C, the max object size is PTRDIFF_MAX. But it's still possible to mmap more memory than that on Linux at least (e.g. in a 32-bit process under an x86-64 kernel I could mmap about 2.7GB contiguous before I started getting failures). IDK about OpenBSD; the kernel could make it impossible to reach that return without segfaulting or stopping within the size. But yes, you'd think defensive coding that avoids the theoretical C UB would be something OpenBSD would want to do. Even though strlen can't inline and real compilers will just compile it to a subtract.

      – Peter Cordes
      yesterday






    • 2





      @PeterCordes exactly. Same thing in OpenBSD, e.g. i386 assembly: cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/libc/arch/i386/string/…

      – dchest
      14 hours ago



















    26















    You want code to be correct, maintainable, and fast. These factors have different importance:



    "correct" is absolutely essential.



    "maintainable" depends on how much you are going to maintain the code: strlen has been a Standard C library function for over 40 years. It's not going to change. Maintainability is therefore quite unimportant - for this function.



    "Fast": In many applications, strcpy, strlen etc. use a significant amount of the execution time. To achieve the same overall speed gain as this complicated, but not very complicated implementation of strlen by improving the compiler would take heroic efforts.



    Being fast has another advantage: when programmers find out that calling "strlen" is the fastest method they have for measuring the number of bytes in a string, they are no longer tempted to write their own code to make things faster.



    So for strlen, speed is much more important, and maintainability much less important, than for most code that you will ever write.



    Why must it be so complicated? Say you have a 1,000 byte string. The simple implementation will examine 1,000 bytes. A current implementation would likely examine 64-bit words at a time, which means 125 eight-byte words. It might even use vector instructions examining, say, 32 bytes at a time, which would be even more complicated and even faster. Using vector instructions leads to code that is a bit more complicated but quite straightforward; checking whether one of the eight bytes in a 64-bit word is zero requires some clever tricks. So for medium to long strings this code can be expected to be about four times faster. For a function as important as strlen, that's worth writing a more complex function.
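    For a taste of the vector approach, here is a hedged sketch using x86 SSE2 intrinsics, checking 16 bytes per step (an illustration under the stated assumptions, not glibc's code; __builtin_ctz is a GCC/Clang extension):

    #include <emmintrin.h>   /* SSE2 intrinsics */
    #include <stddef.h>

    /* Assumes s is 16-byte aligned, so each load stays within one page. */
    size_t sse2_strlen_aligned(const char *s)
    {
        const __m128i zero = _mm_setzero_si128();
        for (size_t i = 0; ; i += 16) {
            __m128i chunk = _mm_load_si128((const __m128i *)(s + i));
            int mask = _mm_movemask_epi8(_mm_cmpeq_epi8(chunk, zero));
            if (mask != 0)                                   /* some byte was 0 */
                return i + (size_t)__builtin_ctz(mask);      /* first zero byte */
        }
    }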



    PS. The code is not very portable. But it's part of the Standard C library, which is part of the implementation - it need not be portable.



    PPS. Someone posted an example where a debugging tool complained about accessing bytes past the end of a string. An implementation can be designed that guarantees the following: if p is a valid pointer to a byte, then any access to a byte in the same aligned block that would be undefined behaviour according to the C standard will return an unspecified value.



    PPPS. Intel has added instructions to their later processors that form a building block for the strstr() function (finding a substring in a string). Their description is mind boggling, but they can make that particular function probably 100 times faster. (Basically, given an array a containing "Hello, world!" and an array b starting with 16 bytes "HelloHelloHelloH" and containing more bytes, it figures out that the string a doesn't occur in b earlier than starting at index 15).

































    • Or... If I'm finding that I'm doing a lot of string based processing and there is a bottleneck, I'm probably going to implement my own version of Pascal Strings instead of improving strlen...

      – Baldrickk
      15 hours ago











    • Nobody asks you to improve strlen. But making it good enough avoids nonsense like people implementing their own strings.

      – gnasher729
      8 hours ago


















    23















    In short, this is a performance optimization the standard library can make by knowing which compiler it is compiled with - you shouldn't write code like this, unless you are writing a standard library and can depend on a specific compiler. Specifically, it processes a word's worth of bytes at a time - 4 on 32-bit platforms, 8 on 64-bit platforms. This means it can be 4 or 8 times faster than naïve byte iteration.



    To explain how this works, consider the following image. Assume a 32-bit platform here (4-byte alignment).





    Let's say that the letter "H" of the "Hello, world!" string was provided as an argument to strlen. Because the CPU likes having things aligned in memory (ideally, address % sizeof(size_t) == 0), the bytes before the alignment boundary are processed byte by byte, using the slow method.



    Then, for each alignment-sized chunk, it checks whether any of the bytes within an integer is zero by calculating (longbits - 0x01010101) & 0x80808080 != 0. This calculation has a false positive when at least one of the bytes is higher than 0x80, but more often than not it should work. If that's not the case (as in the yellow area), the length is increased by the alignment size.



    If any of the bytes within an integer turns out to be zero (or 0x81), then the string is checked byte by byte to determine the position of the zero.



    This can make an out-of-bounds access; however, because it stays within an aligned word, it is more likely than not to be fine: memory mapping units usually don't have byte-level precision.
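    As a standalone illustration of that false positive (example values chosen here, not taken from the linked code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 0x81 has its high bit set, so the test fires even though no byte
           is zero - this is the false positive that forces the re-check. */
        uint32_t no_zero_byte  = 0x41814242u;  /* contains 0x81, but no zero byte */
        uint32_t has_zero_byte = 0x41004242u;  /* one byte really is zero */

        printf("%d\n", ((no_zero_byte  - 0x01010101u) & 0x80808080u) != 0); /* 1 (false positive) */
        printf("%d\n", ((has_zero_byte - 0x01010101u) & 0x80808080u) != 0); /* 1 (real hit) */
        return 0;
    }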




























    • 17





      Upvoted for excellent graphical representation alone :D

      – Antti Haapala
      18 hours ago











    • This implementation is part of glibc. The GNU system does memory protection with page granularity. So yes, an aligned load that includes any valid bytes is safe.

      – Peter Cordes
      15 hours ago











    • size_t is not guaranteed to be aligned.

      – JL2210
      5 hours ago


















    20















    Briefly: checking a string byte by byte will potentially be slow on architectures that can fetch larger amounts of data at a time.



    If the check for null termination can be done on a 32- or 64-bit basis, it reduces the number of checks that have to be performed. That's what the linked code attempts to do, with a specific system in mind. They make assumptions about addressing, alignment, cache use, non-standard compiler setups, etc.



    Reading byte by byte as in your example would be a sensible approach on an 8-bit CPU, or when writing a portable library in standard C.



    Looking at C standard libraries for advice on how to write fast/good code isn't a good idea, because it will be non-portable and rely on non-standard assumptions or poorly-defined behavior. If you are a beginner, reading such code will likely be more harmful than educational.


























    • 1





      Of course the optimizer is highly likely to unroll or auto-vectorize this loop, and the pre-fetcher can trivially detect this access pattern. Whether these tricks actually matter on modern processors would need to be tested. If there is a win to be had it is probably using vector instructions.

      – russbishop
      yesterday






    • 3





      @russbishop: You'd hope so, but no. GCC and clang are completely incapable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. That includes search loops, or any other loop with a data-dependent if()break. ICC can auto-vectorize such loops, but IDK how well it does with a naive strlen. And yes, SSE2 pcmpeqb / pmovmskb is very good for strlen, testing 16 bytes at a time. code.woboq.org/userspace/glibc/sysdeps/x86_64/strlen.S.html is glibc's SSE2 version. See also this Q&A.

      – Peter Cordes
      yesterday



















    0















    One important thing not mentioned by the other answers is that the FSF is very cautious about ensuring that proprietary code does not make it into GNU projects. In the GNU Coding Standards under Referring to Proprietary Programs, there is a warning about organising your implementation in a way that it cannot be confused with existing proprietary code:




    Don’t in any circumstances refer to Unix source code for or during your work on GNU! (Or to any other proprietary programs.)



    If you have a vague recollection of the internals of a Unix program, this does not absolutely mean you can’t write an imitation of it, but do try to organize the imitation internally along different lines, because this is likely to make the details of the Unix version irrelevant and dissimilar to your results.



    For example, Unix utilities were generally optimized to minimize memory use; if you go for speed instead, your program will be very different.




    (Emphasis mine.)































    • How does this answer the question?

      – JL2210
      5 hours ago


















    0















    Yes, strlen requires optimization. It's a function that is called a lot, and if it is slow (as your implementation is), your program could run a full 2-3 seconds slower.



    The optimized C code (bitwise AND-ing with some magic values) is just part of the portable fallback implementation, and is often replaced with machine-specific assembly code from the sysdeps directory. Those implementations are often a lot faster (running in a few milliseconds or less).



    Your (naive) code iterates over every single character of the string, and jumps around every time the character is not null. This can be very slow for large inputs (upwards of ten seconds) and as such is not very good for program speed. However, the optimized code divides the number of per-byte checks by the size of long on your platform (usually 8 or 4); as such, it is much faster than the naive C implementation.












































      159















      TL;DR: you don't need to, and should never, write code like that - especially if you're not a C compiler / standard library vendor. It is code used to implement strlen with some very questionable speed hacks and assumptions:




      • unsigned long is either 4 or 8 bytes

      • bytes are 8 bits

      • a pointer can be cast to unsigned long long and not uintptr_t

      • one can align the pointer by anding the lower-order bits

      • one can break strict aliasing by addressing the string as unsigned longs

      • one can read past the end of array without any ill effects.

      What is more, a good compiler could even replace code written as



      size_t stupid_strlen(const char s[]) {
          size_t i;
          for (i = 0; s[i] != '\0'; i++)
              ;
          return i;
      }


      (notice that it has to be a type compatible with size_t) with a simple call to the builtin strlen - but it would be unlikely to notice that the longer code should do the same.




      The strlen function is described by C11 7.24.6.3 as:




      Description



      1. The strlen function computes the length of the string pointed to by s.

      Returns



      1. The strlen function returns the number of characters that precede the terminating null character.



      Now, if the string pointed to by s was in an array of characters just long enough to contain the string and the terminating NUL, the behaviour will be undefined if we access the string past the null terminator, for example in



      char *str = "hello world"; // or
      char array[] = "hello world";


      So really the only way in C to implement this correctly is the way it is written in your question, except for trivial transformations - you can pretend to be faster by unrolling the loop etc, but it still needs to be done one byte at a time.




      The linked strlen implementation first checks the bytes individually until the pointer is pointing to the natural 4 or 8 byte alignment boundary of the unsigned long. The C standard says that accessing a pointer that is not properly aligned has undefined behaviour, so this absolutely has to be done for the next dirty trick to be even dirtier.



      Now comes the dirty part: the code breaks the promise and reads 4 or 8 8-bit bytes at a time (a long int), and uses a bit trick with unsigned addition to quickly figure out if there were any zero bytes within those 4 or 8 bytes - it uses a specially crafted number that would cause the carry bit to change bits that are caught by a bit mask. In essence this figures out whether any of the 4 or 8 bytes in the mask are zeroes, supposedly faster than looping through each of these bytes would be. Finally there is a loop at the end to figure out which byte was the first zero, if any, and to return the result.



      The biggest problem is that in sizeof (unsigned long) - 1 times out of sizeof (unsigned long) cases it will read past the end of the string - only if the null byte is in the last accessed byte (i.e. in little-endian the most significant, and in big-endian the least significant), does it not access the array out of bounds!




      The code, even though used to implement strlen in a C standard library, is bad code. It has several implementation-defined and undefined aspects in it and it should not be used anywhere instead of the system-provided strlen - I renamed the function to the_strlen here and added the following main:



      int main(void) {
          char buf[12];
          printf("%zu\n", the_strlen(fgets(buf, 12, stdin)));
      }



      The buffer is carefully sized so that it can hold exactly the hello world string and the terminator. However on my 64-bit processor the unsigned long is 8 bytes, so the access to the latter part would exceed this buffer.



      If I now compile with -fsanitize=undefined and -fsanitize=address and run the resulting program, I get:



      % ./a.out
      hello world
      =================================================================
      ==8355==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffffe63a3f8 at pc 0x55fbec46ab6c bp 0x7ffffe63a350 sp 0x7ffffe63a340
      READ of size 8 at 0x7ffffe63a3f8 thread T0
      #0 0x55fbec46ab6b in the_strlen (.../a.out+0x1b6b)
      #1 0x55fbec46b139 in main (.../a.out+0x2139)
      #2 0x7f4f0848fb96 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x21b96)
      #3 0x55fbec46a949 in _start (.../a.out+0x1949)

      Address 0x7ffffe63a3f8 is located in stack of thread T0 at offset 40 in frame
      #0 0x55fbec46b07c in main (.../a.out+0x207c)

      This frame has 1 object(s):
      [32, 44) 'buf' <== Memory access at offset 40 partially overflows this variable
      HINT: this may be a false positive if your program uses some custom stack unwind mechanism or swapcontext
      (longjmp and C++ exceptions *are* supported)
      SUMMARY: AddressSanitizer: stack-buffer-overflow (.../a.out+0x1b6b) in the_strlen
      Shadow bytes around the buggy address:
      0x10007fcbf420: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf430: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf440: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf450: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf460: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      =>0x10007fcbf470: 00 00 00 00 00 00 00 00 00 00 f1 f1 f1 f1 00[04]
      0x10007fcbf480: f2 f2 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      0x10007fcbf4c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      Shadow byte legend (one shadow byte represents 8 application bytes):
      Addressable: 00
      Partially addressable: 01 02 03 04 05 06 07
      Heap left redzone: fa
      Freed heap region: fd
      Stack left redzone: f1
      Stack mid redzone: f2
      Stack right redzone: f3
      Stack after return: f5
      Stack use after scope: f8
      Global redzone: f9
      Global init order: f6
      Poisoned by user: f7
      Container overflow: fc
      Array cookie: ac
      Intra object redzone: bb
      ASan internal: fe
      Left alloca redzone: ca
      Right alloca redzone: cb
      ==8355==ABORTING


      i.e. bad things happened.






      – Antti Haapala (answered 2 days ago; edited yesterday by S. Sharma)






















      • 71





        Re: "very questionable speed hacks and assumptions" -- that is, very questionable in portable code. The standard library is written for a particular compiler/hardware combination, with knowledge of the actual behavior of things that the language definition leaves as undefined. Yes, most people should not be writing code like that, but in the context of implementing the standard library non-portable is not inherently bad.

        – Pete Becker
        17 hours ago







      • 1





        It is worth noting that it doesn't actually check if one of the bytes is definitely 0, merely that one is likely to be zero. The masking system used only properly checks roughly 7 out of the 8 bits per byte. The detailed scan that is used to determine which byte was the 0 (if any) handles the false positive case by falling back to the surrounding for(;;)

        – Edward KMETT
        16 hours ago






      • 2





        Agree, never write things like this yourself. Or almost never. Premature optimization is the source of all evil. (In this case it could actually be motivated, though.) If you end up doing a lot of strlen() calls on the same very long string, your application could perhaps be written differently. You might, for example, save the string length in a variable already when the string is created, and not need to call strlen() at all.

        – ghellquist
        16 hours ago






      • 1





        @ghellquist that's what he said

        – Antti Haapala
        16 hours ago






      • 23





        @ghellquist: Optimizing a frequently-used library call is hardly "premature optimization".

        – jamesqf
        12 hours ago















      62















      There's been a lot of (slightly or entirely) wrong guesses in comments about some details / background for this.



      You're looking at glibc's optimized C fallback optimized implementation. (For ISAs that don't have a hand-written asm implementation). Or an old version of that code, which is still in the glibc source tree. https://code.woboq.org/userspace/glibc/string/strlen.c.html is a code-browser based on the current glibc git tree. Apparently it is still used by a few mainstream glibc targets, including MIPS. (Thanks @zwol).



      On popular ISAs like x86 and ARM, glibc uses hand-written asm



      So the incentive to change anything about this code is lower than you might think.



      This bithack code (https://graphics.stanford.edu/~seander/bithacks.html#ZeroInWord) isn't what actually runs on your server/desktop/laptop/smartphone. It's better than a naive byte-at-a-time loop, but even this bithack is pretty bad compared to efficient asm for modern CPUs (especially x86 where AVX2 SIMD allows checking 32 bytes with a couple instructions, allowing 32 to 64 bytes per clock cycle in the main loop if data is hot in L1d cache on modern CPUs with 2/clock vector load and ALU throughput. i.e. for medium-sized strings where startup overhead doesn't dominate.)



      glibc uses dynamic linking tricks to resolve strlen to an optimal version for your CPU, so even within x86 there's an SSE2 version (16-byte vectors, baseline for x86-64) and an AVX2 version (32-byte vectors).



      x86 has efficient data transfer between vector and general-purpose registers, which makes it uniquely(?) good for using SIMD to speed up functions on implicit-length strings where the loop control is data dependent. pcmpeqb / pmovmskb makes it possible to testing 16 separate bytes at a time.



      glibc has an AArch64 version like that using AdvSIMD, and a version for AArch64 CPUs where vector->GP registers stalls the pipeline, so it does actually use this bithack. But uses count-leading-zeros to find the byte-within-register once it gets a hit, and takes advantage of AArch64's efficient unaligned accesses after checking for page-crossing.



      Also related: Why is this code 6.5x slower with optimizations enabled? has some more details about what's fast vs. slow in x86 asm for strlen with a large buffer and a simple asm implementation that might be good for gcc to know how to inline. (Some gcc versions unwisely inline rep scasb which is very slow, or a 4-byte-at-a-time bithack like this. So GCC's inline-strlen recipe needs updating or disabling.)



      Asm doesn't have C-style "undefined behaviour"; it's safe to access bytes in memory however you like, and an aligned load that includes any valid bytes can't fault. Memory protection happens with aligned-page granularity; aligned accesses narrower than that can't cross a page boundary. Is it safe to read past the end of a buffer within the same page on x86 and x64? The same reasoning applies to the machine-code that this C hack gets compilers to create for a stand-alone non-inline implementation of this function.



      When a compiler emits code to call an unknown non-inline function, it has to assume that function modifies any/all global variables and any memory it might possibly have a pointer to. i.e. everything except locals that haven't had their address escape have to be in sync in memory across the call. This applies to functions written in asm, obviously, but also to library functions. If you don't enable link-time optimization, it even applies to separate translation units (source files).




      Why this is safe as part of glibc but not otherwise.



      The most important factor is that this strlen can't inline into anything else. It's not safe for that; it contains strict-aliasing UB (reading char data through an unsigned long*). char* is allowed to alias anything else but the reverse is not true.



      This is a library function for an ahead-of-time compiled library (glibc). It won't get inlined with link-time-optimization into callers. This means it just has to compile to safe machine code for a stand-alone version of strlen. It doesn't have to be portable / safe C.



      The GNU C library only has to compile with GCC. Apparently it's not supported to compile it with clang or ICC, even though they support GNU extensions. GCC is an ahead-of-time compilers that turn a C source file into an object file of machine code. Not an interpreter, so unless it inlines at compile time, bytes in memory are just bytes in memory. i.e. strict-aliasing UB isn't dangerous when the accesses with different types happen in different functions that don't inline into each other.



      Remember that strlen's behaviour is defined by the ISO C standard. That function name specifically is part of the implementation. Compilers like GCC even treat the name as a built-in function unless you use -fno-builtin-strlen, so strlen("foo") can be a compile-time constant 3. The definition in the library is only used when gcc decides to actually emit a call to it instead of inlining its own recipe or something.



      When UB isn't visible to the compiler at compile time, you get sane machine code. The machine code has to work for the no-UB case, and even if you wanted to, there's no way for the asm to detect what types the caller used to put data into the pointed-to memory.



      Glibc is compiled to a stand-alone static or dynamic library that can't inline with link-time optimization. glibc's build scripts don't create "fat" static libraries containing machine code + gcc GIMPLE internal representation for link-time optimization when inlining into a program. (i.e. libc.a won't participate in -flto link-time optimization into the main program.) Building glibc that way would be potentially unsafe on targets that actually use this .c.



      In fact as @zwol comments, LTO can't be used when building glibc itself, because of "brittle" code like this which could break if inlining between glibc source files was possible. (There are some internal uses of strlen, e.g. maybe as part of the printf implementation)




      This strlen makes some assumptions:




      • CHAR_BIT is a multiple of 8. True on all GNU systems. POSIX 2001 even guarantees CHAR_BIT == 8. (This looks safe for systems with CHAR_BIT = 16 or 32, like some DSPs; the unaligned-prologue loop will always run 0 iterations if sizeof(long) = sizeof(char) = 1, because every pointer is always aligned and p & (sizeof(long)-1) is always zero.) But if you had a non-ASCII character set where chars are 7 bits wide, 0x8080... is the wrong pattern.

      • (maybe) unsigned long is 4 or 8 bytes. Or maybe it would actually work for any size of unsigned long up to 8, and it uses an assert() to check for that.

      Those two aren't possible UB, they're just non-portability to some C implementations. This code is (or was) part of the C implementation on platforms where it does work, so that's fine.



      The next assumption is potential C UB:




      • An aligned load that contains any valid bytes can't fault, and is safe as long as you ignore the bytes outside the object you actually want. (True in asm on every GNU system, and on all normal CPUs, because memory protection happens with aligned-page granularity. Is it safe to read past the end of a buffer within the same page on x86 and x64? It's also safe in C when the UB isn't visible at compile time; without inlining, that's the case here. The compiler can't prove that reading past the first 0 is UB; it could be a C char[] array containing {1,2,0,3}, for example.)

      That last point is what makes it safe to read past the end of a C object here. That is pretty much safe even when inlining with current compilers, because I think they don't currently treat it as implying that a path of execution must be unreachable. But anyway, the strict aliasing is already a showstopper if you ever let this inline.
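      As a hedged sketch of the asm-level reasoning (hypothetical helper, not glibc source): rounding a pointer down to a word boundary and loading that whole word can't touch a different page than the byte itself, even though in C it's still the read-outside-the-object assumption just described.

      #include <stdint.h>

      unsigned long load_containing_word(const char *p)
      {
          /* Round down to an unsigned long boundary; the load stays within the
             same page as *p, so it can't fault on normal hardware.  In ISO C it's
             still the "read past the object" assumption (plus an aliasing cast),
             which is why it only belongs in code that never inlines into callers. */
          uintptr_t aligned = (uintptr_t)p & ~(uintptr_t)(sizeof(unsigned long) - 1);
          return *(const unsigned long *)aligned;
      }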



      Then you'd have problems like the Linux kernel's old unsafe memcpy CPP macro that used pointer-casting to unsigned long (gcc, strict-aliasing, and horror stories).



      This strlen dates back to the era when you could get away with stuff like that in general; it used to be pretty much safe without the "only when not inlining" caveat before GCC3.




      UB that's only visible when looking across call/ret boundaries can't hurt us. (e.g. calling this on a char buf[] instead of on an array of unsigned long[] cast to a const char*). Once the machine code is set in stone, it's just dealing with bytes in memory. A non-inline function call has to assume that the callee reads any/all memory.




      Writing this safely, without strict-aliasing UB



      The GCC type attribute may_alias gives a type the same alias-anything treatment as char*. (Suggested by @KonradBorowski). GCC headers currently use it for x86 SIMD vector types like __m128i so you can always safely do _mm_loadu_si128( (__m128i*)foo ). (See Is `reinterpret_cast`ing between hardware vector pointer and the corresponding type an undefined behavior? for more details about what this does and doesn't mean.)



      size_t strlen (const char *char_ptr)
      {
          typedef unsigned long __attribute__((may_alias)) aliasing_ulong;

          aliasing_ulong *longword_ptr = (aliasing_ulong *)char_ptr;
          for (;;) {
              unsigned long ulong = *longword_ptr++;  // can safely alias anything
              ...
          }
      }




      You could also use aligned(1) to express a type with alignof(T) == 1.
      typedef unsigned long __attribute__((may_alias, aligned(1))) unaligned_aliasing_ulong;
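      A small sketch of how that typedef might be used (hypothetical function name): GCC then emits an unaligned-safe load, or byte loads on targets without hardware unaligned loads.

      unsigned long load_word_unaligned(const char *p)
      {
          /* same typedef as above, local to this sketch: no alignment or aliasing assumption */
          typedef unsigned long __attribute__((may_alias, aligned(1))) unaligned_aliasing_ulong;
          return *(const unaligned_aliasing_ulong *)p;
      }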



      A portable way to express an aliasing load in ISO C is with memcpy, which modern compilers know how to inline as a single load instruction. e.g.



       unsigned long longword;
      memcpy(&longword, char_ptr, sizeof(longword));
      char_ptr += sizeof(longword);


      This also works for unaligned loads because memcpy works as-if by char-at-a-time access. But in practice modern compilers understand memcpy very well.



      The danger here is that if GCC doesn't know for sure that char_ptr is word-aligned, it won't inline the memcpy on some platforms that don't support unaligned loads in asm, e.g. MIPS before MIPS64r6, or older ARM. Getting an actual function call to memcpy just to load a word (and leave it in other memory) would be a disaster. GCC can sometimes see when code aligns a pointer, or, after the char-at-a-time prologue loop reaches a ulong boundary, you could use
      p = __builtin_assume_aligned(p, sizeof(unsigned long));



      This doesn't avoid the read-past-the-object possible UB, but with current GCC that's not dangerous in practice.
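      Putting those pieces together, here's a minimal hedged sketch (hypothetical name strlen_word_at_a_time; assumes 8-bit chars and a 64-bit unsigned long; not glibc's actual code) of an alignment prologue, a memcpy word load, and the zero-in-word bithack:

      #include <stddef.h>
      #include <stdint.h>
      #include <string.h>

      size_t strlen_word_at_a_time(const char *s)
      {
          const char *p = s;

          /* Byte-at-a-time prologue until p is word-aligned (also catches an early '\0'). */
          while (((uintptr_t)p & (sizeof(unsigned long) - 1)) != 0) {
              if (*p == '\0')
                  return (size_t)(p - s);
              ++p;
          }
          p = __builtin_assume_aligned(p, sizeof(unsigned long));  /* GCC extension */

          for (;;) {
              unsigned long w;
              memcpy(&w, p, sizeof w);            /* inlines to one aligned word load */
              /* Zero-in-word bithack: true iff some byte of w is 0 (64-bit magic constants). */
              if ((w - 0x0101010101010101UL) & ~w & 0x8080808080808080UL)
                  break;
              p += sizeof w;
          }
          while (*p != '\0')                      /* locate the terminator inside that word */
              ++p;
          return (size_t)(p - s);
      }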




      Why hand-optimized C source is necessary: current compilers aren't good enough



      Hand-optimized asm can be even better when you want every last drop of performance for a widely-used standard library function. Especially for something like memcpy, but also strlen. In this case it wouldn't be much easier to use C with x86 intrinsics to take advantage of SSE2.



      But here we're just talking about a naive vs. bithack C version without any ISA-specific features.



      (I think we can take it as a given that strlen is widely enough used that making it run as fast as possible is important. So the question becomes whether we can get efficient machine code from simpler source. No, we can't.)



      Current GCC and clang are not capable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. (e.g. it has to be possible to check if the loop will run at least 16 iterations before running the first iteration.) e.g. autovectorizing memcpy is possible (explicit-length buffer) but not strcpy or strlen (implicit-length string), given current compilers.



      That includes search loops, or any other loop with a data-dependent if()break as well as a counter.
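      A hedged illustration of the difference (hypothetical functions): current compilers will typically vectorize the first loop but not the second.

      #include <stddef.h>

      void copy_bytes(char *dst, const char *src, size_t n)
      {
          for (size_t i = 0; i < n; i++)      /* trip count known up front: vectorizable */
              dst[i] = src[i];
      }

      size_t naive_len(const char *s)
      {
          size_t i = 0;
          while (s[i] != '\0')                /* data-dependent exit: GCC/clang leave this scalar */
              i++;
          return i;
      }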



      ICC (Intel's compiler for x86) can auto-vectorize some search loops, but still only makes naive byte-at-a-time asm for a simple / naive C strlen like OpenBSD's libc uses. (Godbolt). (From @Peske's answer).



      A hand-optimized libc strlen is necessary for performance with current compilers. Going 1 byte at a time (with unrolling, maybe 2 bytes per cycle on wide superscalar CPUs) is pathetic when main memory can keep up with about 8 bytes per cycle, and L1d cache can deliver 16 to 64 bytes per cycle. (2x 32-byte loads per cycle on modern mainstream x86 CPUs since Haswell and Ryzen. Not counting AVX512, which can reduce clock speeds just for using 512-bit vectors; which is why glibc probably isn't in a hurry to add an AVX512 version. Although with 256-bit vectors, AVX512VL + BW masked compare into a mask and ktest or kortest could make strlen more hyperthreading-friendly by reducing its uops / iteration.)



      The "16 bytes" figure includes non-x86: most AArch64 CPUs can load at least that much per cycle, I think, and some certainly more. And some have enough execution throughput for strlen to keep up with that load bandwidth.



      Of course, programs that work with large strings should usually keep track of lengths to avoid re-finding the length of implicit-length C strings very often. But short-to-medium-length performance still benefits from hand-written implementations, and I'm sure some programs do end up using strlen on medium-length strings.






      • 5





        A few notes: (1) It is not currently possible to compile glibc itself with any compiler other than GCC. (2) It is not currently possible to compile glibc itself with link-time optimizations enabled, because of precisely these sorts of cases, where the compiler will see UB if inlining is allowed to happen. (3) CHAR_BIT == 8 is a POSIX requirement (as of the -2001 rev; see here). (4) The C fallback implementation of strlen is used for some supported CPUs, I believe the most common one is MIPS.

        – zwol
        14 hours ago











      • @PeterCordes thanks so much for this answer, this really helps me understand some of the things in this question. Hopefully it's helpful to others as well!

        – Shared
        9 hours ago






      • 1





        Interestingly, the strict-aliasing UB could be fixed by making use of __attribute__((__may_alias__)) attribute (this is non-portable, but it should be fine for glibc).

        – Konrad Borowski
        7 hours ago











      • @KonradBorowski: oh great point, I hadn't thought of using that attribute without also vector_size(16), the way __m128i does. The other way to express it is memcpy(&my_long, src, sizeof(my_long)), which is also safe for unaligned loads. GCC does know how to inline that as a single load instruction.

        – Peter Cordes
        2 hours ago











      • @zwol: Konrad's comment may show us a way to make functions like this safe-ish for inlining with may_alias type attributes. Updated my answer with a section on that. Thanks for the fact-checks :)

        – Peter Cordes
        57 mins ago
















      54















      It is explained in the comments in the file you linked:



       27 /* Return the length of the null-terminated string STR. Scan for
      28 the null terminator quickly by testing four bytes at a time. */


      and:



       73 /* Instead of the traditional loop which tests each character,
      74 we will test a longword at a time. The tricky part is testing
      75 if *any of the four* bytes in the longword in question are zero. */


      In C, it is possible to reason in detail about the efficiency.



      It is less efficient to iterate through individual characters looking for a null than it is to test more than one byte at a time, as this code does.



      The additional complexity comes from needing to ensure that the string under test is aligned in the right place to start testing more than one byte at a time (along a longword boundary, as described in the comments), and from needing to ensure that the assumptions about the sizes of the datatypes are not violated when the code is used.
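
      As a rough illustration of that shape (align first, then test one longword per iteration), here is a minimal sketch. It assumes a 32-bit longword, uses the exact "any byte zero?" bit test, and ignores the strict-aliasing concerns discussed elsewhere on this page, so it is a simplification rather than the actual glibc code:

      #include <stddef.h>
      #include <stdint.h>

      size_t strlen_words(const char *str)
      {
          const char *p = str;

          /* Prologue: byte-by-byte until p sits on a 4-byte boundary. */
          while ((uintptr_t)p % sizeof(uint32_t) != 0) {
              if (*p == '\0')
                  return (size_t)(p - str);
              p++;
          }

          /* Main loop: test four bytes at a time. */
          const uint32_t *wp = (const uint32_t *)p;
          for (;;) {
              uint32_t w = *wp;
              /* Nonzero only if some byte of w is zero (no false positives). */
              if ((w - 0x01010101u) & ~w & 0x80808080u)
                  break;
              wp++;
          }

          /* A zero byte is somewhere in *wp; find its exact position. */
          p = (const char *)wp;
          while (*p != '\0')
              p++;
          return (size_t)(p - str);
      }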



      In most (but not all) modern software development, this attention to efficiency detail is not necessary, or not worth the cost of extra code complexity.



      One place where it does make sense to pay attention to efficiency like this is in standard libraries, like the example you linked.




      If you want to read more about word boundaries, see this question and this excellent Wikipedia page.

      – Timothy Jones
      answered 2 days ago, edited 2 days ago

              29















              In addition to the great answers here, I want to point out that the code linked in the question is for GNU's implementation of strlen.



              The OpenBSD implementation of strlen is very similar to the code proposed in the question. The complexity of an implementation is determined by the author.



              ...
              #include <string.h>

              size_t
              strlen(const char *str)
              {
                      const char *s;

                      for (s = str; *s; ++s)
                              ;
                      return (s - str);
              }
              DEF_STRONG(strlen);



              EDIT: The OpenBSD code I linked above looks to be a fallback implementation for ISAs that don't have their own asm implementation. There are different implementations of strlen depending on architecture. The code for amd64 strlen, for example, is asm. Similar to PeterCordes' comments/answer pointing out that the non-fallback GNU implementations are asm as well.






              – Peschke (new contributor)
              answered yesterday, edited 8 hours ago by Jean-François Fabre

              • 4





                That makes a very nice illustration of the different values being optimized in OpenBSD vs GNU tools.

                – Jason
                yesterday






              • 10





                It's glibc's portable fallback implementation. All the major ISAs have hand-written asm implementations in glibc, using SIMD when it helps (e.g. on x86). See code.woboq.org/userspace/glibc/sysdeps/x86_64/multiarch/… and code.woboq.org/userspace/glibc/sysdeps/aarch64/multiarch/…

                – Peter Cordes
                yesterday






              • 4





                Even the OpenBSD version has a flaw that the original avoids! The behaviour of s - str is undefined if the result is not representable in ptrdiff_t.

                – Antti Haapala
                yesterday






              • 1





                @AnttiHaapala: In GNU C, the max object size is PTRDIFF_MAX. But it's still possible to mmap more memory than that on Linux at least (e.g. in a 32-bit process under an x86-64 kernel I could mmap about 2.7GB contiguous before I started getting failures). IDK about OpenBSD; the kernel could make it impossible to reach that return without segfaulting or stopping within the size. But yes, you'd think defensive coding that avoids the theoretical C UB would be something OpenBSD would want to do. Even though strlen can't inline and real compilers will just compile it to a subtract.

                – Peter Cordes
                yesterday






              • 2





                @PeterCordes exactly. Same thing in OpenBSD, e.g. i386 assembly: cvsweb.openbsd.org/cgi-bin/cvsweb/src/lib/libc/arch/i386/string/…

                – dchest
                14 hours ago
              26















              You want code to be correct, maintainable, and fast. These factors have different importance:



              "correct" is absolutely essential.



              "maintainable" depends on how much you are going to maintain the code: strlen has been a Standard C library function for over 40 years. It's not going to change. Maintainability is therefore quite unimportant - for this function.



              "Fast": In many applications, strcpy, strlen etc. use a significant amount of the execution time. To achieve the same overall speed gain as this complicated, but not very complicated implementation of strlen by improving the compiler would take heroic efforts.



              Being fast has another advantage: when programmers find out that calling strlen is the fastest way they have to measure the number of bytes in a string, they are no longer tempted to write their own code to make things faster.



              So for strlen, speed is much more important, and maintainability much less important, than for most code that you will ever write.



              Why must it be so complicated? Say you have a 1,000 byte string. The simple implementation will examine 1,000 bytes. A current implementation would likely examine 64-bit words at a time, which means 125 eight-byte words. It might even use vector instructions examining, say, 32 bytes at a time, which would be even more complicated and even faster. Using vector instructions leads to code that is a bit more complicated but quite straightforward; checking whether any of the eight bytes in a 64-bit word is zero requires some clever tricks. So for medium to long strings this code can be expected to be about four times faster. For a function as important as strlen, that's worth writing a more complex function.



              PS. The code is not very portable. But it's part of the Standard C library, which is part of the implementation - it need not be portable.



              PPS. Someone posted an example where a debugging tool complained about accessing bytes past the end of a string. An implementation can be designed to guarantee the following: if p is a valid pointer to a byte, then any access to a byte in the same aligned block, even one that would be undefined behaviour according to the C standard, will return an unspecified value.



              PPPS. Intel has added instructions to their later processors that form a building block for the strstr() function (finding a substring in a string). Their description is mind boggling, but they can make that particular function probably 100 times faster. (Basically, given an array a containing "Hello, world!" and an array b starting with 16 bytes "HelloHelloHelloH" and containing more bytes, it figures out that the string a doesn't occur in b earlier than starting at index 15).






              – gnasher729
              answered yesterday, edited yesterday

              • Or... If I'm finding that I'm doing a lot of string based processing and there is a bottleneck, I'm probably going to implement my own version of Pascal Strings instead of improving strlen...

                – Baldrickk
                15 hours ago











              • Nobody asks you to improve strlen. But making it good enough avoids nonsense like people implementing their own strings.

                – gnasher729
                8 hours ago
              23















              In short, this is a performance optimization the standard library can make by knowing which compiler it is compiled with - you shouldn't write code like this unless you are writing a standard library and can depend on a specific compiler. Specifically, it processes one aligned word's worth of bytes at a time - 4 on 32-bit platforms, 8 on 64-bit platforms. This means it can be 4 or 8 times faster than naïve byte iteration.

              To explain how this works, consider the following image. Assume a 32-bit platform here (4-byte alignment).

              Let's say that a pointer to the letter "H" of the "Hello, world!" string was provided as the argument to strlen. Because the CPU likes having things aligned in memory (ideally, address % sizeof(size_t) == 0), the bytes before the alignment boundary are processed byte-by-byte, using the slow method.

              Then, for each alignment-sized chunk, it checks whether any of the bytes within the word is zero by calculating (longbits - 0x01010101) & 0x80808080 != 0. This calculation gives a false positive when at least one of the bytes is higher than 0x80, but more often than not it should work. If no byte is flagged (as in the yellow area of the image), the length is increased by the alignment size.

              If any of the bytes within the word turns out to be zero (or triggers the above-0x80 false positive), then the string is checked byte-by-byte to determine the position of the zero.

              This can make an out-of-bounds access; however, because it stays within an aligned word, it is more likely than not to be fine - memory mapping units usually don't have byte-level precision.
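
              To make the false-positive behaviour concrete, here is a small, hedged demo of that test (illustrative only, not glibc code):

              #include <stdint.h>
              #include <stdio.h>

              /* The approximate test from above: flags any word containing a zero byte,
                 but may also flag words with bytes above 0x80 (false positive), which is
                 why the string is then re-checked byte-by-byte. */
              static int maybe_has_zero_byte(uint32_t w)
              {
                  return ((w - 0x01010101u) & 0x80808080u) != 0;
              }

              int main(void)
              {
                  printf("%d\n", maybe_has_zero_byte(0x48656c6cu)); /* "Hell": no zero byte -> 0 */
                  printf("%d\n", maybe_has_zero_byte(0x6f000a21u)); /* contains 0x00       -> 1 */
                  printf("%d\n", maybe_has_zero_byte(0x48e56c6cu)); /* 0xe5 > 0x80         -> 1 (false positive) */
                  return 0;
              }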






              – Konrad Borowski
              answered 19 hours ago, edited 19 hours ago

              • 17





                Upvoted for excellent graphical representation alone :D

                – Antti Haapala
                18 hours ago











              • This implementation is part of glibc. The GNU system does memory protection with page granularity. So yes, an aligned load that includes any valid bytes is safe.

                – Peter Cordes
                15 hours ago











              • size_t is not guaranteed to be aligned.

                – JL2210
                5 hours ago
              20















               Briefly: checking a string byte by byte will potentially be slow on architectures that can fetch larger amounts of data at a time.

               If the check for null termination can be done on a 32- or 64-bit basis, it reduces the number of checks that have to be performed. That's what the linked code attempts to do, with a specific system in mind. It makes assumptions about addressing, alignment, cache use, non-standard compiler setups, etc.

               Reading byte by byte as in your example would be a sensible approach on an 8-bit CPU, or when writing a portable library in standard C.

               Looking at C standard library implementations for advice on how to write fast/good code isn't a good idea, because such code tends to be non-portable and to rely on non-standard assumptions or poorly-defined behavior. If you are a beginner, reading such code is likely to be more harmful than educational.






               – Lundin
               answered yesterday

              • 1





                Of course the optimizer is highly likely to unroll or auto-vectorize this loop, and the pre-fetcher can trivially detect this access pattern. Whether these tricks actually matter on modern processors would need to be tested. If there is a win to be had it is probably using vector instructions.

                – russbishop
                yesterday






              • 3





                @russbishop: You'd hope so, but no. GCC and clang are completely incapable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. That includes search loops, or any other loop with a data-dependent if()break. ICC can auto-vectorize such loops, but IDK how well it does with a naive strlen. And yes, SSE2 pcmpeqb / pmovmskb is very good for strlen, testing 16 bytes at a time. code.woboq.org/userspace/glibc/sysdeps/x86_64/strlen.S.html is glibc's SSE2 version. See also this Q&A.

                – Peter Cordes
                yesterday
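
                For a feel of the pcmpeqb/pmovmskb idea, here is a rough intrinsics sketch (a simplification under the assumption, discussed elsewhere on this page, that aligned 16-byte loads never cross into an unmapped page; it is not glibc's actual asm):

                #include <emmintrin.h>   /* SSE2 intrinsics */
                #include <stddef.h>
                #include <stdint.h>

                size_t strlen_sse2_sketch(const char *s)
                {
                    const char *p = s;

                    /* Byte-by-byte until p is 16-byte aligned, so vector loads stay
                       within pages that contain valid string bytes. */
                    while ((uintptr_t)p % 16 != 0) {
                        if (*p == '\0')
                            return (size_t)(p - s);
                        p++;
                    }

                    for (;;) {
                        __m128i v    = _mm_load_si128((const __m128i *)p);      /* 16 bytes at once */
                        __m128i eq0  = _mm_cmpeq_epi8(v, _mm_setzero_si128());  /* 0xFF where byte == 0 */
                        int     mask = _mm_movemask_epi8(eq0);                  /* one bit per byte */
                        if (mask != 0)
                            return (size_t)(p - s) + (size_t)__builtin_ctz(mask);
                        p += 16;
                    }
                }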
















              20















              Briefly: checking a string byte by byte will potentially be slow on architectures that can fetch larger amounts of data at a time.



              If the check for null termination could be done on 32 or 64 bit basis, it reduces the amount of checks the compiler has to perform. That's what the linked code attempts to do, with a specific system in mind. They make assumptions about addressing, alignment, cache use, non-standard compiler setups etc etc.



              Reading byte by byte as in your example would be a sensible approach on a 8 bit CPU, or when writing a portable lib written in standard C.



              Looking at C standard libs for advise how to write fast/good code isn't a good idea, because it will be non-portable and rely on non-standard assumptions or poorly-defined behavior. If you are a beginner, reading such code will likely be more harmful than educational.






              share|improve this answer




















              • 1





                Of course the optimizer is highly likely to unroll or auto-vectorize this loop, and the pre-fetcher can trivially detect this access pattern. Whether these tricks actually matter on modern processors would need to be tested. If there is a win to be had it is probably using vector instructions.

                – russbishop
                yesterday






              • 3





                @russbishop: You'd hope so, but no. GCC and clang are completely incapable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. That includes search loops, or any other loop with a data-dependent if()break. ICC can auto-vectorize such loops, but IDK how well it does with a naive strlen. And yes, SSE2 pcmpeqb / pmovmskb is very good for strlen, testing 16 bytes at a time. code.woboq.org/userspace/glibc/sysdeps/x86_64/strlen.S.html is glibc's SSE2 version. See also this Q&A.

                – Peter Cordes
                yesterday














              20














              20










              20









              Briefly: checking a string byte by byte will potentially be slow on architectures that can fetch larger amounts of data at a time.



              If the check for null termination could be done on 32 or 64 bit basis, it reduces the amount of checks the compiler has to perform. That's what the linked code attempts to do, with a specific system in mind. They make assumptions about addressing, alignment, cache use, non-standard compiler setups etc etc.



              Reading byte by byte as in your example would be a sensible approach on a 8 bit CPU, or when writing a portable lib written in standard C.



              Looking at C standard libs for advise how to write fast/good code isn't a good idea, because it will be non-portable and rely on non-standard assumptions or poorly-defined behavior. If you are a beginner, reading such code will likely be more harmful than educational.






              share|improve this answer













              Briefly: checking a string byte by byte will potentially be slow on architectures that can fetch larger amounts of data at a time.



              If the check for null termination could be done on 32 or 64 bit basis, it reduces the amount of checks the compiler has to perform. That's what the linked code attempts to do, with a specific system in mind. They make assumptions about addressing, alignment, cache use, non-standard compiler setups etc etc.



              Reading byte by byte as in your example would be a sensible approach on a 8 bit CPU, or when writing a portable lib written in standard C.



              Looking at C standard libs for advise how to write fast/good code isn't a good idea, because it will be non-portable and rely on non-standard assumptions or poorly-defined behavior. If you are a beginner, reading such code will likely be more harmful than educational.







              share|improve this answer












              share|improve this answer



              share|improve this answer










              answered yesterday









              LundinLundin

              119k17 gold badges171 silver badges282 bronze badges




              119k17 gold badges171 silver badges282 bronze badges










              • 1





                Of course the optimizer is highly likely to unroll or auto-vectorize this loop, and the pre-fetcher can trivially detect this access pattern. Whether these tricks actually matter on modern processors would need to be tested. If there is a win to be had it is probably using vector instructions.

                – russbishop
                yesterday






              • 3





                @russbishop: You'd hope so, but no. GCC and clang are completely incapable of auto-vectorizing loops where the iteration count isn't known ahead of the first iteration. That includes search loops, or any other loop with a data-dependent if()break. ICC can auto-vectorize such loops, but IDK how well it does with a naive strlen. And yes, SSE2 pcmpeqb / pmovmskb is very good for strlen, testing 16 bytes at a time. code.woboq.org/userspace/glibc/sysdeps/x86_64/strlen.S.html is glibc's SSE2 version. See also this Q&A.

                – Peter Cordes
                yesterday
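                A rough intrinsics version of the SSE2 approach described in this comment might look like the sketch below (an illustration, not glibc's hand-tuned assembly). It relies on the x86 property that an aligned 16-byte load never crosses a page boundary, so reading slightly past the terminator is safe; __builtin_ctz is a GCC/Clang extension, and the function name is made up:

                    #include <emmintrin.h>   /* SSE2 intrinsics */
                    #include <stddef.h>
                    #include <stdint.h>

                    /* Illustrative SSE2 strlen: test 16 bytes per iteration with
                       pcmpeqb + pmovmskb, as the comment outlines. */
                    size_t strlen_sse2(const char *s)
                    {
                        const char *p = s;

                        /* Scalar bytes until p reaches a 16-byte boundary. */
                        while (((uintptr_t)p & 15) != 0) {
                            if (*p == '\0')
                                return (size_t)(p - s);
                            p++;
                        }

                        const __m128i zero = _mm_setzero_si128();
                        for (;;) {
                            __m128i chunk = _mm_load_si128((const __m128i *)p); /* aligned 16-byte load */
                            __m128i eq    = _mm_cmpeq_epi8(chunk, zero);        /* 0xFF where a byte is 0 */
                            int     mask  = _mm_movemask_epi8(eq);              /* one bit per byte */
                            if (mask != 0)
                                return (size_t)(p - s) + (size_t)__builtin_ctz(mask);
                            p += 16;
                        }
                    }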

























              0















              One important thing not mentioned by the other answers is that the FSF is very cautious about keeping proprietary code out of GNU projects. The GNU Coding Standards, under Referring to Proprietary Programs, warn contributors to organise their implementation so that it cannot be confused with existing proprietary code:




              Don’t in any circumstances refer to Unix source code for or during your work on GNU! (Or to any other proprietary programs.)



              If you have a vague recollection of the internals of a Unix program, this does not absolutely mean you can’t write an imitation of it, but do try to organize the imitation internally along different lines, because this is likely to make the details of the Unix version irrelevant and dissimilar to your results.



              For example, Unix utilities were generally optimized to minimize memory use; if you go for speed instead, your program will be very different.




              (Emphasis mine.)
















              answered 5 hours ago









              Jack Kelly

              15.4k reputation · 1 gold badge · 49 silver badges · 76 bronze badges



















              • How does this answer the question?

                – JL2210
                5 hours ago




























              0















              Yes, strlen is worth optimizing. It is called a great deal, and if it is slow (as your implementation is), a string-heavy program can lose a noticeable share of its run time to it.



              The optimized C code (the bitwise tricks with magic constants) is only the portable fallback implementation; on common targets it is replaced with machine-specific assembly from the sysdeps directory, which is faster still.



              Your (naive) code examines every single character of the string, with a load and a branch per byte, which gets slow for long strings. The optimized code instead checks one long at a time, dividing the number of loads and checks by sizeof(long) on your platform (usually 8 or 4), which is why it is much faster than the naive implementation.
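              Claims like these are easy to check directly. A throwaway harness along the following lines compares the naive loop against the library strlen (the numbers will of course depend on the machine, the string length, and the compiler flags); touching the buffer each iteration keeps the compiler from hoisting the length computation out of the timing loops:

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <string.h>
                  #include <time.h>

                  /* Naive byte-by-byte strlen, like the one in the question. */
                  static size_t strlen_naive(const char *s)
                  {
                      size_t n = 0;
                      while (s[n] != '\0')
                          n++;
                      return n;
                  }

                  int main(void)
                  {
                      const size_t len = 1u << 20;            /* 1 MiB string of 'x' */
                      char *buf = malloc(len + 1);
                      if (!buf)
                          return 1;
                      memset(buf, 'x', len);
                      buf[len] = '\0';

                      volatile size_t sink = 0;               /* consume the results */
                      enum { REPS = 1000 };

                      clock_t t0 = clock();
                      for (int i = 0; i < REPS; i++) {
                          buf[0] = (char)('a' + (i & 1));     /* defeat loop-invariant hoisting */
                          sink += strlen_naive(buf);
                      }
                      clock_t t1 = clock();
                      for (int i = 0; i < REPS; i++) {
                          buf[0] = (char)('a' + (i & 1));
                          sink += strlen(buf);
                      }
                      clock_t t2 = clock();

                      printf("naive : %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
                      printf("strlen: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
                      (void)sink;
                      free(buf);
                      return 0;
                  }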














                  edited 5 hours ago

























                  answered 5 hours ago









                  JL2210

                  3,275 reputation · 3 gold badges · 12 silver badges · 38 bronze badges


























