How do I convert between big-endian and little-endian values in C++?
If you're using Visual C++, include <intrin.h> and call the following functions:
For 16-bit numbers:
unsigned short _byteswap_ushort(unsigned short value);
For 32-bit numbers:
unsigned long _byteswap_ulong(unsigned long value);
For 64-bit numbers:
unsigned __int64 _byteswap_uint64(unsigned __int64 value);
8-bit numbers (chars) don't need to be converted.
Also, these are only defined for unsigned values, but they work for signed integers as well.
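As a hedged sketch, here's how the 32-bit intrinsic might be wrapped when reading big-endian (network-order) data on a little-endian Windows machine; the helper names are made up for illustration:

#include <intrin.h>
#include <cstdint>

// Hypothetical helper: convert a 32-bit value stored in big-endian
// (network) order to the host order of a little-endian Windows machine.
uint32_t be32_to_host(uint32_t be_value)
{
    return _byteswap_ulong(be_value);   // reverses the four bytes
}

// Signed values round-trip through the same intrinsic via casts.
int32_t be32_to_host_signed(int32_t be_value)
{
    return static_cast<int32_t>(_byteswap_ulong(static_cast<uint32_t>(be_value)));
}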
For floats and doubles it's more difficult than with plain integers, as these may or may not be in the host machine's byte order. You can get little-endian floats on big-endian machines and vice versa.
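If you do need to reorder the bytes of a float, one common approach (a sketch, assuming the producer actually wrote an IEEE-754 float in the foreign byte order; the function name is made up) is to move the bytes through a same-sized unsigned integer with memcpy:

#include <cstdint>
#include <cstring>

// Sketch: swap the bytes of a float by punning through a uint32_t with
// memcpy (well-defined, unlike casting between float* and int*).
// Whether the result is a meaningful float depends on the source data
// really being IEEE 754 in the other byte order.
float swap_float_bytes(float f)
{
    static_assert(sizeof(float) == sizeof(uint32_t), "unexpected float size");
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    bits = (bits >> 24) | ((bits >> 8) & 0x0000FF00u)
         | ((bits << 8) & 0x00FF0000u) | (bits << 24);
    std::memcpy(&f, &bits, sizeof(f));
    return f;
}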
Other compilers have similar intrinsics. In GCC, for example, you can directly call the following builtins:
uint32_t __builtin_bswap32 (uint32_t x)
uint64_t __builtin_bswap64 (uint64_t x)
(no header needs to be included). AFAIK glibc's <byteswap.h> declares the same functions in a non-GCC-centric way as well.
A 16-bit swap is just an 8-bit rotate.
Calling the intrinsics instead of rolling your own gives you the best performance and code density, by the way.
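For illustration, a minimal sketch of how these builtins are typically wrapped (the wrapper names are made up; recent GCC and Clang versions also provide __builtin_bswap16):

#include <cstdint>

// Sketch for GCC/Clang: the builtins usually compile down to a single
// bswap/rev instruction where the target supports it.
uint32_t swap32(uint32_t x) { return __builtin_bswap32(x); }
uint64_t swap64(uint64_t x) { return __builtin_bswap64(x); }

// The 16-bit case written as the 8-bit rotate mentioned above.
uint16_t swap16(uint16_t x)
{
    return static_cast<uint16_t>((x << 8) | (x >> 8));
}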
Simply put:
#include <climits>
#include <cstddef>

template <typename T>
T swap_endian(T u)
{
    static_assert(CHAR_BIT == 8, "CHAR_BIT != 8");

    // Type-pun through a union of T and its raw bytes, then copy the
    // bytes across in reverse order. (Union punning is formally outside
    // the C++ standard but supported by the major compilers.)
    union
    {
        T u;
        unsigned char u8[sizeof(T)];
    } source, dest;

    source.u = u;

    for (std::size_t k = 0; k < sizeof(T); k++)
        dest.u8[k] = source.u8[sizeof(T) - k - 1];

    return dest.u;
}
usage: swap_endian<uint32_t>(42)
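A slightly fuller usage sketch, assuming the swap_endian template above is in scope (not part of the original answer):

#include <cstdint>
#include <cinttypes>
#include <cstdio>

int main()
{
    const uint32_t host    = 0x12345678u;
    const uint32_t swapped = swap_endian<uint32_t>(host);
    // Byte reversal yields 0x78563412 regardless of the host's endianness.
    std::printf("%08" PRIx32 " %08" PRIx32 "\n", host, swapped);
    return 0;
}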
From The Byte Order Fallacy by Rob Pike:
Let's say your data stream has a little-endian-encoded 32-bit integer. Here's how to extract it (assuming unsigned bytes):
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | ((unsigned)data[3]<<24);
If it's big-endian, here's how to extract it:
i = (data[3]<<0) | (data[2]<<8) | (data[1]<<16) | ((unsigned)data[0]<<24);
TL;DR: don't worry about your platform's native order; all that counts is the byte order of the stream you are reading from, and you'd better hope it's well defined.
Note 1: int and unsigned int are expected to be 32 bits here; the types may require adjustment otherwise.
Note 2: The last byte must be explicitly cast to unsigned before shifting, because by default it is promoted to int, and a shift by 24 bits would push a bit into the sign bit, which is undefined behavior.
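Putting the snippet and the two notes together, a sketch of a complete little-endian reader (the function name is illustrative):

#include <cstdint>

// Sketch: decode a 32-bit little-endian integer from a byte buffer.
// Casting each byte to uint32_t avoids shifting into the sign bit of a
// promoted int (Note 2) and makes the 32-bit width explicit (Note 1).
uint32_t load_le32(const unsigned char* data)
{
    return  static_cast<uint32_t>(data[0])
         | (static_cast<uint32_t>(data[1]) << 8)
         | (static_cast<uint32_t>(data[2]) << 16)
         | (static_cast<uint32_t>(data[3]) << 24);
}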