JavaScript C Style Type Cast From Signed To Unsigned
convert signed byte to unsigned byte, JavaScript:
-5 & 0xff // = 251, signed to unsigned byte
251 << 24 >> 24 // = -5, unsigned byte to signed
the first masks the value with 0xff, setting every bit to 0 except the lowest byte
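a minimal sketch of both directions wrapped in functions (the names toUnsignedByte and toSignedByte are my own, not standard):
function toUnsignedByte(n) {
  return n & 0xff;        // keep only the low 8 bits
}
function toSignedByte(n) {
  return (n << 24) >> 24; // sign-extend bit 7 across the 32-bit value
}
console.log(toUnsignedByte(-5)); // 251
console.log(toSignedByte(251));  // -5
console.log(toSignedByte(127));  // 127, positive bytes pass through unchanged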
the second is from
https://blog.vjeux.com/2013/javascript/conversion-from-uint8-to-int8-x-24.html
basically, a 32-bit integer has 4 bytes. in a small positive number, the upper 3 bytes are all 0 bits; in a negative number (two's complement), the upper 3 bytes are all 1 bits. the most significant bit of the highest byte is the sign bit.
shifting left by 24 moves bit 7 of the lowest byte into bit 31, the sign position. shifting back to the right by 24 with >> (an arithmetic shift) drags that sign bit across: if it is 1, the upper 3 bytes get filled with 1 bits. the sign extension is a side effect of the shift, but it is exactly what we need.
for example, you start from a byte
?1111111 (the first bit, ?, is the sign bit)
but stored in a larger 32-bit variable, so it is:
00000000_00000000_00000000_?1111111
shift left by 24
?1111111_00000000_00000000_00000000
shift right by 24 (arithmetic)
????????_????????_????????_?1111111
this is the effect:
the arithmetic shift drags the sign bit across the upper bytes
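to see the dragging in actual numbers, a small sketch (>>> 0 is used here only so toString(2) prints the raw 32-bit pattern):
var x = 0xfb;          // 251, the byte 11111011, top bit set
var left = x << 24;    // -83886080
var back = left >> 24; // -5
console.log((left >>> 0).toString(2)); // 11111011000000000000000000000000
console.log((back >>> 0).toString(2)); // 11111111111111111111111111111011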
you can also use
(new Uint32Array([arg1]))[0]
e.g.
< (new Uint32Array([-1]))[0]
> 4294967295
Explanation: JavaScript does not follow C-style machine-level integer casting conventions, preferring type simplicity and portability over low-level efficiency. However, typed arrays (Uint8Array et al.) were added to JavaScript specifically for efficient, well-defined multi-byte and bit-level operations, so we can exploit them to get built-in bit-casting behavior. The expression in the example above:
- Creates a plain array containing the input number
- Constructs a typed array (Uint32Array) from that array. This is where the cast occurs.
- Extracts the first (0-th) element of that typed array, which contains the cast result.
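The same trick works at other widths. A sketch using Uint8Array/Int8Array for single bytes (the helper names toUint8 and toInt8 are my own), plus two typed-array views sharing one buffer for a true C-style reinterpretation of the same bits:
var toUint8 = function (n) { return new Uint8Array([n])[0]; }; // toUint8(-5) === 251
var toInt8 = function (n) { return new Int8Array([n])[0]; };   // toInt8(251) === -5
var buf = new ArrayBuffer(4);
var asInt32 = new Int32Array(buf);
var asUint32 = new Uint32Array(buf);
asInt32[0] = -1;
console.log(asUint32[0]); // 4294967295, the same four bytes read as unsigned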
You can try a = arg1 >>> 0, but I'm not sure it will do what you are looking for.
See this question for more details.
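For what it's worth, arg1 >>> 0 does reinterpret the value as an unsigned 32-bit integer:
console.log(-1 >>> 0);          // 4294967295
console.log(-5 >>> 0);          // 4294967291
console.log((-5 >>> 0) & 0xff); // 251, same low byte as before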
All (primitive) numbers in JavaScript are IEEE 754 doubles, giving you 53 bits of integer precision (a 52-bit stored mantissa plus an implicit leading bit).
The problem with signed vs unsigned is that all of the JavaScript bitwise operators apart from >>>
convert the numbers into 32-bit signed integers: they take the least significant 32 bits, throw away the rest, and interpret bit 31 of the result as the sign bit.
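A quick illustration of that truncation and sign behaviour:
console.log((Math.pow(2, 32) + 5) | 0); // 5, the upper bits are thrown away
console.log(Math.pow(2, 31) | 0);       // -2147483648, bit 31 becomes the sign
console.log(Math.pow(2, 31) >>> 0);     // 2147483648, >>> is the one unsigned exception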
If you are starting with the four known byte values, you can get around the problem with the bitwise operators by using simple multiplications and additions instead, which use the full 53 bits of integer precision, e.g.
var a = [ 1, 2, 3, 4]; // 0x01020304
var unsigned = a[0] * (1 << 24) + a[1] * (1 << 16) + a[2] * (1 << 8) + a[3]; // 16909060 = 0x01020304
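For comparison, feeding four 0xff bytes through both routes (using a different variable name, b, so it doesn't clash with a above): the shift version wraps around to a negative value while the multiply-and-add version stays unsigned.
var b = [0xff, 0xff, 0xff, 0xff];
var viaShift = (b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3];            // -1
var viaMath = b[0] * (1 << 24) + b[1] * (1 << 16) + b[2] * (1 << 8) + b[3]; // 4294967295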