How do you pack one 32-bit int into 4 8-bit ints in GLSL / WebGL?
In general, if you want to pack the significant digits of a floating-point number into bytes, you have to consecutively extract 8-bit packages of the significant digits and store each package in a byte.
Encode a floating point number in a predefined range
In order to pack a floating-point value into 4 * 8-bit channels, the range of the source values must first be specified. If you have defined a value range [minVal, maxVal], it has to be mapped onto the range [0.0, 1.0]:
float mapVal = clamp((value-minVal)/(maxVal-minVal), 0.0, 1.0);
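To recover the original value after unpacking, this mapping has to be inverted; that is what the mix call in DecodeRange further down does:
float value = minVal + mapVal * (maxVal - minVal);   // equivalent to mix( minVal, maxVal, mapVal )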
The function Encode packs a floating-point value in the range [0.0, 1.0] into a vec4:
vec4 Encode( in float value )
{
    // scale into [0.0, 1.0) so that fract() below does not wrap at exactly 1.0
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    // split the value into its base-256 "digits", one per channel
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}
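In a fragment shader that renders to a standard RGBA8 target, the encoded value can then simply be written to the color output. A minimal usage sketch (using gl_FragCoord.z as the source value is just an example):
void main()
{
    // Encode() from above has to be defined earlier in the shader
    gl_FragColor = Encode( gl_FragCoord.z );   // gl_FragCoord.z is in [0.0, 1.0] with the default depth range
}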
The function Decode extracts a floating-point value in the range [0.0, 1.0] from a vec4:
float Decode( in vec4 pack )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return value * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}
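In a later pass, the packed value can be read back from the texture that the first pass rendered to. Another sketch, where the names u_packedTex and v_uv are assumptions:
precision highp float;

uniform sampler2D u_packedTex;   // RGBA8 texture written by the encoding pass
varying vec2      v_uv;          // texture coordinate from the vertex shader

// Decode() from above has to be defined here as well
void main()
{
    float value = Decode( texture2D( u_packedTex, v_uv ) );
    gl_FragColor = vec4( vec3( value ), 1.0 );   // e.g. visualize the decoded value
}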
The following functions pack and extract a floating-point value to and from the range [minVal, maxVal]:
vec4 EncodeRange( in float value, in float minVal, in float maxVal )
{
    value = clamp( (value-minVal) / (maxVal-minVal), 0.0, 1.0 );   // map [minVal, maxVal] to [0.0, 1.0]
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0, encode.w ) + 1.0/512.0;
}
float DecodeRange( in vec4 pack, in float minVal, in float maxVal )
{
    float value = dot( pack, 1.0 / vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    value *= (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
    return mix( minVal, maxVal, value );   // map [0.0, 1.0] back to [minVal, maxVal]
}
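For example, a linear depth value that is known to lie in a fixed range can be packed in one pass and unpacked in a later one. The range [0.0, 100.0] and the names linearDepth, u_depthTex and v_uv below are only assumptions:
// packing pass
gl_FragColor = EncodeRange( linearDepth, 0.0, 100.0 );

// unpacking pass, reading from the texture the packing pass rendered to
float depth = DecodeRange( texture2D( u_depthTex, v_uv ), 0.0, 100.0 );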
Encode a floating point number with an exponent
Another possibility is to encode the significant digits into 3 * 8 bits of the RGB channels and the exponent into the 8 bits of the alpha channel:
vec4 EncodeExp( in float value )
{
    // note: value must not be 0.0, since log2( 0.0 ) is undefined
    int exponent = int( log2( abs( value ) ) + 1.0 );
    value /= exp2( float( exponent ) );
    value = (value + 1.0) * (256.0*256.0*256.0 - 1.0) / (2.0*256.0*256.0*256.0);
    vec4 encode = fract( value * vec4(1.0, 256.0, 256.0*256.0, 256.0*256.0*256.0) );
    return vec4( encode.xyz - encode.yzw / 256.0 + 1.0/512.0, (float(exponent) + 127.5) / 256.0 );
}
float DecodeExp( in vec4 pack )
{
    int exponent = int( pack.w * 256.0 - 127.0 );
    float value = dot( pack.xyz, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    value = value * (2.0*256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0) - 1.0;
    return value * exp2( float(exponent) );
}
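To illustrate the round trip: for an input of 3.5, EncodeExp computes exponent = int( log2(3.5) + 1.0 ) = 2 and a normalized significand of 3.5 / exp2(2.0) = 0.875, which is remapped and spread over the RGB channels, while the alpha channel stores (2.0 + 127.5) / 256.0 ≈ 0.506. DecodeExp recovers the exponent 2 from the alpha channel and multiplies the decoded significand by exp2(2.0) to get back 3.5.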
Note, since a standard 32-bit IEEE 754 float has only 24 significant bits (the 23-bit mantissa plus the implicit leading bit), it is completely sufficient to encode the significant digits in 3 bytes.
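Building on that, the fixed-range Encode/Decode pair above can also be reduced to three channels, which leaves the alpha channel free for other data. A possible sketch of such a variant (an assumption mirroring the structure of the functions above, not part of the original answer):
vec3 EncodeRGB( in float value )   // value in [0.0, 1.0]
{
    value *= (256.0*256.0*256.0 - 1.0) / (256.0*256.0*256.0);
    vec3 encode = fract( value * vec3(1.0, 256.0, 256.0*256.0) );
    return vec3( encode.xy - encode.yz / 256.0, encode.z ) + 1.0/512.0;
}

float DecodeRGB( in vec3 pack )
{
    float value = dot( pack, 1.0 / vec3(1.0, 256.0, 256.0*256.0) );
    return value * (256.0*256.0*256.0) / (256.0*256.0*256.0 - 1.0);
}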
See also How do I convert between float and vec4,vec3,vec2?
You can emulate bit shifts by multiplying and dividing by powers of two (WebGL 1's GLSL ES 1.0 has no bitwise operators).
As pointed out in the comments, the approach I originally posted was working but incorrect. Here is one by Aras Pranckevičius; note that the source code in his post is HLSL and contains a typo, so this is a GLSL port with the typo corrected:
const vec4 bitEnc = vec4(1.,255.,65025.,16581375.);
const vec4 bitDec = 1./bitEnc;
vec4 EncodeFloatRGBA (float v) {
    vec4 enc = bitEnc * v;
    enc = fract(enc);
    enc -= enc.yzww * vec2(1./255., 0.).xxxy;
    return enc;
}
float DecodeFloatRGBA (vec4 v) {
    return dot(v, bitDec);
}
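As for the remark above about emulating bit shifts by multiplying and dividing by powers of two, here is a minimal sketch of extracting individual bytes that way. It is only an illustration: a highp float holds integers exactly only up to 2^24, which is why the answers above pack a normalized float rather than a full 32-bit int.
vec3 IntToBytes( in float n )   // n: non-negative integer value, n < 2^24
{
    // a right shift by 8*k bits becomes floor( n / 256^k ),
    // masking out the low 8 bits becomes mod( ..., 256.0 )
    float b0 = mod( n, 256.0 );                      // bits  0.. 7
    float b1 = mod( floor( n / 256.0 ), 256.0 );     // bits  8..15
    float b2 = mod( floor( n / 65536.0 ), 256.0 );   // bits 16..23
    return vec3( b0, b1, b2 ) / 255.0;               // normalized for 8-bit color channels
}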