WebGL Read pixels from floating point render target
readPixels is only guaranteed to support the RGBA format with the UNSIGNED_BYTE type (per the WebGL specification). However, there are some methods for "packing" floats into RGBA/UNSIGNED_BYTE values, described here:
http://concord-consortium.github.io/lab/experiments/webgl-gpgpu/webgl.html
Things have changed since WebGL shipped. WebGL requires that you can always call readPixels with format = RGBA and type = UNSIGNED_BYTE. Beyond that, each implementation is allowed to support one additional, implementation-defined format/type combination per framebuffer attachment type.
You can query what that format/type combination is like this:
gl.bindFramebuffer(gl.FRAMEBUFFER, someFramebuffer);
const format = gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_FORMAT);
const type = gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_TYPE);
Unfortunately, it's implementation defined. For example, checking my personal devices: at least one of them, my Nvidia MacBook Pro, reports RGBA/UNSIGNED_BYTE in Chrome, while the other browsers/devices report RGBA/FLOAT.
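Putting the query together with the read, a sketch might look like this (the helper name `readAttachmentPixels` is mine, not from any API; it assumes the framebuffer's color attachment is complete):

```javascript
// Read back the currently relevant framebuffer attachment, preferring the
// implementation-defined RGBA/FLOAT path when the driver offers it and
// falling back to the always-supported RGBA/UNSIGNED_BYTE path otherwise.
function readAttachmentPixels(gl, framebuffer, width, height) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
  const format = gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_FORMAT);
  const type = gl.getParameter(gl.IMPLEMENTATION_COLOR_READ_TYPE);
  if (format === gl.RGBA && type === gl.FLOAT) {
    const pixels = new Float32Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels);
    return pixels;
  }
  // RGBA/UNSIGNED_BYTE is the combination every implementation must support.
  const pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return pixels;
}
```

The caller can then branch on whether it got a Float32Array back or needs to decode bytes.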
WebGL2 requires being able to read a floating point texture as RGBA/FLOAT if the EXT_color_buffer_float extension is enabled.
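In WebGL2 that path might be sketched like this (a hypothetical helper; it assumes a framebuffer with a floating point color attachment, e.g. RGBA32F, is already bound):

```javascript
// WebGL2: read a floating point attachment directly as RGBA/FLOAT.
// EXT_color_buffer_float must be available for float render targets at all.
function readFloatPixels(gl, width, height) {
  if (!gl.getExtension('EXT_color_buffer_float')) {
    throw new Error('float render targets not supported on this device');
  }
  const pixels = new Float32Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.FLOAT, pixels);
  return pixels;
}
```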
A workaround in WebGL1 is to encode your FLOAT results into an RGBA/UNSIGNED_BYTE target. See this. You could either change your shader to do the encoding directly, or add another pass that reads your floating point result texture and writes to an RGBA/UNSIGNED_BYTE texture, perhaps 4x as wide so each float's four bytes fit into the RGBA channels.
Unfortunately it still seems that reading out RGBA components as bytes is the only way for WebGL. If you need to encode a float into a pixel value you can use the following:
In your fractal shader (GLSL):
// GLSL ES 1.0 has no bitwise operators, so bit manipulation is emulated with floats
float shift_right (float v, float amt) {
v = floor(v) + 0.5;
return floor(v / exp2(amt));
}
float shift_left (float v, float amt) {
return floor(v * exp2(amt) + 0.5);
}
float mask_last (float v, float bits) {
return mod(v, shift_left(1.0, bits));
}
float extract_bits (float num, float from, float to) {
from = floor(from + 0.5); to = floor(to + 0.5);
return mask_last(shift_right(num, from), to - from);
}
// Packs a float into 4 bytes following the IEEE 754 single precision layout,
// least significant byte in .r (little-endian byte order)
vec4 encode_float (float val) {
if (val == 0.0) return vec4(0, 0, 0, 0);
float sign = val > 0.0 ? 0.0 : 1.0;
val = abs(val);
float exponent = floor(log2(val));
float biased_exponent = exponent + 127.0;
float fraction = ((val / exp2(exponent)) - 1.0) * 8388608.0;
float t = biased_exponent / 2.0;
float last_bit_of_biased_exponent = fract(t) * 2.0;
float remaining_bits_of_biased_exponent = floor(t);
float byte4 = extract_bits(fraction, 0.0, 8.0) / 255.0;
float byte3 = extract_bits(fraction, 8.0, 16.0) / 255.0;
float byte2 = (last_bit_of_biased_exponent * 128.0 + extract_bits(fraction, 16.0, 23.0)) / 255.0;
float byte1 = (sign * 128.0 + remaining_bits_of_biased_exponent) / 255.0;
return vec4(byte4, byte3, byte2, byte1);
}
// Then, inside main(), return your float as the fragment color:
float myFloat = 420.420;
gl_FragColor = encode_float(myFloat);
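To sanity-check the packing logic off the GPU, the same helpers can be mirrored in JavaScript (these are hypothetical helper names for illustration; the mirror returns raw 0-255 bytes rather than the 0.0-1.0 values the shader writes):

```javascript
// JavaScript mirror of the GLSL bit helpers and encode_float above,
// returning the 4 bytes [byte4, byte3, byte2, byte1] as 0-255 integers.
function shiftRight(v, amt) {
  return Math.floor((Math.floor(v) + 0.5) / Math.pow(2, amt));
}
function shiftLeft(v, amt) {
  return Math.floor(v * Math.pow(2, amt) + 0.5);
}
function maskLast(v, bits) {
  return v % shiftLeft(1, bits);
}
function extractBits(num, from, to) {
  return maskLast(shiftRight(num, from), to - from);
}
function encodeFloat(val) {
  if (val === 0) return [0, 0, 0, 0];
  const sign = val > 0 ? 0 : 1;
  val = Math.abs(val);
  const exponent = Math.floor(Math.log2(val));
  const biasedExponent = exponent + 127;
  const fraction = (val / Math.pow(2, exponent) - 1) * 8388608; // 2^23
  const lastBitOfBiasedExponent = biasedExponent % 2;
  const remainingBitsOfBiasedExponent = Math.floor(biasedExponent / 2);
  return [
    extractBits(fraction, 0, 8),                                       // byte4
    extractBits(fraction, 8, 16),                                      // byte3
    lastBitOfBiasedExponent * 128 + extractBits(fraction, 16, 23),     // byte2
    sign * 128 + remainingBitsOfBiasedExponent,                        // byte1
  ];
}
```

For example, `encodeFloat(1.0)` yields `[0, 0, 128, 63]`, which matches the little-endian byte layout of 1.0f (0x3F800000).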
Then back on the JavaScript side, after your draw call has been made you can extract the encoded float value of each pixel with the following:
var pixels = new Uint8Array(CANVAS.width * CANVAS.height * 4);
gl.readPixels(0, 0, CANVAS.width, CANVAS.height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
pixels = new Float32Array(pixels.buffer);
// pixels now contains an array of floats, 1 float for each pixel
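Note that the Float32Array reinterpretation above relies on the host being little-endian (which virtually all WebGL-capable devices are, but it is an assumption). A DataView lets you state the byte order explicitly; this hypothetical helper is equivalent on little-endian hosts:

```javascript
// Decode packed bytes into floats with an explicit little-endian read,
// matching the byte order encode_float writes (.r = least significant byte).
function decodeFloats(bytes) {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  const out = new Float32Array(bytes.length / 4);
  for (let i = 0; i < out.length; i++) {
    out[i] = view.getFloat32(i * 4, true); // true = little-endian
  }
  return out;
}
```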
I'm adding my recent discoveries: Chrome will let you read floats as part of its implementation-defined format (search for "readPixels" in the spec); Firefox implements the WEBGL_color_buffer_float extension, so you can just enable the extension and read your floats; I have not been able to read floats with Safari.