HTML5 Canvas Resize (Downscale) Image High Quality?
Suggestion 1 - extend the processing pipeline
You can use the step-down technique I describe in the links you refer to, but you appear to be using it the wrong way.
Stepping down is not needed to scale images at ratios above 1:2 (typically, but not limited to). It is only where you need to do a drastic down-scaling that you need to split it into two (and rarely more) steps, depending on the content of the image (in particular where high frequencies such as thin lines occur).
Every time you down-sample an image you lose details and information. You cannot expect the resulting image to be as clear as the original.
If you then scale down the image in many steps you will lose a lot of information in total, and the result will be poor, as you already noticed.
Try with just one extra step, or at most two.
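As a rough sketch of what that looks like in practice (my own helper, not code from the linked answers; the single halving step is just an example, chosen so each step stays at or above a 1:2 ratio):

function downScaleInSteps(img, targetW, targetH) {
    // step 1: draw at an intermediate size (here: half the source, but never
    // smaller than the target) so the browser's filter works on a gentle ratio
    var tmp = document.createElement('canvas');
    tmp.width = Math.max(targetW, Math.round(img.width / 2));
    tmp.height = Math.max(targetH, Math.round(img.height / 2));
    tmp.getContext('2d').drawImage(img, 0, 0, tmp.width, tmp.height);

    // step 2: draw the intermediate result at the final size
    var out = document.createElement('canvas');
    out.width = targetW;
    out.height = targetH;
    out.getContext('2d').drawImage(tmp, 0, 0, tmp.width, tmp.height, 0, 0, targetW, targetH);
    return out;
}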
Convolutions
In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as a sharpen filter. It's not just bi-cubic interpolation that takes place, so to fully emulate Photoshop we also need to add the steps it performs (with the default setup).
For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it as a post-process to improve quality (see demo at bottom).
Here is the code for adding a sharpen filter (it's based on a generic convolution filter - I put the weight matrix for sharpen inside it, as well as a mix factor to adjust the strength of the effect):
Usage:
sharpen(context, width, height, mixFactor);
The mixFactor is a value between [0.0, 1.0] and allows you to tone down the sharpen effect - rule of thumb: the smaller the size, the less of the effect is needed.
Function (based on this snippet):
function sharpen(ctx, w, h, mix) {
    var weights = [0, -1, 0, -1, 5, -1, 0, -1, 0],
        katet = Math.round(Math.sqrt(weights.length)),
        half = (katet * 0.5) | 0,
        dstData = ctx.createImageData(w, h),
        dstBuff = dstData.data,
        srcBuff = ctx.getImageData(0, 0, w, h).data,
        x, y = h;

    while (y--) {
        x = w;
        while (x--) {
            var sy = y,
                sx = x,
                dstOff = (y * w + x) * 4,
                r = 0, g = 0, b = 0, a = 0;

            for (var cy = 0; cy < katet; cy++) {
                for (var cx = 0; cx < katet; cx++) {
                    var scy = sy + cy - half;
                    var scx = sx + cx - half;
                    if (scy >= 0 && scy < h && scx >= 0 && scx < w) {
                        var srcOff = (scy * w + scx) * 4;
                        var wt = weights[cy * katet + cx];
                        r += srcBuff[srcOff] * wt;
                        g += srcBuff[srcOff + 1] * wt;
                        b += srcBuff[srcOff + 2] * wt;
                        a += srcBuff[srcOff + 3] * wt;
                    }
                }
            }
            // blend the convolved result with the original pixel by mix factor
            dstBuff[dstOff] = r * mix + srcBuff[dstOff] * (1 - mix);
            dstBuff[dstOff + 1] = g * mix + srcBuff[dstOff + 1] * (1 - mix);
            dstBuff[dstOff + 2] = b * mix + srcBuff[dstOff + 2] * (1 - mix);
            dstBuff[dstOff + 3] = srcBuff[dstOff + 3]; // keep original alpha
        }
    }

    ctx.putImageData(dstData, 0, 0);
}
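For example, the filter can be applied as a post-process right after the (stepped) drawImage scaling - a small sketch of my own, where img is assumed to be a loaded, same-origin image (getImageData requires same-origin content) and the mix factor is chosen by eye:

var canvas = document.createElement('canvas');
canvas.width = 320;
canvas.height = 240;
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0, canvas.width, canvas.height); // scale down first
sharpen(ctx, canvas.width, canvas.height, 0.5);        // 0.5 = moderate sharpening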
The result of using this combination will be:
ONLINE DEMO HERE
Depending on how much of the sharpening you blend in, you can get results ranging from the default "blurry" to very sharp:
Suggestion 2 - low level algorithm implementation
If you want the best result quality-wise, you'll need to go low-level and consider implementing, for example, this brand-new algorithm.
See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).
There are no implementations of this algorithm in JavaScript at this time, AFAIK, so you're in for a handful if you want to throw yourself at this task.
The essence is (excerpts from the paper):
Abstract
An interpolation oriented adaptive down-sampling algorithm is proposed for low bit-rate image coding in this paper. Given an image, the proposed algorithm is able to obtain a low resolution image, from which a high quality image with the same resolution as the input image can be interpolated. Different from the traditional down-sampling algorithms, which are independent from the interpolation process, the proposed down-sampling algorithm hinges the down-sampling to the interpolation process. Consequently, the proposed down-sampling algorithm is able to maintain the original information of the input image to the largest extent. The down-sampled image is then fed into JPEG. A total variation (TV) based post processing is then applied to the decompressed low resolution image. Ultimately, the processed image is interpolated to maintain the original resolution of the input image. Experimental results verify that utilizing the downsampled image by the proposed algorithm, an interpolated image with much higher quality can be achieved. Besides, the proposed algorithm is able to achieve superior performance than JPEG for low bit rate image coding.
(see provided link for all details, formulas etc.)
Fast canvas resample with good quality: http://jsfiddle.net/9g9Nv/442/
Update: version 2.0 (faster, web workers + transferable objects) - https://github.com/viliusle/Hermite-resize
/**
 * Hermite resize - fast image resize/resample using Hermite filter. Single-CPU version!
 *
 * @param {HTMLCanvasElement} canvas
 * @param {number} width
 * @param {number} height
 * @param {boolean} resize_canvas if true, canvas will be resized. Optional.
 */
function resample_single(canvas, width, height, resize_canvas) {
    var width_source = canvas.width;
    var height_source = canvas.height;
    width = Math.round(width);
    height = Math.round(height);

    var ratio_w = width_source / width;
    var ratio_h = height_source / height;
    var ratio_w_half = Math.ceil(ratio_w / 2);
    var ratio_h_half = Math.ceil(ratio_h / 2);

    var ctx = canvas.getContext("2d");
    var img = ctx.getImageData(0, 0, width_source, height_source);
    var img2 = ctx.createImageData(width, height);
    var data = img.data;
    var data2 = img2.data;

    for (var j = 0; j < height; j++) {
        for (var i = 0; i < width; i++) {
            var x2 = (i + j * width) * 4;
            var weight = 0;
            var weights = 0;
            var weights_alpha = 0;
            var gx_r = 0;
            var gx_g = 0;
            var gx_b = 0;
            var gx_a = 0;
            var center_y = (j + 0.5) * ratio_h;
            var yy_start = Math.floor(j * ratio_h);
            var yy_stop = Math.ceil((j + 1) * ratio_h);
            for (var yy = yy_start; yy < yy_stop; yy++) {
                var dy = Math.abs(center_y - (yy + 0.5)) / ratio_h_half;
                var center_x = (i + 0.5) * ratio_w;
                var w0 = dy * dy; // pre-calc part of w
                var xx_start = Math.floor(i * ratio_w);
                var xx_stop = Math.ceil((i + 1) * ratio_w);
                for (var xx = xx_start; xx < xx_stop; xx++) {
                    var dx = Math.abs(center_x - (xx + 0.5)) / ratio_w_half;
                    var w = Math.sqrt(w0 + dx * dx);
                    if (w >= 1) {
                        // pixel too far
                        continue;
                    }
                    // hermite filter
                    weight = 2 * w * w * w - 3 * w * w + 1;
                    var pos_x = 4 * (xx + yy * width_source);
                    // alpha
                    gx_a += weight * data[pos_x + 3];
                    weights_alpha += weight;
                    // colors
                    if (data[pos_x + 3] < 255)
                        weight = weight * data[pos_x + 3] / 250;
                    gx_r += weight * data[pos_x];
                    gx_g += weight * data[pos_x + 1];
                    gx_b += weight * data[pos_x + 2];
                    weights += weight;
                }
            }
            data2[x2] = gx_r / weights;
            data2[x2 + 1] = gx_g / weights;
            data2[x2 + 2] = gx_b / weights;
            data2[x2 + 3] = gx_a / weights_alpha;
        }
    }
    // clear and resize canvas
    if (resize_canvas === true) {
        canvas.width = width;
        canvas.height = height;
    } else {
        ctx.clearRect(0, 0, width_source, height_source);
    }
    // draw
    ctx.putImageData(img2, 0, 0);
}
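A minimal usage sketch (my own; it assumes img is an already loaded, same-origin image, since the function reads pixels back with getImageData):

var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
canvas.getContext('2d').drawImage(img, 0, 0);   // draw source at natural size
resample_single(canvas, img.width / 2, img.height / 2, true); // halve, resizing the canvas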
Since your problem is to downscale your image, there is no point in talking about interpolation, which is about creating pixels. The issue here is downsampling.
To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.
For performance reasons, browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.
Yet there's an exception to that: since 2X image downsampling is very simple to compute (average four pixels to make one) and is used for retina/HiDPI displays, this case is handled properly - the browser really does use four pixels to make one.
BUT... if you apply a 2X downsampling several times, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power plus a last resizing is very noisy.
What you seek is a pixel-perfect downsampling, that is: a re-sampling of the image that takes all input pixels into account, whatever the scale.
To do that we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel falls entirely inside a destination pixel, overlaps an X border, a Y border, or both.
(A diagram would be nice here, but I don't have one.)
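A small worked example instead (my own numbers): with scale = 0.4, source pixel sx = 2 projects onto the interval [0.8, 1.2) of the target's x axis, so it overlaps target pixels 0 and 1. The code below computes wx = tX + 1 - tx = 1 - 0.8 = 0.2 and nwx = tx + scale - tX - 1 = 0.2, i.e. the pixel splits its contribution 50/50 between the two target pixels; summed over both axes, each source pixel always contributes exactly scale² of its value in total.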
Here's an example of canvas scale vs. my pixel-perfect scale on a 1/3 scale of a wombat.
Notice that the picture might get scaled in your browser, and has been JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like it has white hairs, unlike the source picture.
The right image is less catchy but definitely nicer.
Here's the code to do the pixel perfect downscaling :
fiddle result :
http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
fiddle itself : http://jsfiddle.net/gamealchemist/r6aVp/
// scales the image by (float) scale < 1
// returns a canvas containing the scaled image.
function downScaleImage(img, scale) {
    var imgCV = document.createElement('canvas');
    imgCV.width = img.width;
    imgCV.height = img.height;
    var imgCtx = imgCV.getContext('2d');
    imgCtx.drawImage(img, 0, 0);
    return downScaleCanvas(imgCV, scale);
}

// scales the canvas by (float) scale < 1
// returns a new canvas containing the scaled image.
function downScaleCanvas(cv, scale) {
    if (!(scale < 1) || !(scale > 0)) throw ('scale must be a positive number < 1');
    var sqScale = scale * scale; // square scale = area of source pixel within target
    var sw = cv.width; // source image width
    var sh = cv.height; // source image height
    var tw = Math.floor(sw * scale); // target image width
    var th = Math.floor(sh * scale); // target image height
    var sx = 0, sy = 0, sIndex = 0; // source x, y, index within source array
    var tx = 0, ty = 0, yIndex = 0, tIndex = 0; // target x, y, line index and pixel index within target array
    var tX = 0, tY = 0; // rounded tx, ty
    var w = 0, nw = 0, wx = 0, nwx = 0, wy = 0, nwy = 0; // weight / next weight x / y
    // weight is the weight of the current source point within the target.
    // next weight is the weight of the current source point within the next target point.
    var crossX = false; // does scaled px cross its current px right border ?
    var crossY = false; // does scaled px cross its current px bottom border ?
    var sBuffer = cv.getContext('2d').getImageData(0, 0, sw, sh).data; // source buffer 8 bit rgba
    var tBuffer = new Float32Array(3 * tw * th); // target buffer Float32 rgb
    var sR = 0, sG = 0, sB = 0; // source's current point r,g,b
    /* untested !
    var sA = 0; // source alpha */

    for (sy = 0; sy < sh; sy++) {
        ty = sy * scale; // y src position within target
        tY = 0 | ty; // rounded : target pixel's y
        yIndex = 3 * tY * tw; // line index within target array
        crossY = (tY != (0 | ty + scale));
        if (crossY) { // if pixel is crossing bottom target pixel
            wy = (tY + 1 - ty); // weight of point within target pixel
            nwy = (ty + scale - tY - 1); // ... within y+1 target pixel
        }
        for (sx = 0; sx < sw; sx++, sIndex += 4) {
            tx = sx * scale; // x src position within target
            tX = 0 | tx; // rounded : target pixel's x
            tIndex = yIndex + tX * 3; // target pixel index within target array
            crossX = (tX != (0 | tx + scale));
            if (crossX) { // if pixel is crossing target pixel's right
                wx = (tX + 1 - tx); // weight of point within target pixel
                nwx = (tx + scale - tX - 1); // ... within x+1 target pixel
            }
            sR = sBuffer[sIndex]; // retrieving r,g,b for curr src px.
            sG = sBuffer[sIndex + 1];
            sB = sBuffer[sIndex + 2];
            /* !! untested : handling alpha !!
            sA = sBuffer[sIndex + 3];
            if (!sA) continue;
            if (sA != 0xFF) {
                sR = (sR * sA) >> 8; // or use /256 instead ??
                sG = (sG * sA) >> 8;
                sB = (sB * sA) >> 8;
            }
            */
            if (!crossX && !crossY) { // pixel does not cross
                // just add components weighted by squared scale.
                tBuffer[tIndex] += sR * sqScale;
                tBuffer[tIndex + 1] += sG * sqScale;
                tBuffer[tIndex + 2] += sB * sqScale;
            } else if (crossX && !crossY) { // cross on X only
                w = wx * scale;
                // add weighted component for current px
                tBuffer[tIndex] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tX+1) px
                nw = nwx * scale;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
            } else if (crossY && !crossX) { // cross on Y only
                w = wy * scale;
                // add weighted component for current px
                tBuffer[tIndex] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tY+1) px
                nw = nwy * scale;
                tBuffer[tIndex + 3 * tw] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
            } else { // crosses both x and y : four target points involved
                // add weighted component for current px
                w = wx * wy;
                tBuffer[tIndex] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // for tX + 1 ; tY px
                nw = nwx * wy;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
                // for tX ; tY + 1 px
                nw = wx * nwy;
                tBuffer[tIndex + 3 * tw] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
                // for tX + 1 ; tY + 1 px
                nw = nwx * nwy;
                tBuffer[tIndex + 3 * tw + 3] += sR * nw;
                tBuffer[tIndex + 3 * tw + 4] += sG * nw;
                tBuffer[tIndex + 3 * tw + 5] += sB * nw;
            }
        } // end for sx
    } // end for sy

    // create result canvas
    var resCV = document.createElement('canvas');
    resCV.width = tw;
    resCV.height = th;
    var resCtx = resCV.getContext('2d');
    var imgRes = resCtx.getImageData(0, 0, tw, th);
    var tByteBuffer = imgRes.data;
    // convert the float32 array into a UInt8Clamped array
    var pxIndex = 0;
    for (sIndex = 0, tIndex = 0; pxIndex < tw * th; sIndex += 3, tIndex += 4, pxIndex++) {
        tByteBuffer[tIndex] = Math.ceil(tBuffer[sIndex]);
        tByteBuffer[tIndex + 1] = Math.ceil(tBuffer[sIndex + 1]);
        tByteBuffer[tIndex + 2] = Math.ceil(tBuffer[sIndex + 2]);
        tByteBuffer[tIndex + 3] = 255;
    }
    // write the result to the canvas.
    resCtx.putImageData(imgRes, 0, 0);
    return resCV;
}
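A minimal usage sketch (my own; the image must be fully loaded and same-origin, since the function reads it back with getImageData):

var img = new Image();
img.onload = function () {
    var scaled = downScaleImage(img, 1 / 3); // returns a new canvas at 1/3 scale
    document.body.appendChild(scaled);
};
img.src = 'wombat.jpg'; // hypothetical same-origin image path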
It is quite memory-greedy, since a float buffer is required to store the intermediate values of the destination image (if we count the result canvas, we use six times the source image's memory in this algorithm).
It is also quite expensive, since each source pixel is used whatever the destination size, and we have to pay for getImageData / putImageData, which are quite slow as well.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.
If you wish to use canvas only, the best result will come from multiple down-steps. But that's not good enough yet. For better quality you need a pure JS implementation. We just released pica - a high-speed downscaler with variable quality/speed. In short, it resizes a 1280*1024px image in ~0.1 s and a 5000*3000px image in 1 s, at the highest quality (Lanczos filter with 3 lobes). Pica has a demo where you can play with your images and quality levels, and even try it on mobile devices.
Pica does not have an unsharp mask yet, but that will be added very soon. It's much easier than implementing a high-speed convolution filter for resizing.
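For reference, a minimal sketch against the promise-based API of recent pica versions (my own example; check the README for the exact current signature, and note the source canvas here is hypothetical):

var from = document.getElementById('source'); // hypothetical source canvas
var to = document.createElement('canvas');
to.width = 640;
to.height = 480;
var resizer = window.pica(); // browser build; in Node: require('pica')()
resizer.resize(from, to)
    .then(function (result) { return resizer.toBlob(result, 'image/jpeg', 0.90); })
    .then(function (blob) { console.log('resized to a', blob.size, 'byte blob'); });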