jimp: Image quality getting bad
Hi,
I am new to image optimization. I just want to optimize my images to make them smaller in size and dimensions while maintaining quality. I used the following code:
new Jimp("img.jpg", function (err, image) {
if(!err){
var origImageDim = {width:this.bitmap.width,height:this.bitmap.height};
var dimensions = calculateImageDimensions(origImageDim.width,origImageDim.height,600,400);//Get proportionate dimensions
var imageClone = image.clone();
image.cover(dimensions.width, dimensions.height).write("destFolder/img.jpg", function(err, image){
if(!err){
console.log("Image optimised successfully");
}
else{
console.log(err);
}
}
)
}
}
function calculateImageDimensions(width, height, maxWidth, maxHeight) {
    // calculate the width and height, constraining the proportions
    if (width > height) {
        if (width > maxWidth) {
            height = Math.round(height * maxWidth / width);
            width = maxWidth;
        }
    } else {
        if (height > maxHeight) {
            width = Math.round(width * maxHeight / height);
            height = maxHeight;
        }
    }
    return { width: width, height: height };
}
The problem I am facing is that the quality of the resulting image is getting worse and the file size (in KB) is getting bigger. Can you please help? Am I doing anything wrong?

About this issue
- State: closed
- Created 9 years ago
- Comments: 15 (1 by maintainers)
Hi, @canvas-manik. Jimp already provides most of what you need for this.
One thing you might not be aware of is the JPEG quality (and compression) setting. Your source image appears to be saved at quality 70, while Jimp saves at quality 100 (the maximum) by default. Reducing this to 60 brings the file size down significantly.
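For example, you can chain Jimp's quality() call before write(). This is a minimal sketch that reuses the file names, the 600x400 bounds, and the calculateImageDimensions helper from your snippet above:

var Jimp = require("jimp");

new Jimp("img.jpg", function (err, image) {
    if (err) return console.log(err);
    var dims = calculateImageDimensions(this.bitmap.width, this.bitmap.height, 600, 400);
    image
        .cover(dims.width, dims.height)   // scale/crop to the target box
        .quality(60)                      // JPEG quality 0-100; Jimp defaults to 100
        .write("destFolder/img.jpg", function (writeErr) {
            if (writeErr) console.log(writeErr);
            else console.log("Image optimised successfully");
        });
});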
With regard to visual quality, any resize method is destructive and can affect appearance. Jimp uses a two-pass bicubic scaling algorithm, which is pretty good: https://en.wikipedia.org/wiki/Bicubic_interpolation But any scaling will, by definition, alter your pixels.
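If you want to experiment with different scalers, newer Jimp releases let you pass an explicit resize mode (the constant names below assume a recent version; check your version's README):

// assumes a Jimp version that exposes resize-mode constants
image.resize(600, Jimp.AUTO, Jimp.RESIZE_BICUBIC);   // explicit bicubic, preserving aspect ratio
image.resize(600, Jimp.AUTO, Jimp.RESIZE_BILINEAR);  // bilinear, for comparison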
Let me know if this works for you.
There are two modes of scaling if you refer to the demo at http://taisel.github.io/JS-Image-Resizer/
One is bilinear; the other is neither bilinear nor nearest neighbor, but it preserves the image's crispness (it looks like a nearest-neighbor integer upscale followed by bilinear, for a comparable end result). Downscaling always uses the custom algorithm: it is equivalent to multiple passes of bilinear, each limited to 2x downscaling, so it never misses a source input pixel. That is emphasized in the demo with one of the images shown (context2d usually uses plain bilinear). If you downscaled with context2d (presuming bilinear) multiple times, each by a ratio of no more than 2x, down to the same final image size, you would get the same result as the custom algorithm. A rough sketch of that multi-pass idea with a plain canvas 2D context follows (an illustration of the technique, not code taken from the resizer itself):
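// Downscale srcCanvas to targetWidth x targetHeight in steps of at most 2x,
// so each bilinear pass still samples every source pixel.
function stepDownscale(srcCanvas, targetWidth, targetHeight) {
    var canvas = srcCanvas;
    while (canvas.width / 2 >= targetWidth && canvas.height / 2 >= targetHeight) {
        var half = document.createElement("canvas");
        half.width = Math.round(canvas.width / 2);
        half.height = Math.round(canvas.height / 2);
        half.getContext("2d").drawImage(canvas, 0, 0, half.width, half.height);
        canvas = half;
    }
    // final pass to hit the exact target size (ratio is now <= 2x)
    var out = document.createElement("canvas");
    out.width = targetWidth;
    out.height = targetHeight;
    out.getContext("2d").drawImage(canvas, 0, 0, targetWidth, targetHeight);
    return out;
}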
I hope I explained the peculiarities here.