next.js: `next/image` memory leak?

Hi,

After the first production release of our site with Next@v10 (v10.0.2, to be precise), we noticed a gradual increase in memory consumption on our servers.

Edit: worth noting that we’ve skipped 9.5 and upgraded straight from 9.4. So it could be a 9.5 issue.

Edit: maybe not related to the Image component, as we stopped using it and still notice a gradual increase in memory consumption (which didn’t happen before v10). See this comment.

Edit (Jan 8th 2021): this is definitely related to the Image component. We gave it a second chance, but had to roll back due to very high memory usage.

The only library updated in these releases was Next, and we use the new Image component for a very limited set of images (a rough usage sketch follows this list):

  • I estimate that no more than 50 images are being optimised with the Image component
  • All of them are above the fold (hero images, with a max-age of 1 hour set in the Cache-Control header at the origin)
  • Our custom config in next.config.js:
  images: {
    domains: ['storage.googleapis.com'],
    deviceSizes: [500, 575, 750, 1150, 1500, 1920],
  },
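
For context, here is a rough sketch of how one of these hero images is rendered (the URL, dimensions, and component name are illustrative, not our actual markup):

  import Image from 'next/image'

  // Illustrative hero image; the real src lives under the
  // storage.googleapis.com domain allowed in next.config.js above.
  export function Hero() {
    return (
      <Image
        src="https://storage.googleapis.com/our-bucket/hero.jpg"
        alt="Hero"
        width={1920}
        height={800}
        priority
      />
    )
  }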

Has anyone experienced this as well? Could it be related to the Image component? If so, it is worrying: we are using it for only a very limited set of images, and we plan to adopt it for a much larger set (e.g. product images).

As you can see in the image below, up until Next v10 the memory consumption was pretty steady. Let me know if we can provide some more details.

[Chart: memory consumption across Next 10 releases]

_Originally posted by @josebrito in https://github.com/vercel/next.js/discussions/19597_

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 27
  • Comments: 20 (4 by maintainers)

Most upvoted comments

Hi, I’m the sharp maintainer and I’ve just been made aware that Next.js switched away from using it, due in part to this perceived memory leak.

The reports here look like the effects of memory fragmentation within the stock glibc-based Linux allocator and its inability to return freed memory to the OS. That’s why people who are using the jemalloc memory allocator or the musl-based Alpine are unaffected. There’s a lot of background and discussion about this at https://github.com/lovell/sharp/issues/955.

For those still using glibc-based Linux and older versions of Next.js that depend upon sharp, concurrency and therefore the likelihood of fragmentation can be manually controlled via e.g. sharp.concurrency(1) - see https://sharp.pixelplumbing.com/api-utility#concurrency
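
For illustration, a minimal sketch of capping sharp’s thread-pool concurrency in a standalone script (the filenames are placeholders; with older Next.js versions you would also need to make sure the call applies to the same copy of sharp that the built-in optimizer resolves):

  const sharp = require('sharp');

  // Cap libvips' thread pool at one thread per operation, reducing the
  // number of concurrent allocations that can fragment glibc's heap.
  sharp.concurrency(1);

  // Placeholder filenames; any source image works here.
  sharp('hero-original.jpg')
    .resize(1150)
    .toFile('hero-1150.jpg')
    .then((info) => console.log(`resized to ${info.width}x${info.height}`))
    .catch((err) => console.error(err));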

The forthcoming sharp v0.28.0 will detect which memory allocator is being used at runtime and limit concurrency if required - see https://github.com/lovell/sharp/issues/2607 - hopefully this will allow people to make a more informed choice about the most appropriate memory allocator for their scenario.

I would expect the new Wasm-based approach to exhibit a much higher peak memory requirement as entire decompressed images will be held in memory, possibly for longer periods than previously due to slower encoding performance. I notice there are comments in https://github.com/vercel/next.js/issues/22925 which would appear to confirm this.

As always, please do feel free to ask for help at the https://github.com/lovell/sharp repo if you’re unsure about the best way in which to use sharp for a given scenario. If you hadn’t seen it, there’s a meta-issue for the next release at https://github.com/lovell/sharp/issues/2604

Please try running next@canary; we’ve rewritten the image optimization to no longer use sharp and instead depend on the WebAssembly binaries included in squoosh.app. PR: #22253

This happened on my project as well; I removed all next/image components for now as a quick fix.

Regarding the high memory usage in 10.0.8, there’s a new bug report in #22925.

@j-mendez Per the changelog, #22253 is in v10.0.8. v10.0.9-canary.0 doesn’t seem to be related, but maybe I’m missing something?

Still seeing this on 10.0.8: modifying a ~7 MB file makes the memory jump to 1.2 GB.


Switching to libjemalloc helps; we had huge memory leakage on Ubuntu 20.04. More info here: lovell/sharp#1803

How did you get libjemalloc1 on 20.04? All I can find is libjemalloc2, which eventually crashes in production.

I installed libjemalloc2 and created a pm2 config to set the environment variables:

module.exports = {
  apps: [
    {
      name: "site",
      script: "yarn",
      args: "start",
      watch: true,
      // Preload jemalloc so the Node process (and sharp) allocate through it
      // instead of the stock glibc allocator.
      env: { LD_PRELOAD: '/usr/lib/x86_64-linux-gnu/libjemalloc.so.2', NODE_ENV: 'production' },
      env_production: { LD_PRELOAD: '/usr/lib/x86_64-linux-gnu/libjemalloc.so.2', NODE_ENV: 'production' },
    },
  ],
};
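
As a sanity check (a rough sketch, assuming Linux with /proc available), you can confirm from inside the Node process that jemalloc was actually preloaded:

  // check-jemalloc.js: prints whether libjemalloc is mapped into this process.
  const fs = require('fs');

  const maps = fs.readFileSync('/proc/self/maps', 'utf8');
  console.log(maps.includes('libjemalloc')
    ? 'jemalloc is loaded'
    : 'jemalloc is NOT loaded (check the LD_PRELOAD path)');

Run it with the same LD_PRELOAD value as above to verify the library path is correct for your distribution.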