next.js: Bug: extremely high memory usage with `next dev`

Verify canary release

  • I verified that the issue exists in the latest Next.js canary release

Provide environment information

Operating System:
  Platform: android
  Arch: arm64
  Version: #2 SMP PREEMPT Fri Aug 5 15:52:33 AST 2022
Binaries:
  Node: 18.10.0
  npm: 8.19.2
  Yarn: 1.22.19
  pnpm: N/A
Relevant packages:
  next: 13.0.3-canary.0
  eslint-config-next: 13.0.2
  react: 18.2.0
  react-dom: 18.2.0

What browser are you using? (if relevant)

Chrome Canary v109.0.5400.0 (Android)

How are you deploying your application? (if relevant)

Vercel

Describe the Bug

Memory usage on the latest Next 13 releases (13.0.3-canary.0 and 13.0.2) is far too high: the next dev command uses about 1 GB of RAM, whereas Next 12 uses roughly 300–500 MB.

[Screenshot: Screenshot_20221104-225938_Termux]

Expected Behavior

next dev should use less RAM.

Link to reproduction

.

To Reproduce

  1. Create a project with Next 13
  2. Run next dev
  3. Check the RAM usage (see the shell sketch below)
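
For reference, a rough shell sketch of those steps, assuming create-next-app and a POSIX environment; the project name and the ps/pgrep invocation are illustrative, not taken from the original report:

# 1. create a fresh Next 13 project (project name is arbitrary)
npx create-next-app@13 memory-repro
cd memory-repro

# 2. start the dev server
npm run dev

# 3. in a second terminal, check the dev server's resident memory (RSS, in KB)
ps -o rss,args -p "$(pgrep -f 'next dev' | head -n 1)"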

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 37
  • Comments: 68 (18 by maintainers)

Commits related to this issue

Most upvoted comments

Another update:

  • good news: I merged two fixes, so hopefully the next canary will help.
  • bad news: there’s another memory leak that comes from Node itself which means that it’s gonna be very hard to fix. I’m currently exploring how we could work around the issue.
  • bad news 2: I think I saw another memory leak linked to the edge runtime, so I’ll have to investigate that too.

Hey, sorry, I was busy investigating the issue!

Small update: I managed to identify the 3 main sources of memory leaks that I think are causing the crashes in dev so I’ll work on fixing those quickly. This should help a lot hopefully.

Thanks @soylemezali42 for the test app, it was very very helpful 🙏.

Seems to constantly crash for me on Next 13.

Hey folks, next.js team member here. I’m investigating this issue. Would anyone care to share with me privately their projects so I can investigate properly? Or a public repro?

@feedthejim Thanks, it seems the combination of fixes you’ve implemented has reduced the amount of crashing in the latest canary release.

One thing that I find really strange is how slow my app is to compile when switching pages in development mode.

Could this be caused by the prefetching of links you described earlier? I can sometimes wait in excess of 8–10 seconds for a page change. (Turbo unfortunately doesn’t work for me.)

In production this issue doesn’t exist (it’s lightning fast), but development is really tedious at the moment.

The fix has landed on canary, please try it out and let me know if it makes your experience better.

We’re still blocked on the Node.js bug and we’re investigating how we can solve this but I don’t expect that it’ll be fixed fast 😕 .
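
For anyone who wants to test it, installing the canary release in an existing project is typically just the following (assuming npm; the yarn form mirrors the update commands later in this thread):

# pull in the latest canary build of Next.js
npm i next@canary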

Same issue here with the latest canary. Right after startup it uses 2.5 GB of RAM for a rather simple app, and after a while it crashes.

Still happening on 13.0.5-canary.4.

Still happening on 13.0.6-canary.2.

Still not fixed in 13.0.5.

Still happening for me as well on 13.0.4.

@feedthejim Long story short, it turns out the memory leak is not related to Next.js but to Tauri (https://tauri.app/), and is caused by the tauri-plugin-persisted-scope plugin.

I will open an issue there.

Thanks for your reply.

This is an extremely annoying problem. We are a website design agency. We migrated one of our websites to Next.js 13 (13.0.5) and it’s using more than 900 MB of RAM. The same site in ASP.NET Core Pages used less than 100 MB.

Vercel, at least give us hints on how to debug this issue on our side.

Same problem here! If a page has a lot of <Link> components, Next prepares all of the linked pages, resulting in high memory consumption and slowing the app down.
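
A possible stopgap while that is investigated: next/link exposes a documented prefetch prop, so individual links can opt out of prefetching. A minimal sketch follows; the component and data names are made up purely for illustration.

import Link from "next/link";

// hypothetical listing component, shown only to illustrate prefetch={false}
export function ArticleList({ articles }) {
  return (
    <ul>
      {articles.map((article) => (
        <li key={article.slug}>
          {/* prefetch={false} prevents this link's page from being prepared eagerly */}
          <Link href={`/articles/${article.slug}`} prefetch={false}>
            {article.title}
          </Link>
        </li>
      ))}
    </ul>
  );
}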

@adshodgson do you mind sharing a repro (preferably via CodeSandbox or a similar tool)? I can take a look.

There are a few things that can slow down dev times:

  • we changed the strategy around serving routes for the app router in dev, where we have to bundle the server and the client in order to render a page. This is not slow per se, but if you’re using really big packages like react-icons, which bundle a huge number of files, this might affect you. I’m investigating ways we can make that better (see the import sketch after the code example below).
  • you might be doing some blocking work in a top-level layout/middleware file, like @Alexandredc mentioned, that is not properly wrapped in a Suspense boundary, something like:

// blocking: the whole layout waits for the async call before rendering anything
async function Layout({ children }) {
  const somethingThatDoesntNeedToBlockRendering = await something();

  return <>
    {somethingThatDoesntNeedToBlockRendering && <Foo />}
    {children}
  </>
}

// better alternative that leverages streaming: move the await into its own
// async component and wrap it in a Suspense boundary so it no longer blocks
async function SomethingThatDoesntNeedToBlockRendering() {
  const data = await something();
  return data && <Foo />
}

function Layout({ children }) {
  return <>
    <React.Suspense fallback={null}>
      <SomethingThatDoesntNeedToBlockRendering />
    </React.Suspense>
    {children}
  </>
}
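
On the react-icons point above, a sketch of one commonly suggested mitigation: import each icon from a per-file path (for example via the @react-icons/all-files package) instead of the family-wide barrel file, so the dev bundler only processes the icons you actually use. This is a general suggestion under those assumptions, not something prescribed in this thread.

// heavier: imports the whole "fa" family module, which the dev bundler must process
// import { FaGithub } from "react-icons/fa";

// lighter: imports only the single icon's file (requires @react-icons/all-files)
import { FaGithub } from "@react-icons/all-files/fa/FaGithub";

// hypothetical component, shown only to put the import in context
export function RepoLink() {
  return (
    <a href="https://github.com/vercel/next.js">
      <FaGithub /> next.js
    </a>
  );
}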

Thanks for the feedback @adshodgson @soylemezali42!

@adshodgson I will surface that to the team… that’s definitely something on my mind as well.

@soylemezali42 can you expand more on not using yarn dev anymore? Also, is your other feedback related to page slowness?

So I just added something to relaunch the dev server whenever the memory gets too high… not a great fix but should help whilst I investigate.

There’s also one thing I forgot to mention to you folks: something that might unblock you is to increase the maximum heap available to Node with NODE_OPTIONS=--max-old-space-size=6144. If you have more than enough RAM available, like me, that should also help (temporarily).
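
Concretely, that workaround looks something like this when starting the dev server (assuming a POSIX shell; the 6144 MB figure is the value quoted above, so adjust it to the RAM you actually have):

# raise the Node.js heap limit to ~6 GB for this run of the dev server
NODE_OPTIONS=--max-old-space-size=6144 npm run dev

# or bake it into the package.json dev script:
#   "dev": "NODE_OPTIONS=--max-old-space-size=6144 next dev"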

I have tested the app with canary.4. Sure, there is some improvement in the development server, but the app still crashes after navigating between a few pages. By the way, thanks to the Vercel team for the quick improvements and replies.

@Nefcanto This commit is probably going to resolve the issue: the prefetch development commit.

@feedthejim sure, I can create a public repo that works with a public API. I need some time though to create it. Maybe a day.

I am on 13.0.4 and it still happened to me a couple of times today. This is the latest stack trace:

<--- Last few GCs --->

[21540:000001D236F10AA0] 1498088 ms: Mark-sweep 4065.1 (4138.6) -> 4053.0 (4142.6) MB, 270.3 / 0.1 ms (average mu = 0.208, current mu = 0.056) allocation failure scavenge might not succeed
[21540:000001D236F10AA0] 1498502 ms: Mark-sweep 4069.0 (4142.6) -> 4056.9 (4146.6) MB, 396.8 / 0.1 ms (average mu = 0.115, current mu = 0.040) allocation failure scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 00007FF62B780AAF v8::internal::CodeObjectRegistry::~CodeObjectRegistry+124015
 2: 00007FF62B70C866 v8::internal::wasm::WasmCode::safepoint_table_offset+64182
 3: 00007FF62B70D8E2 v8::internal::wasm::WasmCode::safepoint_table_offset+68402
 4: 00007FF62C041CE4 v8::Isolate::ReportExternalAllocationLimitReached+116
 5: 00007FF62C02C2AD v8::SharedArrayBuffer::Externalize+781
 6: 00007FF62BECF88C v8::internal::Heap::EphemeronKeyWriteBarrierFromCode+1468
 7: 00007FF62BECC9A4 v8::internal::Heap::CollectGarbage+4244
 8: 00007FF62BECA320 v8::internal::Heap::AllocateExternalBackingStore+2000
 9: 00007FF62BEE8030 v8::internal::FreeListManyCached::Reset+1408
10: 00007FF62BEE86E5 v8::internal::Factory::AllocateRaw+37
11: 00007FF62BEFA68E v8::internal::FactoryBase<v8::internal::Factory>::AllocateRawArray+46
12: 00007FF62BEFD2CA v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithFiller+74
13: 00007FF62BEFD523 v8::internal::FactoryBase<v8::internal::Factory>::NewFixedArrayWithMap+35
14: 00007FF62BD03B96 v8::internal::HashTable<v8::internal::NameDictionary,v8::internal::NameDictionaryShape>::EnsureCapacity<v8::internal::Isolate>+246
15: 00007FF62BD0193A v8::internal::Dictionary<v8::internal::NameDictionary,v8::internal::NameDictionaryShape>::Add<v8::internal::Isolate>+58
16: 00007FF62BD09B66 v8::internal::BaseNameDictionary<v8::internal::NameDictionary,v8::internal::NameDictionaryShape>::Add+118
17: 00007FF62BC16858 v8::internal::Runtime::GetObjectProperty+1720
18: 00007FF62C0CF9C1 v8::internal::SetupIsolateDelegate::SetupHeap+494417
19: 000001D23A365117

error Command failed with exit code 134.

Release 13.0.4 seems to have fixed it for me. You can update by doing:

npm i next@13.0.4 # npm
# or
yarn add next@13.0.4 # yarn

in your current project. It shouldn’t break anything that’s already working. Also try to eliminate dependencies you don’t need or that have a lighter alternative.

Here’s the release.

Same for me. A couple of times a day I get this error. Restarting the dev server helps.

Restarting the server helps, but it’s annoying to have to restart it every time.