next.js: [NEXT-841] FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
What version of Next.js are you using?
12.0.7
What version of Node.js are you using?
16.6.2
What browser are you using?
Chrome / Safari
What operating system are you using?
macOS
How are you deploying your application?
other
Describe the Bug
We have a monorepo with Nx in which we use Next.js for SSR. We were on Next 11 and wanted to move to Next 12 with SWC. After making the necessary changes, our app crashes with the error below.
We have tried adding more memory, but we feel that the issue lies elsewhere.
<--- Last few GCs --->
[66122:0x7fe502d00000] 544670 ms: Mark-sweep (reduce) 4060.1 (4143.2) -> 4059.7 (4144.0) MB, 5936.8 / 0.1 ms (average mu = 0.080, current mu = 0.001) allocation failure scavenge might not succeed
[66122:0x7fe502d00000] 550506 ms: Mark-sweep (reduce) 4060.8 (4144.0) -> 4060.4 (4144.7) MB, 5834.7 / 0.1 ms (average mu = 0.042, current mu = 0.000) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
1: 0x108960ae5 node::Abort() (.cold.1) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
2: 0x1076563a9 node::Abort() [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
3: 0x10765651f node::OnFatalError(char const*, char const*) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
4: 0x1077d5137 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
5: 0x1077d50d3 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
6: 0x10798c0b5 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
7: 0x10798aa79 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
8: 0x107996c9a v8::internal::Heap::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
9: 0x107996d21 v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
10: 0x10796539c v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
11: 0x107d1680e v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
12: 0x10809fab9 Builtins_CEntry_Return1_DontSaveFPRegs_ArgvOnStack_NoBuiltinExit [/Users/n0s00jx/.volta/tools/image/node/16.6.2/bin/node]
13: 0x10c684c2e
14: 0x10c6847f5
Expected Behavior
The dev server should run without crashing.
To Reproduce
- Upgrade to Next.js 12.0.7 / 12.0.4 and try running the dev server.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 127
- Comments: 191 (45 by maintainers)
Commits related to this issue
- fix: Next js heap error - It's related to the following issue https://github.com/vercel/next.js/issues/32314 — committed to UniverseXYZ/UniverseApp-Frontend by taskudis 2 years ago
- Refactor Server Router (#39902) — committed to vercel/next.js by wyattjoh 2 years ago
- Memory improvements to static workers (#47823) Continue the work to improve #32314. This PR splits `webpack-build` into an `index` module and an `impl` module. The `index` module invokes `impl` b... — committed to vercel/next.js by shuding a year ago
Can you reopen this? `esmExternals: false` does not help in all cases (it does not in ours).

`esmExternals: false` is not a solution. Neither is increasing the memory. These are workarounds; this problem should not arise in the first place.

I have the same problem:
In my case the project runs with docker-compose using the image `node:16.13-alpine3.14`. If the project is run on my machine (Intel Mac with Big Sur) it works fine, but within the container it crashes. Other info that could help: we use `next-transpile-modules` and `treat/webpack-plugin` in the `next.config.js`.

I resolved my issue by adding the following to my `next.config.js`:

So I can confirm @ryne2010's theory connecting named exports to this issue. Over the past week the issue started rising again (we managed to tame it with dynamic imports on our reducers, since we currently only need them on the client).
This week our biggest page reached ~2 min for first render. Generally the issue is much worse on pages with dynamic routing; with static routing it's still 5-15 s.
Measurement results

I've started rewriting this page's exports to default, with 5 measurements while rewriting. Went from the original `Next.js-route-change-to-render: 126012ms` to `Next.js-route-change-to-render: 30346ms`. So that's about a 4x improvement just from rewriting ~20 components from named exports to default. There are still plenty more generic components (buttons, links…), hooks and selectors which are using named exports.

Measurement details

`"next": "12.1.5"` with SWC (but there was no difference with the Babel fallback in the past). We can provide more information, but providing a minimal reproducible repo is not possible since the problem scales with code quantity and we can't share all the code because of an NDA.
I'm seeing this pretty regularly with Next.js 13 and `appDir`. During `next dev`, memory usage grows steadily until an OOM happens, so I find myself regularly restarting `next dev`.

Note: I never experienced this problem prior to using Next.js 13 and `appDir`, so I'm guessing this is related to the new experimental features, which means that it's likely a separate issue from the original one in this thread.

Here's an example stack trace from `next dev` running on https://github.com/transitive-bullshit/next-movie:

- Next.js: `13.0.4`
- Node.js: `v16.18.0`

Posting here because this issue is tagged with "please add a complete reproduction", but let me know if I should be opening a new issue.

I am able to consistently reproduce a build failure due to JS heap OOM with a large number of pages:
In most CI environments this will of course fail as well. The build passes linting, but fails on the ‘Creating an optimized production build’ step.
I can confirm that the repo submitted by @transitive-bullshit is also affected; it seems that most of the time webpack incrementally bumps the number of files watched and crashes. As mentioned above, there is a bug when people try to use components living outside the `pages` and `node_modules` directories.

There are some local files used for caching in the `.next/cache` folder, but I cannot tell if that's the cause. Also, this issue is mentioned in the webpack-dev-server repo as well.

All in all, I tried to find what's wrong by watching for file or size changes, but still no luck. The only indicator is the console shouting about an enormous number of modules used (1000 modules for a simple 404 page does not make sense). I would love to try to solve this one, but spotting the culprit is hard: the webpack configuration is humongous and spread across the next.js core package.

Maybe someone from the core team could shed some more light on this, as the issue affects both the 12.x and the 13.x upstream releases. Fortunately, since moving towards the `app` folder is the new norm for the project, this issue will get more attention.

I am facing the same issue in Next.js version 13.0.3.
Hi, we recently landed some changes (https://github.com/vercel/next.js/pull/37397) to `canary` that might help fix this issue without setting `esmExternals: false`. Please try it out by installing `next@canary` and let us know!

Reopened again!
Sorry about that, our stale bot accidentally closed it as it didn’t have the right labels.
We're currently working on a refactor of how the server works that isolates running application code from Next.js itself. This will allow us to further narrow down memory issues.
We recently upgraded to Next.js 12.1.5 and React 18.1.0 and encountered this issue when importing a component without direct default import syntax.

Importing a default function directly did NOT cause this memory issue.

However, importing this way:

via an `index.ts` file in `@components/input` containing:

DID cause this issue to occur.

I've since switched our import syntax for complex components to use default imports and have not had this issue occur.
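To illustrate the two patterns described above (all file paths, aliases, and component names here are hypothetical, not taken from the commenter's codebase):

```javascript
// components/input/index.ts is a barrel file re-exporting components:
//   export { default as TextInput } from './TextInput';
//   export { default as SelectInput } from './SelectInput';

// Importing through the barrel (the style reported to trigger the OOM):
//   import { TextInput } from '@components/input';

// Importing the default export straight from its file (reported fine):
import TextInput from '@components/input/TextInput';

export default function Page() {
  return <TextInput />;
}
```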
Same situation here. It happens randomly after the update to 13.5.
Thanks to @federico-moretti: setting `esmExternals: false` solves it for me. Even with the default ESM setting in Next 12, not all the ESM packages I use compile well; I still had to use the `next-transpile-modules` library.
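For reference, the `esmExternals: false` workaround that keeps coming up in this thread is set in `next.config.js`; a minimal sketch (merge it into your existing config rather than replacing it):

```javascript
// next.config.js
// Opting out of ESM externals is a workaround, not a fix: it changes how
// Next.js bundles ESM packages from node_modules.
/** @type {import('next').NextConfig} */
module.exports = {
  experimental: {
    esmExternals: false,
  },
};
```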
library to support them.FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
Getting this after upgrading to Next 13.5
I'm also facing this issue on Next.js v13.0.1 and React v18.2.0. I've faced it a bunch of times during local dev already.
Here’s the dump:
It seems I had imported it wrongly:

```jsx
import React from 'react';
import DashboardComponent from '../components/Dashboard/index';

const Dashboard = () => {
  return (
    <>
      <Dashboard />
    </>
  );
};

export default Dashboard;
```

instead of:

```jsx
import React from 'react';
import DashboardComponent from '../components/Dashboard/index';

const Dashboard = () => {
  return (
    <>
      <DashboardComponent />
    </>
  );
};

export default Dashboard;
```

So this fixed my issue and it's building well now.
the application now compiles if adding the
We also use external packages from our monorepo via `next-transpile-module`.
We had that same issue while deploying in Docker on RHEL 7.9, Next.js v12.0.4, Node.js v16.13.0.

This is part of our original package.json:

```json
"scripts": {
  "analyze": "cross-env ANALYZE=true next build",
  "dev": "next dev",
  "build": "set NODE_OPTIONS=--max-old-space-size=8192 && next build",
  "start": "next start"
}
```

We tried changing the max-old-space-size to 12192 but it didn't help; we raised it some more to 16192 and it started working again. This is what it looks like now:

```json
"scripts": {
  "analyze": "cross-env ANALYZE=true next build",
  "dev": "next dev",
  "build": "set NODE_OPTIONS=--max-old-space-size=16192 && next build",
  "start": "next start"
}
```
I got the same issue in Next.js 13.4.6. I didn’t face such issues in Next.js 12!
@balazsorban44 I shared a reproduction in July 2022 here: https://github.com/OscarBarrett/next-build-memory-investigation
@trakout shared a reproduction in December here: https://github.com/trakout/nextjs-many-pages
`esmExternals` [FAILED]. `export NODE_OPTIONS=--max_old_space_size=4096` before start helps [SUCCESS].

It's just on a new Ubuntu EC2 with 1 GB memory + 4 GB swap. On a Mac it works fine without any magic.
Also started getting the error after the 13.5 upgrade. Bumping the heap size for the build seemed to fix it.
@shuding I tried out `13.2.5-canary.12` and compared with and without the experimental `webpackBuildWorker` flag, but I sadly didn't see any noticeable differences.

Memory is usually stable for us when navigating between pages, performing a page reload, or hot reloading. The problem we're having usually shows up when you change some code, hot reloading kicks in, and then you do a hard page reload. At that point, memory goes up by about 400 MB (depending on the page) and never goes down again. And devs do this a lot, because hot reloading is generally something that does not get much trust 😕
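For others wanting to try the same flag: on releases that ship it, it's enabled under `experimental` in `next.config.js` (a sketch; the flag was experimental at the time of these comments):

```javascript
// next.config.js
// Run the webpack compilation for `next build` in a separate worker
// process so its memory can be reclaimed when the build finishes.
/** @type {import('next').NextConfig} */
module.exports = {
  experimental: {
    webpackBuildWorker: true,
  },
};
```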
Same issue in Next.js 13.0.5
I’m not seeing significant improvements between 12.1.6 and 12.2.2.
I did some profiling with 1000 generated components and 60 generated pages, no build cache, using node v16.13.1.
These pages all render the same thing (all the components inside a div), but may import the components differently.
The components are just divs with their component name as text.
12.1 - 1.85GB peak
12.2 - 2GB peak
Doubling the pages to 120:
12.1 - 3GB peak
12.2 - 2.8GB peak
Seems much higher than it should be? Or is this what should be expected?
Repo available here
Version 12.2 fixed our problem.
I can confirm that upgrading to Next.js 12.2 fixed the issue for us.
I'm also having this issue.

- `esmExternals: false` does not help at all.
- `--max_old_space_size=8192` allows the server to boot, but it runs very slowly, and after a few page loads all available RAM is consumed and the server crashes.

My project:
For those using `next-transpile-module`, you might want to follow #35150.

`next@13.4.7`
I know it sounds simple, but I solved this problem by removing the `.next` folder and running the build again. None of the methods mentioned here worked; the only thing that worked in my case was removing the folder.
Same as @EvilaMany. However, switching to Next 13 and the native `transpilePackages` option didn't fix the issue. The only improvement is that now the server automatically restarts before crashing.

As there are a lot of comments on this issue, I'm not sure if it was originally about `next build` or `next dev`. On our side, the issue is with `next dev`, where memory keeps going up when reloading pages and the server crashes after a while. After upgrading to Next 13.2, the server gets restarted automatically, but something is still wrong.

@TkDodo `webpackBuildWorker` will mostly improve memory usage for `next build`. I'll investigate the HMR-related case then!

Having the same issue on Next 12.3.1-canary.2. My workaround, just to make the build finish, was to disable type checking during the production build. It's a bad idea, but I don't have a choice at the moment. FYI, mine works fine in dev mode and only breaks during the production build. This is what I added to my `next.config.js` to make it build in production:

The bug seems to be caused by the type-checking step. I insist it's a bad idea; I'm only doing it this way because I know my code works OK during dev and no type errors are found there.
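The collapsed config isn't shown above, but the documented way to skip type checking during `next build` looks like this (use with the same caution the commenter describes):

```javascript
// next.config.js
// Dangerous: `next build` will succeed even if the project has type
// errors. Only do this if type checking runs elsewhere (CI, editor).
/** @type {import('next').NextConfig} */
module.exports = {
  typescript: {
    ignoreBuildErrors: true,
  },
};
```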
+1 for @ryne2010 theory with named exports.
We were facing exactly this issue: our production Docker pods (on EKS, 8 GB mem) restarted over 60 times in a couple of days, and setting `esmExternals: false` didn't help. The codebase uses a mix of named exports and default exports, which is obviously for no good reason ┐(゚~゚)┌

After updating all default exports to named exports, this problem disappeared.
Next.js version is 12.0.4.
I have the same issue with 12.1
And setting:
Does not change the issue
I’ll try to do a reproduction repo
I also use `next-transpile-module`.
@timneutkens I have the same problem after upgrading from v11 to v12. Used memory on Mac goes up to 2.5 GB before it bombs; previously it was consuming < 1 GB on v11. Happy to provide a full reproduction to you via private GitHub. I'm using `next-transpile-modules` (required for `amcharts4`).

Please, does anyone here know the solution to the above problem? Kindly help me out, as I have been stuck on this for four days.
Hello @dbrxnds, I sadly don't have any exact values, but I can give a rough picture:

At first, 120 circular dependencies were present. Around 30 of them spanned more than 20 modules.

The good thing was that pretty much all of them were caused by one big, bad barrel file. Simply fully specifying the exact module (instead of only providing the barrel file path) in about 5 places already eliminated around 20 of those really long circular dependencies.

Afterwards I was already able to start the dev server again. Since I had already written the script above to easily find and track these bad circulars, I continued to eliminate more circular dependencies, starting with the longest ones.

I have decreased them to around 30 circulars now. There are still some circular dependencies present in the project, but they only span 2 or 3 modules each; not ideal, but also not a dealbreaker.

The dev server is now very snappy again, a huge increase in developer experience!

Another benefit of this 2-hour excursion was that the average page bundle size decreased by 50%.
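The commenter's script isn't shown, but the core idea (walking the module graph depth-first and recording any path that revisits a module) can be sketched in a few lines. The dependency map below is hand-written for illustration; in a real project you would generate it from import statements, e.g. with a tool like madge:

```javascript
// Find circular import chains in a module dependency map via DFS.
// Minimal sketch: it reports cycles reachable from each entry, not
// necessarily every distinct cycle in the graph.
function findCycles(graph) {
  const cycles = [];
  const visiting = new Set(); // modules on the current DFS path
  const done = new Set();     // modules fully explored

  function visit(mod, path) {
    if (done.has(mod)) return;
    if (visiting.has(mod)) {
      // Cycle found: slice the path from the first repeat of `mod`.
      cycles.push(path.slice(path.indexOf(mod)).concat(mod));
      return;
    }
    visiting.add(mod);
    for (const dep of graph[mod] || []) visit(dep, path.concat(mod));
    visiting.delete(mod);
    done.add(mod);
  }

  for (const mod of Object.keys(graph)) visit(mod, []);
  return cycles;
}

// A barrel file whose re-exported modules import it back is a classic
// source of long cycles:
console.log(findCycles({
  index: ['button', 'link'],
  button: ['index'], // button imports from the barrel again
  link: [],
}));
// → [ [ 'index', 'button', 'index' ] ]
```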
I'm also facing the same problems with `"next": "^12.3.1"` and `"react": "18.2.0"`. Any workaround? Thanks.
A developer in our team experienced this issue recently. Everything worked OK in Next.js `13.2.4`, but `yarn dev` started crashing after an upgrade to `13.3.0` / `13.3.1`.

Crash log

We use mui.com, so we have a lot of named imports from `@mui/material` and `@mui/icons-material`. These two entry points are quite bulky, so they may be challenging to parse without bloating memory consumption. Thankfully, we managed to find a workaround like this: https://github.com/vercel/next.js/discussions/37614#discussioncomment-3036716

Hope this helps investigate the behavior of `swc`.
Still an issue with latest (13.3.1)
Not yet. It seems it's not related to `next-transpile-module`, since I use `experimental.transpilePackages` and still see the issue persist. The dev compile is quite slow too. When I build my app, Next.js shows this weird result:

@balazsorban44 I've been following this conversation since the beginning. We had serious deployment issues on our staging server, which has pretty limited memory (an Amazon EC2 `t2.small` instance, 2 GB RAM), shared with our Java back-end as well. Our start-up was failing, even with the solutions mentioned above.
I've tried the canary version `12.1.7-canary.4`, and it seems our issues have been eliminated. More than ten deployments have succeeded since we changed to this. Thank you!

@johnson-lau @balazsorban44 The problem is solved for me now after removing `esmExternals: false` and installing `next@canary`.

This seems to have fixed the issue in our environment. Any estimate on when this might roll into a stable version?
The only overlapping theme seems to be “unifying” the imports one way.
We encountered this issue and our app entered a crash-and-restart loop just after we deleted our `.babelrc` and migrated the app to SWC (with Next.js 12.1.4, `compiler.styledComponents`, and without `next-transpile-modules`, in `node:16-alpine`). After reverting the change, everything went back to normal.

Disabling `esmExternals` fixes the issue too:

1. `npm i`
2. add `"type": "module"` to `node_modules/@amcharts/amcharts4/package.json` and `node_modules/@amcharts/amcharts4-geodata/package.json`
3. `npm run build`
@LukasBombach Just to be clear, and to help this thread: to suppress this error, try

`export NODE_OPTIONS="--max_old_space_size=4096 --trace-warnings" && <your-node-command>`

where 4096 is how much RAM (in MB) Node can use for the old space (the default depends on your Node version and available memory).

Error happening after updating to v13.5.2.
I have the same problem. None of the solutions helped me. I deleted the `@mui/icons-material` library, also deleted the `.next` folder and `node_modules`, then ran `npm install` again. The error has not gone away.
next 13.4.19, typescript 5.1.6
Maybe; I have only encountered this issue with `@mui/icons-material` so far.
Personally, I think the problem is caused only by `@mui/icons-material`, right?
I started to face this issue recently and I managed to pinpoint it to the Edge runtime. See #51298
We think this issue manifested after adding the Carbon multi-select but not adding it to `next-transpile-modules`. We added it in addition to

The difference is astounding:

before

after
I started having this issue on an M1 MacBook Pro with 16GB ram running:
Full error output:
I also have the same issue: `dev` keeps taking more memory and then crashes and auto-restarts. `build` also takes too much memory and causes the system to crash.

BTW, using the trace method as suggested and pruning the imports solved this issue for me. I really cannot thank everyone enough, but I'll try: THANK YOU!
Was struggling with this all day; I hope that sharing my experience will help others. After upgrading several packages, I started to get this error during build. In my case, `yarn build` would take around 5 minutes (instead of the usual 1.5-2) to build the project and then fail with that (or a similar) message. I ended up on this thread, set the `--max-old-space-size=16192` option as suggested in a previous comment, and sure enough, after 5 minutes I got a reference error. It was `next-i18next` in my case: the `Trans` component had a redundant `t` attribute, which created a circular reference (an infinite type definition tree, or something to that effect, according to `i18next`). Removing the attribute solved the problem. Now it works like before, without the need for the `--max-old-space-size` option. Good luck!

Hi @ValentinH, my case is a bit different: I don't have any API routes, just a health-check API. The main reason for the failure is that we have a ton of UI modules within a monorepo. I guess the memory leak is caused by the watchers that rebuild the app in development mode. As a temporary workaround, you can tweak the Next.js compiler's module watcher. There is some movement from the maintainers, so keep an eye on the canary releases.
I think it is highly related to this problem, but #43859 didn't solve the issue. However, the Vercel team is sending a new fix (#43958) to canary. Talks go on here.
GitHub Action failed with "Reached heap limit Allocation failed" when building the server. I tried adding `--max-old-space-size` to the Dockerfile, but it didn't solve the issue. I tried `esmExternals: false`; it didn't work. What worked for me was removing Sentry.
I created the Next.js application with the command `yarn create next-app --typescript`. After that, when I ran `yarn build`, I got this error:

- Node: `v12.22.12`
- yarn: `1.18.0`
- next: `12.2.0`
- OS: macOS Monterey
- Chip: Apple M1 Max
- Memory: 32 GB

After that I changed my Node version to `v14.19.3` and it worked!

Setting `--max_old_space_size=8192` seems to fix the issue for me.
I fixed mine. I noticed that the name of my imported component was the same as the name of the function I imported it into. Also, since I'm on Kali Linux 2022, I ran `export NODE_OPTIONS=--max_old_space_size=4096` and was able to resolve it.
In our case, we found out that the problem was caused by Swiper.js (https://swiperjs.com/) that started using ES modules from version 7. When we downgraded Swiper to v6, the problem disappeared.
We also use `index.ts` barrel files in our project for re-exporting all our components. Curiously, we found out that if we delete the re-exports for the Swiper component and import it directly from its source file (e.g. `import { Carousel } from "src/components/Carousel/Carousel";`), the problem no longer exists and we can keep using Swiper v8. So this initially fixed the problem for us.

Now we have tried the `next@canary` version, and it seems to fix the problem completely even if we import Swiper via the barrel file. So everything is OK now.
@balazsorban44 this is without adding the `esmExternals` option or changing our code in any particular way in that regard.

@rahulgi and @patroza
In my case, updating all default exports to named exports solved the issue. So I think it works either way, default or named, as long as there's only one kind and not mixed exports in the codebase. "Unifying" is the right term.
We also experience this issue on `v12`; `esmExternals: false` didn't help.

It might be caused by some cyclic import of a dependency during SSG, based on info from other related issues ("getStaticProps with { fallback: true } is very slow when SPA routing" and "Dev mode on Next.js v12 breaks with combination of features"), which we also experience. It would be great if Next could detect those kinds of issues, because they are really hard to debug without access to the implementation. We did not experience this kind of issue on `v10`, so it might also be related to SWC.

I have created a repository, nextjs-dynamic-amcharts, with the smallest reproduction of the problem…
@balazsorban44 @timneutkens take a look.
@solaxds https://github.com/vercel/next.js/issues/32314#issuecomment-995223414
We tackled that: the dev team was mounting the Node.js modules when they used docker-compose or our K8s Tilt dev environments. We had to exclude those from all of our Docker builds and only mount source code, since the Next.js 12 SWC compiler (is that true, @programbo?), being written in Rust, is architecture-dependent when built. Either way, we never faced that on 11. Make sure you have `node_modules/` in your `.dockerignore` or `**node_modules/` in your `.tiltignore`.

@pmbanugo not at the moment.
@balazsorban44 The error is persistent and consistent. Having said that, I had a couple of questions. We were using the Babel loader. The procedure we followed to upgrade was

Now, do we need to install `@swc/core` separately? Because I saw Next install SWC.
Seems to be a duplicate of #31962