next.js: Next.js development high memory usage

Before posting a comment on this issue please read this entire post.

Previous work

Over the past few weeks we’ve been investigating and optimizing various memory usage issues, specifically around production memory usage. In the process we found one memory leak in Node.js itself when using fetch() in Node.js versions before 18.17.0 (you’ll want to use 18.17.1 though, for its security patches).

Most of the reports related to memory usage turned out to be “it’s higher than the previous version” rather than a memory leak. This was expected, because running App Router and Pages Router at the same time with different React versions required separate processes. This has been addressed by reducing the number of processes to two: one for routing and App Router rendering, and one for Pages Router rendering. So far we haven’t received new reports since the latest release.

In some issues there were reports related to Image Optimization in production; however, no reproduction was provided, so they could not be investigated adequately. If you have a reproduction for that, please refer to this issue: #54482

New

With the memory usage in production resolved, we’ve started investigating reports of development memory usage spikes. Unfortunately these reports suffer from the same problem as the earlier production memory usage reports: they’re full of comments saying “same issue” or posting screenshots of monitoring tools as if to say “look, same issue”.

Unfortunately, as you can imagine, these replies are not enough to investigate or narrow down what causes the memory usage. For example, in multiple cases where we did get a reproduction and could investigate, the reason for the high memory usage was:

  • A bug in the application code, causing infinite looping components
  • Accidental import of ~11,000 modules through icon libraries. Yes, many icon libraries ship massive amounts of re-exports, and all of those have to be bundled before they can be tree-shaken. We’ve been working on an automated way to split these up in #54572, which should help reduce the size a bit (and improve compilation speed too).
  • Webpack customization, e.g. adding external libraries that change webpack settings, for example to write additional sourcemaps
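For the icon-library case above, Next.js already has a built-in `modularizeImports` option that rewrites member imports into per-file imports, so only the icons actually used get compiled. A minimal sketch for `next.config.js`; the `lodash` mapping is the commonly documented example, so adapt the transform to your icon package’s actual file layout before relying on it:

```javascript
// next.config.js — sketch using Next.js's modularizeImports option.
// The lodash mapping is illustrative; check your package's file layout.
module.exports = {
  modularizeImports: {
    lodash: {
      // Rewrites `import { merge } from 'lodash'` to `import merge from 'lodash/merge'`
      transform: 'lodash/{{member}}',
    },
  },
};
```

With a mapping like this in place, a barrel import no longer pulls the whole package into the dev bundle.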

So far I’ve been able to make one small change to webpack’s memory caching to make it garbage collect a bit more aggressively in #54397. I’m not expecting that change to have a big impact on the reported issues though.

We’d like to investigate these reports further, but we’re unable to narrow them down when there is no code we can run to collect heap snapshots and profiles from; hence this issue. If you are able to, please provide runnable code that demonstrates what you’re experiencing.

Comments that don’t include runnable code will be automatically hidden in order to keep this issue productive. This includes comments that contain only a screenshot, and applications that can’t be run.

I’m going to combine the other reports into this issue as separate comments.

I’ve made sure that we have 2-3 engineers on our team available to investigate when we get runnable reproductions.

Thanks in advance!

NEXT-1569

About this issue

  • Original URL
  • State: open
  • Created 10 months ago
  • Reactions: 95
  • Comments: 76 (13 by maintainers)

Most upvoted comments

I’ve posted a reproduction here: https://github.com/limeburst/vercel-nextjs-54708

Start the development server, navigate from

  • http://localhost:3000/1
  • http://localhost:3000/2
  • http://localhost:3000/20

And watch the memory usage grow, until the server restarts.

I have pinned down the issue to Next’s global fetch implementation. I’ve created a repo that is very simple, re-creates the memory leak, and uses the same logic but bypasses the leak by using Node’s http module. There seems to be an issue surrounding PerformanceRequestTimers and fetchMetrics. More background can be found in issue #64212.

@noetix I disagree that this had anything to do with bullying. All I did was explain that my posts are continuously being ignored, including in this issue, and then explain what to do, which is to provide a runnable application. There is nothing we can do without a runnable example; that was already shared in the initial issue, and I even made it bold to highlight it further.

Happy to explain it again: the reason we can’t do anything without runnable code is that, in order to narrow down where the memory usage comes from, we need to change the Next.js core code inside the application, for example to disable client components compilation. There is no way to do that based on screenshots / messages / whatever other information you can provide, as it would require countless hours of your time and our time (think two weeks full time, at least) to investigate.

The emoji reactions not being shown on posts marked as off-topic is a bug in GitHub. As mentioned in the initial issue, any posts that don’t include a reproduction will be automatically hidden.

Since you didn’t like the earlier explanation I’ll just remove it; I don’t feel strongly about keeping the comment. It definitely wasn’t bullying, you were reading that into it. Bullying would be the threats I’ve received recently from anonymous developers on Twitter saying they’ll come visit my house soon…

@weyert we haven’t made changes to development memory usage besides the PR linked in the issue, so really all I need is a reproduction. Luckily @AhmedChabayta posted one; hopefully that is enough, fingers crossed.

@Thinkscape please open a separate issue, that bug would be separate from this issue 👍

I’ve had this exact issue for weeks. I keep reading there is no reproduction on this issue, but for me it’s pretty much the getting started example using the App Router: just create a dummy page that renders a div, load the page a single time, and the memory almost doubles, never releasing anything back. If you give it a few more spins, like 50-100 requests, it goes 1.5x-2x higher again. At some point it goes down by a small amount, then it remains stable with a slight memory increase just by idling and serving no requests at all, and it keeps going up until it is eventually OOM killed.

Ran it a bunch of times through the memory snapshot inspection; even that tool always shows a 50-100 KB increase at all times.

When everyone is reporting this exact same issue, it’s a bit irresponsible, to say the least, to say there is no memory leak and just close the issue. I must say, posting screenshots with memory charts is not the most useful way to repro this, but when 80% of the people commenting experience the same issue, it saddens me to see it denied or ignored. Again, I don’t use anything fancy; to decouple from any potentially leaky dependencies, I just throw some requests at the getting started example and the same memory-growth pattern repeats, whether in development or production mode.

At this point I am seriously considering ditching Next.js for my project and picking some other SSG and move the backend on something else.

For those still suffering from “out of memory” issues caused by react-icons, I have been searching for a solution and here is the best I’ve got!

react-icons ships a huge number of icons exported from one file, like react-icons/md (even if you use just one icon from it). To solve this, react-icons has another package with a separate file for each icon, called @react-icons/all-files. Unfortunately they stopped releasing it on npm, because it contains a huge number of files and npm doesn’t accept that. Instead, you can install it directly from GitHub with npm install https://github.com/react-icons/react-icons/releases/download/v4.11.0/react-icons-all-files-4.11.0.tgz (or take the latest version from the releases page: https://github.com/react-icons/react-icons/releases).

Now there’s one problem: for developer experience, importing multiple icons from one path (one statement) with destructuring is nicer, but we can’t do that with @react-icons/all-files. To solve this:

We can use modularizeImports, which transforms imports from one shape to another. Here is what you should put into your next.config.js:

modularizeImports: {
  "react-icons/(\\w*)/?": {
    transform: "@react-icons/all-files/{{ matches.[1] }}/{{ member }}",
    skipDefaultConversion: true,
  },
},

Now you keep the normal react-icons imports in your project, and modularizeImports does all the magic.

And yeah, that’s it! I hope this helps someone.

After doing some profiling, it seems like server modules are leaked across HMR in development, which in my company’s case leads to rapid growth of memory due to a large backend (see https://github.com/vercel/next.js/issues/62217, auto-closed unfortunately). Curious if anyone else can repro / confirm the same thing, or confirm theirs don’t leak.

I am facing this even with a clean project. I just created a new project and a couple of libs (I am using Nx). See the recording:

https://github.com/vercel/next.js/assets/54899662/d6d9833b-1937-4b91-98e4-13228f5c46f0

I guess on a clean project with one page it doesn’t grow fast, but on a bigger project it grows really fast. I am able to reproduce it simply by refreshing a page.

Reproducible example https://github.com/tar-aldev/next-nx-cypress Just run it with npm run nx serve comp-test.

Using

node v18.18.2
"next": "14.0.4"

Using node v20 doesn’t change anything.

This might be dumb, but I think I managed to solve the issue on my machine. After spending days suffering with this high memory usage issue, I built a new Next.js app and incrementally added all the packages from my existing application.

The main problem was the way I was importing icons. I am using react-icons and lucide-react with shadcn/ui.

Problematic Way

export {
  BsFillDatabaseFill as DatabaseSolidIcon,
  BsDatabase as DatabaseOutlineIcon,
  BsFillCalendarCheckFill as CalendarCheckSolidIcon,
  BsCalendarCheck as CalendarCheckOutlineIcon,
  BsPeopleFill as PeopleSolidIcon,
  BsPeople as PeopleOutlineIcon,
  BsPersonCircle as PersonSolidIcon,
  BsArrowRight as ArrowRightIcon,
  BsArrowRightShort as ArrowRightShortIcon,
  BsTrash3 as TrashIcon,
} from "react-icons/bs";

export {
  HiOutlineUserCircle as UserCircleOutlineIcon,
  HiUserCircle as UserCircleSolidIcon,
} from "react-icons/hi";

export { IoPersonCircleSharp as PersonOutlineIcon } from "react-icons/io5";

export { PiSignOut as SignOutIcon } from "react-icons/pi";

export {
  BiChevronDown as ChevronDownIcon,
  BiChevronRight as ChevronRightIcon,
} from "react-icons/bi";

export { AiOutlineUserAdd as UserAddOutlineIcon } from "react-icons/ai";

export { BsXLg as XIcon } from "react-icons/bs";

My Fix

import {
  BsFillDatabaseFill,
  BsDatabase,
  BsFillCalendarCheckFill,
  BsCalendarCheck,
  BsPeopleFill,
  BsPeople,
  BsPersonCircle,
  BsArrowRight,
  BsArrowRightShort,
  BsTrash3,
} from "react-icons/bs";

import { HiOutlineUserCircle, HiUserCircle } from "react-icons/hi";

import { IoPersonCircleSharp } from "react-icons/io5";

import { PiSignOut } from "react-icons/pi";

import { BiChevronDown, BiChevronRight } from "react-icons/bi";

import { AiOutlineUserAdd } from "react-icons/ai";

import { BsXLg } from "react-icons/bs";

export {
  BsFillDatabaseFill as DatabaseSolidIcon,
  BsDatabase as DatabaseOutlineIcon,
  BsFillCalendarCheckFill as CalendarCheckSolidIcon,
  BsCalendarCheck as CalendarCheckOutlineIcon,
  BsPeopleFill as PeopleSolidIcon,
  BsPeople as PeopleOutlineIcon,
  BsPersonCircle as PersonSolidIcon,
  BsArrowRight as ArrowRightIcon,
  BsArrowRightShort as ArrowRightShortIcon,
  BsTrash3 as TrashIcon,
  HiOutlineUserCircle as UserCircleOutlineIcon,
  HiUserCircle as UserCircleSolidIcon,
  IoPersonCircleSharp as PersonOutlineIcon,
  PiSignOut as SignOutIcon,
  AiOutlineUserAdd as UserAddOutlineIcon,
  BiChevronDown as ChevronDownIcon,
  BiChevronRight as ChevronRightIcon,
  BsXLg as XIcon,
}

After making this change, the number of modules compiled in dev mode dropped by about 50%! Everything seems to be working a lot better now and I hope things stay like this.

Hopefully this helps someone! 🚀

TLDR: The fix for us was converting our usages of next/image to using plain img tags.

We’ve been experiencing many similar issues. After deploying, server memory drops way down, grows very quickly at first, then more slowly, but it keeps growing until we get an out-of-memory crash. We’ve tried a lot of the potential fixes here (upgrading, downgrading, changing Node versions…). (screenshot)

We have thousands of pages that next/image was used on. It seems the server keeps a cache of these images, or there is some other related memory leak; either way, converting away from it fixed the issue.

Our server memory after deploying has gone from this: (screenshots)

To this: (screenshot)

For anyone else looking to convert until this is fixed, you can inspect next/image as currently implemented and copy over many of its styles to achieve the same look. Obviously a rudimentary conversion, since you don’t get the Next magic included, but better than a server crash. (screenshot)
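A rough sketch of what such a rudimentary conversion might look like; `plainImgProps` is a hypothetical helper, and the attribute/style choices are common defaults rather than the exact output of next/image, so inspect your own rendered pages to copy the real styles:

```javascript
// Hypothetical sketch of props to spread onto a plain <img> when moving
// off next/image. Not the actual next/image output; verify against your pages.
function plainImgProps(src, width, height) {
  return {
    src,
    width,
    height,
    loading: 'lazy',   // keep lazy loading, which next/image enables by default
    decoding: 'async', // decode off the main thread where supported
    style: { maxWidth: '100%', height: 'auto' }, // fluid sizing
  };
}

// Usage sketch (JSX): <img {...plainImgProps('/hero.png', 800, 600)} alt="Hero" />
```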

@timneutkens If any other info would help, let me know, and we’ll try to provide it

@feedthejim can confirm next@canary fixes the out of memory crash when using googleapis. thanks!

In the comment for the fix you mention “In development, we want to split code that comes from ‘node_modules’ into their own chunks. This is because in development, we often need to reload the user bundle due to changes in the code…” In our case we’re seeing the infinite memory crash occurring only in production, under significant load, and especially related to the undici fetch failures. Can a fix related to compilation be related to that? Does Next re-import the dependency tree on every page render?

My fix only applies in development so I wouldn’t expect changes there.

You can try the latest by installing next@canary btw.

Are there plans to do a Next release? Would love to try out the memory fixes in production

I’m getting “server out of memory” after a while, just by letting the server run and writing/saving code that calls the following functions a few times.

import { google } from "googleapis";

export async function authSheets() {
  //Function for authentication object
  const auth = new google.auth.GoogleAuth({
    keyFile: "./auth/auth-sa-sptk.json",
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });

  //Create client instance for auth
  const authClient = await auth.getClient();

  //Instance of the Sheets API
  const sheets = google.sheets({ version: "v4", auth: authClient });

  return {
    auth,
    authClient,
    sheets,
  };
}

import { authSheets } from "./authSheets";

export async function clearSheetContents(sheetName) {
  console.log("sheet =", sheetName);
  const SHEET_ID = "123";
  const sheetId = SHEET_ID;
  const { sheets } = await authSheets();

  try {
    const result = await sheets.spreadsheets.values.clear({
      spreadsheetId: sheetId,
      range: sheetName,
    });
    console.log("result.data =", result.data);
  } catch (err) {
    // TODO (developer) - Handle exception
    throw err;
  }
}

import { authSheets } from "./authSheets";

// https://developers.google.com/sheets/api/guides/values
export async function setSheetValues(sheetName, input) {
  const SHEET_ID = "123";
  const sheetId = SHEET_ID;
  const values = [input];
  const resource = { values };
  // Updates require a valid ValueInputOption parameter
  const valueInputOption = "RAW"; // The input is not parsed and is inserted as a string.
  const { sheets } = await authSheets();

  try {
    const result = await sheets.spreadsheets.values.append({
      spreadsheetId: sheetId,
      range: sheetName,
      valueInputOption: valueInputOption,
      resource,
    });
    console.log("result.data =", result.data);
  } catch (err) {
    // TODO (developer) - Handle exception
    throw err;
  }
}

{
  "name": "test",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "autoprefixer": "10.4.15",
    "axios": "^1.5.0",
    "encoding": "^0.1.13",
    "eslint": "8.48.0",
    "eslint-config-next": "13.4.19",
    "googleapis": "^126.0.1",
    "next": "13.4.19",
    "postcss": "8.4.29",
    "react": "18.2.0",
    "react-dom": "18.2.0",
    "tailwindcss": "3.3.3"
  }
}

Node version: v18.17.1
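One possible mitigation worth trying against the code above, assuming (not confirmed) that rebuilding the googleapis auth client on every call is what accumulates memory, is to construct the client once and reuse it. `once` is a hypothetical generic helper, sketched here:

```javascript
// Sketch: cache a factory's result so the expensive client is built only once.
// `once` is a hypothetical helper, not part of googleapis or Next.js.
function once(factory) {
  let cached;
  let called = false;
  return () => {
    if (!called) {
      cached = factory();
      called = true;
    }
    return cached;
  };
}

// Usage sketch: const getSheets = once(() => authSheets());
// then `await getSheets()` everywhere instead of calling authSheets() directly.
```

Note that caching a promise this way also deduplicates concurrent calls, since every caller awaits the same in-flight result.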

In our case the problem manifests mostly when the site uses dynamic rendering. This already happens automatically (for every page) when you have any sort of login box in your layout that sets headers / cookies. Sites without this, which are thus statically pre-rendered, don’t suffer from this big memory consumption, which is sort of understandable.

However, the real problem is that the memory is not significantly freed over time. Even after 24 hours of doing nothing, the next day the memory would still sit at the same level. I’m not sure if this is directly related to Next.js or to Node.js itself; we are not experts on this subject.

Currently we are waiting for partial prerendering (https://nextjs.org/learn/dashboard-app/partial-prerendering). It looks promising and would potentially result in less consumption and therefore fewer server restarts.

If anyone has a solution for a dynamically rendered site (Next 14) that prevents the memory filling up and forcing a reset, please tell me.

It’s interesting that the downvote emojis on the bullying post have been removed.

All the dude did was ask how to collect data in a way that would be helpful. If collecting our own data is out of the question then that should have been the answer.

Poor form for someone representing Vercel.

This is not only an issue in development, but also in production…

We’re using the Pages Router with middleware, and it seems like there is a memory leak when getting cookies inside middleware.

(screenshot)

Same issue, and I’m also using Prisma like @lpbonomi. Next.js 14.0.4. The server starts, but after a minute or two I get the FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory error. It happens in both dev and prod, and makes Next.js completely unusable.

The out of memory problem has been an issue since the beginning of Next 13 (or maybe even earlier). You can do all the optimizations presented here, and they will help, but eventually the memory fills up regardless and your pod will ultimately crash, especially if you have a lot of dynamic parts. The solution we implemented (in prod) is to run at least 2 nodes and add a memory watch on the nodes that restarts a pod when a threshold is reached. (I suspect Vercel hosting has something similar.) Without this it would have been a complete shit show, since we have multiple crashes a day.

This does not fix your local development issue. The Next.js team has made some progress on this (since 13.4), and your server should not crash as soon (unless you’re stress testing it).
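The memory-watch approach described above can be sketched as a simple check against the process’s resident set size; the threshold here is illustrative, not a recommendation, and should be tuned to the pod’s memory limit:

```javascript
// Sketch of a memory watchdog: compare RSS against a limit and exit so the
// orchestrator restarts the pod. The 1.5 GB threshold is illustrative only.
const LIMIT_BYTES = 1.5 * 1024 * 1024 * 1024;

function overLimit(limit = LIMIT_BYTES) {
  // rss is the resident set size: total memory held by the process
  return process.memoryUsage().rss > limit;
}

// In the server entrypoint, something like:
// setInterval(() => { if (overLimit()) process.exit(1); }, 30_000);
```

Exiting with a non-zero code relies on the orchestrator (Kubernetes, a process manager, etc.) being configured to restart the process, which is the setup the comment describes.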

I’m having some issues when trying to use a server action with prisma.

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

'use server'

import { prisma } from '@/misc/prismaClientSingleton'

export async function subscribeNewsletter() {
  await prisma.user.findUnique({
    where: {
      id: 1,
    },
  })
}

// @/misc/prismaClientSingleton (the module imported above)
import { PrismaClient } from 'database'

const prismaClientSingleton = () => {
  return new PrismaClient()
}

type PrismaClientSingleton = ReturnType<typeof prismaClientSingleton>

const globalForPrisma = globalThis as unknown as {
  prisma: PrismaClientSingleton | undefined
}

export const prisma = globalForPrisma.prisma ?? prismaClientSingleton()

if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma

I’ve tried to create a minimal reproduction repo but I couldn’t do it. I’m glad to help in any way.

Unfortunately, no. And I got no response in either the issue I created or in Discord. To make it even worse, I also reproduced it with the App Router.

On Fri, Nov 10, 2023, 7:07 AM William Scotten @.***> wrote:

Hello,

I’m experiencing an issue that might be related to the problem discussed in this Git issue. This issue is currently blocking my team from migrating our project to use the edge runtime.

The problem I’m facing involves a significant difference in RAM usage during compilation when using the experimental-edge runtime compared to the nodejs runtime. I encountered this issue when I created a fresh project using create-next-app with the pages router and then multiplied the default Next.js pages with different names and placed them in different directories, resulting in approximately 115 pages, which is nearly my project’s size.

On my laptop, when I build the project with the experimental-edge runtime, the RAM usage spikes from 10 GB to 15.8 GB (100%) during compilation, and the process takes approximately 1 minute to complete. However, when I use the nodejs runtime, the RAM usage only reaches around 11.4 GB at its peak, and the compilation finishes in about 10 seconds.

You can reproduce this issue by checking out the main branch, which uses the nodejs runtime, and the runtime-experimental-edge branch, which uses the edge runtime. I’ve also included some screenshots in the public folder of my memory monitor in the task manager for confirmation, in case they are relevant to troubleshooting this problem.

Node version: 18.18.2

Repository for reproduction: https://github.com/gvatsov/high-ram-example

[image: image] https://user-images.githubusercontent.com/1147010/276770768-b8319866-5f94-4aa1-a741-e283069a9bc0.png

Hey did you ever find an issue to this? This has been bugging me for a couple weeks now cause I have to restart every build to get it to work.


I’ve posted a reproduction here: https://github.com/limeburst/vercel-nextjs-54708

Start the development server, navigate from

  • http://localhost:3000/1
  • http://localhost:3000/2
  • http://localhost:3000/20

And watch the memory usage grow, until the server restarts.

I confirm. Navigate from http://localhost:3000/1 to http://localhost:3000/20.

- wait compiling /20/page (client and server)...
- warn ./node_modules/node-fetch/lib/index.js
Module not found: Can't resolve 'encoding' in 'D:\dev\next\vercel-nextjs-54708\node_modules\node-fetch\lib'        

Import trace for requested module:
./node_modules/node-fetch/lib/index.js
./node_modules/gaxios/build/src/gaxios.js
./node_modules/gaxios/build/src/index.js
./node_modules/googleapis-common/build/src/index.js
./node_modules/googleapis/build/src/index.js
./lib/util.ts
./app/17/page.tsx
- warn The server is running out of memory, restarting to free up memory.
TypeError: fetch failed
    at Object.fetch (node:internal/deps/undici/undici:11576:11)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async invokeRequest (D:\dev\next\vercel-nextjs-54708\node_modules\next\dist\server\lib\server-ipc\invoke-request.js:17:12)
    at async invokeRender (D:\dev\next\vercel-nextjs-54708\node_modules\next\dist\server\lib\router-server.js:254:29)
    at async handleRequest (D:\dev\next\vercel-nextjs-54708\node_modules\next\dist\server\lib\router-server.js:447:24)
    at async requestHandler (D:\dev\next\vercel-nextjs-54708\node_modules\next\dist\server\lib\router-server.js:464:13)
    at async Server.<anonymous> (D:\dev\next\vercel-nextjs-54708\node_modules\next\dist\server\lib\start-server.js:117:13) {
  cause: Error: connect ECONNREFUSED ::1:53698
      at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1595:16) {
    errno: -4078,
    code: 'ECONNREFUSED',
    syscall: 'connect',
    address: '::1',
    port: 53698
  }
}

I was building an app using the app directory, and I kept getting “server out of memory” in the console, plus some sort of input error in the browser; the server crashes and that’s that, no errors beyond the ones mentioned. Furthermore, the app was slow to load. I created another app, using pages this time, kept everything exactly the same, just converted it to the proper pages format, and all the issues are gone: no more “server out of memory”, no slow loading times. Next.js is, and most probably will remain, my personal favorite React framework, but this shit is ridiculous. Maybe we could at least get an explanation of why this is happening? I spent hours trying to find out what I did wrong, only to realize the problem isn’t in my code.

I updated Next.js to the latest version, updated Node, used different versions of Node via nvm, and removed my code block by block to see if the issue was coming from some broken piece of code: same thing. The only solution was to move on from the app directory.

I have the same issue. When running npm run dev it freezes my browser; I have to shut down the laptop or close VS Code and the browser too. Sometimes, if I want to test in the browser, I have to kill one application (i.e. VS Code), view my changes in the browser, and when done kill the Next server and continue coding. This is so frustrating. I am using Node v18.17.1, "next": "13.4.3", "typescript": "5.0.4".

For pages which depend on auth state but are irrelevant for SEO purposes, just go for the good old client-side render: basically, pre-render the page with all auth-dependent components showing a placeholder, then resolve the auth state upon hydration on the client.

@AdamZajler I’m in development using Next 14.1.0, and I just got the same issue. Sometimes my computer freezes when I run npm run dev. FYI, I use 32 GB of RAM and a 16-thread processor. This is my dependency list:

"dependencies": {
    "@ant-design/icons": "^5.2.6",
    "@ant-design/nextjs-registry": "^1.0.0",
    "@apollo/client": "^3.8.10",
    "@apollo/experimental-nextjs-app-support": "^0.7.0",
    "@microsoft/signalr": "^8.0.0",
    "@tailwindcss/aspect-ratio": "^0.4.2",
    "@tailwindcss/container-queries": "^0.1.1",
    "@tailwindcss/typography": "^0.5.10",
    "accounting": "^0.4.1",
    "antd": "^5.13.2",
    "date-fns": "^3.3.1",
    "echarts": "^5.4.3",
    "next": "14.1.0",
    "numbro": "^2.4.0",
    "ramda": "^0.29.1",
    "react": "^18",
    "react-dom": "^18",
    "winston": "^3.11.0",
    "zod": "^3.22.4"
  },
  "devDependencies": {
    "@graphql-codegen/cli": "^5.0.0",
    "@graphql-codegen/client-preset": "^4.1.0",
    "@graphql-codegen/introspection": "^4.0.0",
    "@storybook/addon-essentials": "^7.6.10",
    "@storybook/addon-interactions": "^7.6.10",
    "@storybook/addon-links": "^7.6.10",
    "@storybook/addon-onboarding": "^1.0.11",
    "@storybook/blocks": "^7.6.10",
    "@storybook/nextjs": "^7.6.10",
    "@storybook/react": "^7.6.10",
    "@storybook/test": "^7.6.10",
    "@types/accounting": "^0.4.5",
    "@types/node": "^20",
    "@types/ramda": "^0.29.10",
    "@types/react": "^18",
    "@types/react-dom": "^18",
    "autoprefixer": "^10.0.1",
    "eslint": "^8",
    "eslint-config-next": "14.1.0",
    "eslint-plugin-storybook": "^0.6.15",
    "graphql": "^16.8.1",
    "postcss": "^8",
    "storybook": "^7.6.10",
    "tailwindcss": "^3.3.0",
    "typescript": "^5"
  }

We have detected that the solution from here works, but we can’t give up using next/image. We also discovered that after disabling the CDN (assetPrefix), everything is fine: (screenshot)

Next.js version: 13.4.19

What a rollercoaster. I also use the lucide-react package and thought that was the culprit. Replacing it with SVG icons hosted in my codebase fixed the build on my local machine, but it was still crashing with a heap out of memory error when deploying to prod.

I’ve got an SST app with a Next.js site, with multiple pnpm workspaces. The issue was that I had different versions of zod in these workspaces (3.22.4 and 3.22.2). I updated all of them to X.X.4 and the problem was no more.

Vercel deployment error: I found this after trying to deploy the frontend to Vercel separately; it got stuck on the above problem instead of the out of memory error.

Hello,

I’m experiencing an issue that might be related to the problem discussed in this Git issue. This issue is currently blocking my team from migrating our project to use the edge runtime.

The problem I’m facing involves a significant difference in RAM usage during compilation when using the experimental-edge runtime compared to the nodejs runtime. I encountered this issue when I created a fresh project using create-next-app with the pages router and then multiplied the default Next.js pages with different names and placed them in different directories, resulting in approximately 115 pages, which is nearly my project’s size.

On my laptop, when I build the project with the experimental-edge runtime, the RAM usage spikes from 10 GB to 15.8 GB (100%) during compilation, and the process takes approximately 1 minute to complete. However, when I use the nodejs runtime, the RAM usage only reaches around 11.4 GB at its peak, and the compilation finishes in about 10 seconds.

You can reproduce this issue by checking out the main branch, which uses the nodejs runtime, and the runtime-experimental-edge branch, which uses the edge runtime. I’ve also included some screenshots in the public folder of my memory monitor in the task manager for confirmation, in case they are relevant to troubleshooting this problem.

Node version: 18.18.2

Repository for reproduction: https://github.com/gvatsov/high-ram-example

(image)

I can confirm that upgrading from next@13.4.20-canary.16 (and previous versions) to next@13.4.20-canary.35 fixed the memory issues for me that kept popping up in dev every few minutes.

Same here, but after upgrading to the canary it started throwing unexpected errors in production like this. I’m afraid to bump to 13.5.1 because there are also reports (#1, #2, #3) of the same problem in that new version.

@MariuzM make sure that you test the latest canary (currently next@13.4.20-canary.31) from the Releases page

In the comment for the fix you mention In development, we want to split code that comes from 'node_modules' into their own chunks. This is because in development, we often need to reload the user bundle due to changes in the code.. In our case we’re seeing the infinite memory crash occurring only in production under significant load and especially related to the undici fetch failures. Can a fix related to compilation be related to that? Does Next re-import the dependency tree on every page render?

I’m running Node v18.10.0

Please see the first comment 🙏

In investigating these we were able to find there was one memory leak in Node.js itself when using fetch() in Node.js versions before 18.17.0 (you’ll want to use 18.17.1 for security patches though).

My comment wasn’t relevant, so I have deleted the content, thank you @kachkaev

You can use this code to see the issue. The server is getting aborted silently without any errors.

https://github.com/codelitdev/courselit/tree/tailwindcss-2

Logs

rajat@rajat-laptop:~/projects/courselit$ yarn dev
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- ready started server on [::]:3000, url: http://localhost:3000
- event compiled client and server successfully in 545 ms (18 modules)
- wait compiling...
- event compiled client and server successfully in 263 ms (18 modules)
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env.local
- info Loaded env from /home/rajat/projects/courselit/apps/web/.env
- wait compiling /404 (client and server)...
- wait compiling / (client and server)...
rajat@rajat-laptop:~/projects/courselit$

Upgrading from next@13.4.12 to next@13.4.19 didn’t solve the memory leak, but it broke the server restart. In .12 I’d get the (too frequent) memory-running-out warning, then the server would restart and a refresh in the browser would work (system user memory usage would dip by a few GB after the restart).

Thanks! Added both to our internal copy of this issue so that your keys don’t leak. We’ll take a look!

I understand your frustration with all the “same” comments, but I’m sorry to say I’m doing my best to be helpful here, @timneutkens. I’m not an experienced enough developer to get to the bottom of this myself; I was just offering the best I could to try and support by collecting data locally.

I’ll leave it to you and the pros.