next.js: [NEXT-1143] Dev mode slow compilation

⚠️ this original post has been edited by @timneutkens to reflect this comment ⚠️

Changes in the past week

I’ve been investigating this over the past week. Made a bunch of changes, some make a small impact, some make a large impact. Here’s a list:

You can try them using npm install next@canary.

Help Investigate

In order to help me investigate this I’ll ideally need an application that can be run. If you can’t provide that (I understand if you can’t), please provide the .next/trace file.

If possible follow these steps which would give me the best picture to investigate:

  • npm install next@canary (use the package manager you’re using) – We want to make sure you’re using the very latest version of Next.js which includes the fixes mentioned earlier.
  • rm -rf .next
  • start development using the NEXT_CPU_PROF=1 and NEXT_TURBOPACK_TRACING=1 environment variables (set NEXT_TURBOPACK_TRACING=1 regardless of whether you’re using Turbopack; it only has an effect when you do). E.g. (an optional package.json script for this is sketched after these steps):
    • npm: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 npm run dev
    • yarn: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 yarn dev
    • pnpm: NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 pnpm dev
  • Wait a few seconds
  • Open a page that you’re working on
  • Wait till it’s fully loaded
  • Wait a few seconds
  • Make an edit to a file that holds a component that is on the page
  • Wait for the edit to apply
  • Wait a few seconds
  • Make another edit to the same file
  • Wait a few seconds
  • Exit the dev command (ctrl+c)
  • Upload the CPU profiles written to the root of the application directory to https://gist.github.com
  • Upload the .next/trace file to https://gist.github.com – Please don’t run trace-to-tree yourself, as I use some other tools (e.g. Jaeger) that require the actual trace file.
  • If you’re using Turbopack, upload the .next/trace.log as well; if it’s too large for GitHub gists you can upload it to Google Drive or Dropbox and share it that way.
  • Upload next.config.js (if you have one) to https://gist.github.com
  • Share it here
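
If it’s more convenient, you can wrap those environment variables in a throwaway package.json script so you don’t have to retype them (hypothetical script name dev:profile; the inline env var syntax assumes macOS/Linux, on Windows you may need cross-env or to set them in your shell first):

"scripts": {
    "dev:profile": "NEXT_TURBOPACK_TRACING=1 NEXT_CPU_PROF=1 next dev"
}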

Known application-side slowdowns

Collecting some things I’ve seen before that cause slow compilation, as application-side configuration is often the root cause:

  • If you’re on Windows, disable Windows Defender; it’s a known cause of extreme slowdowns in filesystem access, as it sends each file to an external endpoint before allowing it to be read/written
  • Filesystem slowness overall is what we’ve seen as the cause of problems, e.g. with Docker
  • react-icons, material icons, etc. Most of these libraries publish barrel files with a lot of re-exports. E.g. material-ui/icons ships 5500 module re-exports, which causes all of them to be compiled. You have to add modularizeImports to reduce this (a minimal sketch follows this list); here’s a fuller example: https://github.com/vercel/next.js/issues/45529#issuecomment-1531912811
  • Custom postcss config, e.g. tailwindcss with a content setting that tries to read too many files (e.g. files not relevant for the application)
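
As a minimal sketch of the modularizeImports fix mentioned above (assuming @mui/icons-material is the offending library; swap in whatever barrel-file package you actually import from), a next.config.js entry looks roughly like this:

// next.config.js (illustrative sketch, not a drop-in config for every setup)
/** @type {import('next').NextConfig} */
const nextConfig = {
  modularizeImports: {
    '@mui/icons-material': {
      // Rewrites `import { Add } from '@mui/icons-material'` into
      // `import Add from '@mui/icons-material/Add'`, so only the icons
      // you actually use are compiled instead of the whole barrel file.
      transform: '@mui/icons-material/{{member}}',
    },
  },
};

module.exports = nextConfig;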

This and other slowdown reports are currently the top priority for our team. We’ll continue optimizing Next.js with webpack where possible. The Turbopack team is currently working on getting all Next.js integration tests passing when using Turbopack as we continue working towards stability of Turbopack.

Original post

Verify canary release

  • I verified that the issue exists in the latest Next.js canary release

Provide environment information

Operating System:
      Platform: linux
      Arch: x64
      Version: #1 SMP Fri Jan 27 02:56:13 UTC 2023
    Binaries:
      Node: 18.13.0
      npm: 8.19.3
      Yarn: 1.22.18
      pnpm: 7.30.5
    Relevant packages:
      next: 13.3.1
      eslint-config-next: 13.3.1
      react: 18.2.0
      react-dom: 18.2.0

Which area(s) of Next.js are affected? (leave empty if unsure)

No response

Link to the code that reproduces this issue

https://github.com/DigitalerSchulhof/digitaler-schulhof

To Reproduce

Note that I have been unable to replicate this issue in a demo repository.

Describe the Bug

The issue is that Next.js is generally slow in dev mode. Navigating to new pages takes several seconds:

[next] ready - started server on 0.0.0.0:3000, url: http://localhost:3000
[next] info  - Loaded env from /home/jeengbe/dsh/digitaler-schulhof/.env
[next] warn  - You have enabled experimental feature (appDir) in next.config.js.
[next] warn  - Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
[next] info  - Thank you for testing `appDir` please leave your feedback at https://nextjs.link/app-feedback
[next] event - compiled client and server successfully in 1574 ms (267 modules)
[next] wait  - compiling...
[next] event - compiled client and server successfully in 219 ms (267 modules)
[next] wait  - compiling /(schulhof)/Schulhof/page (client and server)...
[next] event - compiled client and server successfully in 3.6s (1364 modules)
[next] wait  - compiling /(schulhof)/Schulhof/(login)/Anmeldung/page (client and server)...
[next] event - compiled client and server successfully in 1920 ms (1411 modules)
[next] wait  - compiling /api/schulhof/auth/login/route (client and server)...
[next] event - compiled client and server successfully in 625 ms (1473 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/page (client and server)...
[next] event - compiled client and server successfully in 1062 ms (1482 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/Profil/page (client and server)...
[next] event - compiled client and server successfully in 1476 ms (1546 modules)
[next] wait  - compiling /(schulhof)/Schulhof/Nutzerkonto/Profil/Einstellungen/page (client and server)...
[next] event - compiled client and server successfully in 2.1s (1559 modules)

The only somewhat reasonable time would be 600ms for the API route /api/schulhof/auth/login/route, even though that is still quite a lot slower than what it should be given its size.

It also doesn’t look right to compile ~1500 modules for each page, as most of them should be cached. The pages are not very different.

Even an empty API route takes several hundred ms. The following example contains solely type exports:

[next] wait  - compiling /api/schulhof/administration/persons/persons/settings/route (client and server)...
[next] event - compiled successfully in 303 ms (107 modules)

I am not exactly sure how to read trace trees, but what stands out is that there are (over multiple runs) several entry next-app-loader that take 2+ seconds to complete:

│  │  ├─ entry next-app-loader?name=app/(schulhof)/Schulhof/page&page=/(schulhof)/Schulhof/page&appPaths=/(schulhof)/Schulhof/page&pagePath=private-next-app-dir/(schulhof)/Schulhof/page.tsx&appDir=/home/jeengbe/dsh/digitaler-schulhof/app&pageExtensions=tsx&pageExtensions=ts&pageExtensions=jsx&pageExtensions=js&rootDir=/home/jeengbe/dsh/digitaler-schulhof&isDev=true&tsconfigPath=tsconfig.json&assetPrefix=&nextConfigOutput=! 1.9s

Find both dev and build traces here: https://gist.github.com/jeengbe/46220a09846de6535c188e78fb6da03e

Note that I have modified trace-to-tree.mjs to include event times for all events.

It also seems unusual that none of the modules have child traces.

Expected Behavior

Initial load and navigating should be substantially faster.

Which browser are you using? (if relevant)

No response

How are you deploying your application? (if relevant)

No response

From SyncLinear.com | NEXT-1143

About this issue

  • Original URL
  • State: open
  • Created a year ago
  • Reactions: 146
  • Comments: 384 (117 by maintainers)

Commits related to this issue

Most upvoted comments

I’m realizing that all this hype about next 13 is more commercial than real.

“700x faster than X”, “156x better than Y”… but in practice, even a hello world takes that long to load?

I read in some answer that it is intentional to render “on-demand” on the first visit of a page to save resources, CPU, etc… But I don’t need that, there are resources to spare… There’s no point in saying “✓ Ready in 3s” if, when opening the link, it only then starts to really work:

○ Compiling /page…
✓ Compiled /page in 7.4s (1061 modules)
✓ Compiled in 775ms (381 modules)
○ Compiling /api/auth/[…nextauth]/route …
✓ Compiled /api/auth/[…nextauth]/route in 5.8s (1043 modules)
✓ Compiled /(auth)/login/page in 991ms (1100 modules)

Isn’t there a way to create an approach like: when ready, start crawling and rendering the next links and pages close to the current one instead of standing still and doing nothing?

Or as a last resort, render everything! Once ready, let me start working and keep updating only what was changed…

This is so frustrating!

Same for me, also in the dev env; navigating to different pages via the Link component is pretty slow.

Hey all, I wanted to share an update on this after spending most of this week investigating slowdowns together with @shuding.

For Vercel’s application, which we used for profiling, the following list of changes had a big impact, reducing compile times for a page without caching (intentionally deleting the cache each run) by 53.48%. We’re still working on further investigation and improvements, as there is still more room to improve. I’d appreciate it if you’re able to upgrade to next@canary (npm install next@canary) and run the steps to provide the CPU profile and trace again (steps here: https://github.com/vercel/next.js/issues/48748#issue-1680013792). That would allow us to confirm the changes have the intended impact on your application too, as we’ve been using Vercel’s application to confirm these improvements.

Both App and Pages

  • https://github.com/vercel/next.js/pull/51905
    • The big one: this shaved off over a second on Vercel’s application, likely even more, as we can’t measure the total time saved from not blocking the main thread. I’ve seen this one being a slowdown on the apps we got traces for as well, e.g. 500ms-1s even on much smaller applications.
    • Checked this on @VanTanev’s CPU profile and it saves 3.52 seconds
    • Checked this on @SuttonJack’s CPU profile and it saves 7.64 seconds
  • https://github.com/vercel/next.js/pull/50900
    • Default modularizeImports entries that improve performance when certain libraries are used, e.g. @mui/icons-material, @mui/material, date-fns, lodash, lodash-es, ramda, react-bootstrap. When you use those, significantly fewer modules will be compiled.
  • https://github.com/vercel/next.js/pull/51835
    • Similar to #51589; it’s a smaller improvement that scales with the number of modules
  • https://github.com/vercel/next.js/pull/51879
    • All compilers now share the same input filesystem, which means they can share cached stat and readFile results between them. This improved performance on Vercel’s app by 300-400ms on initial compiles
  • https://github.com/vercel/next.js/pull/51851
    • Enabled gzip compression of the webpack cache; this helps reduce disk usage on larger applications, as these caches can get quite big. There’s a small performance hit, but it’s won back by fs.readFile being faster for the smaller file, so overall it’s more efficient disk space usage.
  • https://github.com/vercel/next.js/pull/51785
    • Small change that speeds up startup by 68ms for all applications, likely more than 68ms on slower machines.

What I did before out of office:

  • https://github.com/vercel/next.js/pull/50379
    • Reduced bootup compilation by deferring the runtime for app or pages to the first request for either; this allows us to avoid compiling the pages runtime when only app is used

App Only

What I did before out of office:

I love next, but this is a complete show stopper. Sometimes it takes 10+ seconds outside of docker for me on a Mac M2 to navigate one page.

This is insane.

The problem with this issue (and any issues around memory usage) is that the majority of people posting are all running into different problems. If you look at my earlier replies you’ll notice some patterns: misconfiguration of TailwindCSS, additional libraries added (e.g. Datadog) that instrument Node.js internals and then severely slow down the application, and many other examples of this “every application is a somewhat unique problem” situation.

In some cases it’s over 10K modules being imported, for example when people use material-ui/icons, which publishes an incredible 11,000 modules behind a single re-export file; many similar icon libraries do that as well.

As part of that we’ve implemented some very specific optimizations for these particular libraries that can expand them, with the caveat that this requires the library to be side-effect free; if it has side effects, those will not run, as only the used exports are included.

This 11,000 modules case was already a problem in Next.js 12 / Pages Router. The main reason you’re seeing that particular case take ~twice as long with App Router is that in App Router all node_modules are bundled on the server side too. You might be wondering why we would even make that change; it was working fine in Pages Router, right? Well, there are a few reasons:

  • You can now publish Server Components, Server Actions, and Client Components. In order for those to be handled correctly, the compiler / bundler (in this case webpack) needs to know they exist, as there are compile-time changes made to the module graph in order to make Client Components work. Without bundling you (and any other library author) would not be able to build reusable components that are shared on npm.
  • We’ve been doing extensive research into cold boots, optimizing the time to next start, boot on Vercel, boot on AWS Lambda, and boot on Edge Functions, and found a big improvement to cold boots when bundling as much as possible. If you’re curious as to why it’s faster: each require is synchronous and has to read the filesystem in Node.js, so when everything is in a single file you can skip quite some overhead (a tiny illustrative benchmark follows this list).
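
To make the synchronous-require overhead concrete, here is a small standalone Node.js sketch (purely illustrative, not from the issue; numbers will vary by machine and filesystem) that generates a couple of thousand tiny CommonJS modules and compares requiring them one by one against requiring a single concatenated file:

// require-bench.js: hypothetical micro-benchmark for many requires vs. one bundle
const fs = require('fs');
const os = require('os');
const path = require('path');

const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'require-bench-'));
const count = 2000;

// Write `count` tiny modules, plus one "bundle" file that inlines them all.
let bundled = '';
for (let i = 0; i < count; i++) {
  fs.writeFileSync(path.join(dir, `m${i}.js`), `module.exports = ${i};`);
  bundled += `exports.m${i} = ${i};\n`;
}
fs.writeFileSync(path.join(dir, 'bundle.js'), bundled);

// Many small files: each require() is a synchronous filesystem read + compile.
let start = process.hrtime.bigint();
for (let i = 0; i < count; i++) require(path.join(dir, `m${i}.js`));
console.log('many requires:', Number(process.hrtime.bigint() - start) / 1e6, 'ms');

// One bundled file: a single read and compile for the same exports.
start = process.hrtime.bigint();
require(path.join(dir, 'bundle.js'));
console.log('single bundle:', Number(process.hrtime.bigint() - start) / 1e6, 'ms');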

If you search back in this issue you’ll find some of my earlier updates around the optimizations we’ve done in 13.4.8/9/10 (and later on too) that reduced most reports of slowdowns without a clear reason, i.e. people that had no large module graph. The reason those ran into this slowdown is that webpack has a heuristic for turning file watchers into directory watchers internally; as the number of entrypoints / modules in the module graph increased, webpack (watchpack to be precise) started converting thousands of file watchers to directory watchers, which is a slow process. This is what I ended up optimizing.

There were a few other cases where the number of file reads could be reduced, which we’ve also implemented as part of those patch releases, along with some other optimizations.

Since landing those optimizations things have been mostly back to normal in terms of the CPU profiles that were provided, where the majority of complaints are coming from applications that:

  • Customized configuration, e.g. added webpack plugins / loaders that slow down the application
  • Instrumentation added, e.g. the Datadog module mentioned in earlier messages
  • Larger module graphs, i.e. 10,000 modules or more; this case in particular is why we started building Turbopack in the first place, as we found that Next.js applications keep growing bigger, especially as we’re supporting some of the largest websites (by rank/traffic) on the internet.

In looking into some of these, there is another thing to keep in mind that gets reported as “slowness”: when you edit Server Components, we have to hit the server to re-render the entire page to get the new results. Currently that means editing Server Components also triggers your data fetching, which can often look like “Next.js is slow”. For this case we’re exploring whether we can automatically reuse the last data fetching result during the re-render after Server Components change; that would remove data from the critical path for Server Components changes.

Our goal is indeed for Turbopack to be the default in Next.js so that the majority of applications benefit from faster development and production builds. We are currently running the entire Next.js test suite against the Turbopack implementation every day, and we’ll be sharing a website soon where you can track the progress; we’re currently at 87% of the development test suite, and this number is increasing every day.

We still have some ideas / learnings from Turbopack that could be somewhat backported to webpack, like leveraging the optimized CSS parser based on LightningCSS and a few changes to how entries are generated, but I’m not expecting that to feel sufficient once you’ve used Turbopack.

Overall, keep the CPU profiles coming; it’s helpful to see them regardless, to check whether there are other parts of Next.js / webpack / Turbopack to optimize. @bfowle I’m expecting it to be a combination of things; I wasn’t able to reproduce the case where both profiles aren’t written, but I’ll ask someone on our team to take a look into it 🙏

If we look at video editing software as an analogy, I really love how Apple Final Cut Pro does it:

  • If the user is actively doing something, prioritize CPU for that.
  • If the app has an idle moment, start rendering stuff in the background.

Before this, in video editing software, you had to manually click “render” to render all clips, or try and play that clip which would trigger a render just on that clip (this is similar to next dev right now)

The Apple Final Cut Pro method should be translated to the Next.js compiler:

  • If nothing is happening, start background compiling pages.
  • If user is loading a route, stop background compiling and make sure the current one is compiled.
  • On file change, prioritize compiling those pages again in the bg.

I believe this would make next dev a lot faster/nicer to work with, but I’m not sure how complicated this would be to implement.

Last solution guys: Switch to Linux (windows sux).

A month later no updates on this? Makes development on appDir absolutely impossible. @timneutkens ?

Linked a bunch of related issues on this: https://github.com/vercel/next.js/issues/50332

I was out of office the past two weeks so sorry that there wasn’t an update in the meantime. We’ve made a few improvements while I was out:

  • PR: #50900 - Available on next@canary - Default modularizeImports for @mui/icons-material, @mui/material, date-fns, lodash, lodash-es, ramda, react-bootstrap
    • This means you no longer have to add the modularizeImports config for these libraries
  • PR: #51589 - Working on landing this tomorrow with Shu.
    • Reduce complexity of the RSC manifest
    • We saw a ~20-30% improvement with this change
  • PR: #51174 - Introduced NEXT_CPU_PROF=1 environment variable to write CPU profiles to disk
    • I’ve updated the steps here: https://github.com/vercel/next.js/issues/48748#issuecomment-1578374105 to reflect the new environment variable. This will allow us to investigate further than just the timing info that Next.js tracks by default
    • If you’ve already provided the trace file please provide the CPU profiles too if possible as it gives more granular insight 🙏

I’m going to have a look into the traces tomorrow 👍

It’s almost impossible to use dev mode in Next.js development due to incredibly slow compilation that heavily affects the system too. Most of the time Next.js goes out of memory, and the system freezes for a long time.

- event compiled client and server successfully in 31.2s (12025 modules)

We are not using dev mode anymore for development. Our solution for now is:

$ yarn build && yarn start

This is the most critical problem for nextjs and I really hope there is a solution for it anytime soon.

I thought this would go without saying after my latest updates to this issue but apparently it does not so I’ll repeat it again:

If you’re on a version of Next.js before 13.4.8, please upgrade to next@latest, as earlier comments have shown that this holds a massive improvement for many applications.

If you are still running into issues, please follow the steps provided in the initial post. I’ve been keeping those steps up to date and I can’t make them any simpler; all you have to do is run your application with an environment variable and share the trace and CPU profiles: https://github.com/vercel/next.js/issues/48748

I’ll reiterate some known slowdowns which we can’t optimize much:

  • Using TailwindCSS with a content configuration that includes the entire filesystem
    • If the content configuration is set to match all possible files in e.g. a monorepo, TailwindCSS will traverse and read all of those files during bootup, which is how the Tailwind JIT compiler works. After I shared the CPU profiles with them, they started working on an optimization that makes reading the files async. They’re also working on a new Rust-based compiler that does the crawling much faster, but even then a wrong configuration could cause much more to be crawled than intended (an illustrative content config follows this list).
  • Using Sass
    • We’ve seen multiple traces / cpu profiles where the Sass compilation was quite slow / blocking the main thread.
  • Using extremely large barrel files
    • E.g. when you import react-icons or material-ui/icons you’d end up importing 11K JS files because of the way the package is published/consumed. We’ve added a default config for this and Ant Design as well as a few other libraries.
    • We’re still working on transforming lucide-react, which shadcn/ui uses; that one is published in a non-trivial format, so it’s not simple to convert import->path. We’ve worked out additional modularizeImports configuration to make it work, which will be landing soon, likely this or next week.
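
As an illustration of the content point above (hypothetical globs; adapt them to your own repo layout), the difference between an overly broad and a scoped Tailwind configuration is roughly:

// tailwind.config.js (sketch; globs are examples only)
module.exports = {
  content: [
    // Too broad: in a monorepo this can crawl node_modules, build output,
    // and unrelated packages on every boot.
    // '../../**/*.{js,ts,jsx,tsx}',

    // Scoped: only the source files that actually contain class names.
    './app/**/*.{js,ts,jsx,tsx}',
    './components/**/*.{js,ts,jsx,tsx}',
  ],
  theme: { extend: {} },
  plugins: [],
};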

On the other hand, there are applications that upgraded, got a performance boost, and then found “it’s still slow”, specifically when opening a page for the first time / navigating to other pages. We’ve been looking at those CPU profiles too; generally I’ve replied saying I didn’t see anything large to optimize for those. Generally those are larger applications, 10K modules or more, and those cases are exactly why we started building Turbopack.

Before Next.js 13 was released there were reports of people building these larger applications (we realized later a bunch of these got bitten by material-ui/icons and such). Vercel’s application is similar to those larger applications: it requires a lot of files to be compiled, especially on the dashboard. So we started building Turbopack as a way to get to a much faster “time to opening a page”. We’re still making progress on Turbopack and I’m excited about the early results there. Ultimately these larger applications need the new infrastructure in order to scale further, and that’s why we’ve invested so much time and energy into building Turbopack. That is not just about development speed but also about scaling next build.

Upgrading to 13.4.8 yields major performance improvements for one of my largest Next.js apps. ~700 pages, 10k TS modules, 100+ contributors.

Some pages that were previously taking 40-90 seconds on 13.4.7 are compiling in ~5-7 seconds on 13.4.8.

Thanks all who worked on this! ▲

Solution (on version 13.1), in next.config.js:

    modularizeImports: {
        '@mui/material': {
            transform: '@mui/material/{{member}}'
        },
        '@mui/icons-material': {
            transform: '@mui/icons-material/{{member}}'
        }
    }

+1, it’s the same here; hitting the page the first time seems fine, but routing via links gets stuck.

Same issue here: 13.5.6 works amazingly fast (2-5 seconds), but after upgrading to Next 14, dev compilation takes around 80-90 seconds.

In case it helps anyone, I tried a number of things to address this in a client project. The thing that sped things up for us ultimately was the combination of:

  1. Migrating the app from the app router to the pages router (quite easy if you keep the same filenames; it’ll also make going back easier when this is fixed in Next.js).
  2. Removing all barrel imports in our codebase.
  3. Adding one modularizeImports rule (for lodash), though I think this had minimal effect (see the sketch after this list).
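
For reference, the lodash rule from step 3 is the standard single-package transform (a sketch; it assumes the default lodash package layout on npm):

// next.config.js (only the lodash rule shown)
module.exports = {
  modularizeImports: {
    lodash: {
      // `import { debounce } from 'lodash'` becomes `import debounce from 'lodash/debounce'`
      transform: 'lodash/{{member}}',
    },
  },
};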

Before the above, average compile time was over 30s.

Moving back to pages router gave us a ~30% improvement (~20s compiles)

Removing imports did nothing when I did it in isolation; but combined with the migration back to pages router, it dropped average time to < 1s again. 🎉

Hope that helps for others running into this issue!

Has there been any updated guidance on how to proceed here? The performance has not seemed to improve over ~6 months of updates; it’s still significantly slower than with the pages router.

Is it normal for there to be that many modules?

My issue has been fixed. Switching from the app router to the pages router has significantly accelerated dev mode compilation. (Next.js@13.4.12).

I see that slow route changes in dev mode are showing a ‘[Fast Refresh] rebuilding’ message in the browser console. Sometimes it performs a full page reload when changing routes even if no files have been edited.

Turbopack for development is now available as release candidate in Next.js 14.2: https://nextjs.org/blog/next-14-2

@timneutkens is there a downside to using modularizeImports? It’s so hard to google this, and there is no good list of things that can cause issues, etc., unless I missed something obvious.

I was wondering if there should be a goto list of libraries that are slow and how to modularize them.

Maybe we could create a repo for this, so people can collaborate on the best-known modularizeImports configurations for each package.

I love next, but this is a complete show stopper. Sometimes it takes 10+ seconds outside of docker for me on a Mac M2 to navigate one page.

This is insane.

Yep, even worse, I sometimes get 50 seconds on a simple page; that’s because it is also building other things related to it in parallel, I guess.

Not even outside Docker; I just ran a test outside Docker and the timing is exactly the same, no difference… It’s getting slower and slower.

It’s slowing down development…!

Hello @timneutkens! Following the comments in this discussion, I see that you have shared screenshots of a few debugging tools which help visualize the trace file (apart from the existing trace-to-tree utility).

Some examples are these https://github.com/vercel/next.js/issues/48748#issuecomment-1640153357, https://github.com/vercel/next.js/issues/48748#issuecomment-1643457679

Are there any shared steps by which we can run similar analysis? I see one of them is Jaeger but I was not able to find a way to convert the .trace file to one which Jaeger understands.

Hey, I have the same issue using WSL2. Here is my trace file

I noticed that the first few changes work well, but after 5 - 10 minutes of making changes, the page will become stuck in a loading loop, even if next tells me that it compiled the pages successfully.

I also noticed that this issue becomes more and more visible the more files and components I have in the project.

I have tested running Next.js on multiple devices and operating systems for different projects, including macOS, Linux Mint and Windows, and Next.js is terribly slow; in fact a React project with plain webpack is way faster. I remember Next.js being fast; this slowness came sometime after the app router was introduced. Can you please fix this, or use tools like Vite? If the build is slow, it is slow, and we are actually experiencing it; stop giving us false numbers and promises.

@jorgekorgut you can run scripts/send-trace-to-jaeger with cargo run /path/to/trace.txt, which will put the trace in Jaeger; the URL to open it is at the start of the output.

Alternatively you can run node scripts/trace-to-tree.mjs /path/to/trace.txt which is more of a summary but easier to understand if you’re not familiar with the traces 👍

@kmvan updating Next.js to 13.4.19 (the latest as of this writing) did actually help a lot. The dev server is back to normal. However, I experienced multiple times that the server automatically exited gracefully after a while. It was then possible to start the dev server, but it would exit gracefully after trying to visit a page, without the page loading. I looked at the processes and it turned out that “next-render-worker-page” and “next-render-worker-app” were running multiple times.

After terminating all these “orphaned” processes, I was able to start and use the dev server again.

I’m able to reproduce this with my setup.

Apple Silicon users verify Rosetta

Just going to mention this one as I’ve just discovered it. I thought I was running natively, but it appears a bad Node install had ruined it for quite some time. For Apple Silicon M1/M2 users only.

Make sure you are NOT on Rosetta!

# Check shell
arch
# Then check node
node -p "process.arch"
# If arm64 you're good
# If x64 - This is very bad

If you are on x64 on Apple Silicon, your entire Node ecosystem is slowed down substantially. Steps to resolve:

  1. Wipe all global installs of node.
  2. Verify architecture is arm64 by typing arch in terminal
  3. If using pnpm: first wipe the pnpm global env symlinks following their docs, then force-install pnpm using the POSIX bash script from their docs
  4. Install node via homebrew (make sure its aliased properly) or via Node website
  5. Verify again with node -p "process.arch"
  6. Clean install deps again with your package manager of choice

This trimmed my initial index compile time down from 2.9s to 1.5s

NOTE: If you were on Rosetta and switched to arm64 as above, every codebase that uses native code will need a reinstall of its dependencies to work properly.

This got bubbled up to me and @alexander-akait, is this some sort of webpack API issue? @timneutkens

I’ve not seen any core traces to understand the regression here, but if there’s something webpack-related that is blocking, then just ping us.

Using turbopack is a game changer, 10-30s compile times down to 0.5s!

For those who want to try it, edit package.json:

"scripts": {
   ...
    "dev": "next dev",
   ...
}

to

"scripts": {
    ...
    "dev": "next dev --turbo",
    ...
}

I confirm that the Next.js app dir in dev mode and dynamic routing are very, very slow on Docker now.

Also facing very slow compile times using the latest canary (also tried stable v14) with the app router; my guess is that barrel files (at least in my case) are what is causing the slowdown.

Trace gist: https://gist.github.com/srosato/f20401047281482e25b88b4b82b588c7 (I redacted paths since it’s a private project)

Without Turbo

Using Turbo

Faster, but I suspect the fact that I have a lot of barrel files is still an issue, even with the optimizePackageImports option.

I use nx@17.2.8, tried with latest type-graphql@^2.0.0-beta.6.

Tried the --turbo flag, but I get errors on TypeScript decorators with type-graphql on @Ctx() or even @Args().

Error

/home/srosato/dev/customer/[redacted]/apps/admin/.next/server/chunks/[project]_libs_api_src_62bcc4._.js:735
              @__TURBOPACK__imported__module__$5b$project$5d2f$node_modules$2f2e$pnpm$2f$type$2d$graphql$40$2$2e$0$2e$0$2d$beta$2e$6_class$2d$validator$40$0$2e$14$2e$1_graphql$2d$scalars$40$1$2e$22$2e$4_graphql$40$16$2e$8$2e$1$2f$node_modules$2f$type$2d$graphql$2f$build$2f$esm$2f$decorators$2f$index$2e$js__$5b$app$2d$rsc$5d$__$28$ecmascript$29$__$3c$facade$3e$__["Ctx"]()
              ^
SyntaxError: Invalid or unexpected token
  at internalCompileFunction (node:internal/vm:73:18)
  at wrapSafe (node:internal/modules/cjs/loader:1178:20)
  at Module._compile (node:internal/modules/cjs/loader:1220:27)
  at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
  at Module.load (node:internal/modules/cjs/loader:1119:32)
  at Module._load (node:internal/modules/cjs/loader:960:12)
  at Module.require (node:internal/modules/cjs/loader:1143:19)
  at mod.require (/home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/require-hook.js:65:28)
  at require (node:internal/modules/cjs/helpers:121:18)
  at loadChunkPath (/home/srosato/dev/customer/[redacted]/ss-web-apps/apps/admin/.next/server/chunks/[turbopack]_runtime.js:419:26)
  at Object.loadChunk (/home/srosato/dev/customer/[redacted]/ss-web-apps/apps/admin/.next/server/chunks/[turbopack]_runtime.js:407:16)
  at Object.<anonymous> (/home/srosato/dev/customer/[redacted]/ss-web-apps/apps/admin/.next/server/app/api/graphql/route.js:12:9)
  at Module._compile (node:internal/modules/cjs/loader:1256:14)
  at Module._extensions..js (node:internal/modules/cjs/loader:1310:10)
  at Module.load (node:internal/modules/cjs/loader:1119:32)
  at Module._load (node:internal/modules/cjs/loader:960:12)
  at Module.require (node:internal/modules/cjs/loader:1143:19)
  at mod.require (/home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/require-hook.js:65:28)
  at require (node:internal/modules/cjs/helpers:121:18)
  at requirePage (/home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/require.js:109:84)
  at /home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/load-components.js:74:84
  at async loadComponentsImpl (/home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/load-components.js:74:26)
  at async DevServer.findPageComponentsImpl (/home/srosato/dev/customer/[redacted]/ss-web-apps/node_modules/.pnpm/next@14.1.1-canary.38_@babel+core@7.22.20_react-dom@18.2.0_react@18.2.0/node_modules/next/dist/server/next-server.js:664:36) {
page: '/api/graphql'
}

Barrel files problem?

I use a monorepo with nx and lots of barrel files. Removing the barrel files would be a heavy task and would degrade DX by exposing the internal file structure of a library (though arguably DX is already degraded by slow compile times).

The thing is, once the production build is compiled, it is very fast, thanks to tree-shaking and sideEffects being set to false along with outputFileTracing, but during development we are hitting limits. I recognize that splitting these into micro-services within the monorepo, or even using dynamic imports, could help with code splitting, but barrel files remain a problem in development (I think, maybe I am wrong) that I cannot seem to solve.

I also use react-icons and tried to use optimizePackageImports in an attempt to optimize my barrel files, like so:

// next.config.js

const { compilerOptions } = require('../../tsconfig.base.json');
const nextConfig = {
  // ...
  experimental: {
    // optimizePackageImports is an experimental option, so it goes under `experimental`
    optimizePackageImports: Object.keys(compilerOptions.paths).filter((path) => {
      return !['@ss/db'].some((pkg) => path.startsWith(pkg));
    }),
  },
  // ...
};

I have a lot of paths defined in my tsconfig.json.

View ts paths
    "paths": {
      "@admin/blocks": ["apps/admin/blocks/index.ts"],
      "@admin/components": ["apps/admin/components/index.ts"],
      "@admin/hooks": ["apps/admin/hooks/index.ts"],
      "@admin/layouts": ["apps/admin/layouts/index.ts"],
      "@admin/pages": ["apps/admin/pages/index.ts"],
      "@admin/routing": ["apps/admin/routing/index.ts"],
      "@admin/screens": ["apps/admin/screens/index.ts"],
      "@ss/api": ["libs/api/src/index.ts"],
      "@ss/api/edge": ["libs/api/src/edge.ts"],
      "@ss/auth": ["libs/auth/src/index.ts"],
      "@ss/blocks/clock-process": ["libs/blocks/clock-process/src/index.ts"],
      "@ss/blocks/date-range": ["libs/blocks/date-range/src/index.ts"],
      "@ss/blocks/employees-multiselect": ["libs/blocks/employees-multiselect/src/index.ts"],
      "@ss/blocks/error": ["libs/blocks/error/src/index.ts"],
      "@ss/blocks/jobs-multiselect": ["libs/blocks/jobs-multiselect/src/index.ts"],
      "@ss/blocks/login": ["libs/blocks/login/src/index.ts"],
      "@ss/blocks/logout": ["libs/blocks/logout/src/index.ts"],
      "@ss/blocks/password-reset": ["libs/blocks/password-reset/src/index.ts"],
      "@ss/blocks/reports": ["libs/blocks/reports/src/index.ts"],
      "@ss/blocks/safe-area": ["libs/blocks/safe-area/src/index.ts"],
      "@ss/blocks/search-input": ["libs/blocks/search-input/src/index.ts"],
      "@ss/blocks/work-weeks": ["libs/blocks/work-weeks/src/index.ts"],
      "@ss/component/action-sheet": ["libs/component/action-sheet/src/index.ts"],
      "@ss/component/alert": ["libs/component/alert/src/index.ts"],
      "@ss/component/avatar": ["libs/component/avatar/src/index.ts"],
      "@ss/component/button": ["libs/component/button/src/index.ts"],
      "@ss/component/clock-in-confirm": ["libs/component/clock-in-confirm/src/index.ts"],
      "@ss/component/date-picker": ["libs/component/date-picker/src/index.ts"],
      "@ss/component/dialog": ["libs/component/dialog/src/index.ts"],
      "@ss/component/dropdown": ["libs/component/dropdown/src/index.ts"],
      "@ss/component/form-input": ["libs/component/form-input/src/index.ts"],
      "@ss/component/hover": ["libs/component/hover/src/index.ts"],
      "@ss/component/icons": ["libs/component/icons/src/index.ts"],
      "@ss/component/indicator": ["libs/component/indicator/src/index.ts"],
      "@ss/component/jobs-table": ["libs/component/jobs-table/src/index.ts"],
      "@ss/component/links": ["libs/component/links/src/index.ts"],
      "@ss/component/loader": ["libs/component/loader/src/index.ts"],
      "@ss/component/location": ["libs/component/location/src/index.ts"],
      "@ss/component/logo": ["libs/component/logo/src/index.ts"],
      "@ss/component/masked-input": ["libs/component/masked-input/src/index.ts"],
      "@ss/component/metadata": ["libs/component/metadata/src/index.ts"],
      "@ss/component/pill": ["libs/component/pill/src/index.ts"],
      "@ss/component/popover": ["libs/component/popover/src/index.ts"],
      "@ss/component/select-input": ["libs/component/select-input/src/index.ts"],
      "@ss/component/settings-list": ["libs/component/settings-list/src/index.ts"],
      "@ss/component/slider": ["libs/component/slider/src/index.ts"],
      "@ss/component/storybook": ["libs/component/storybook/src/index.ts"],
      "@ss/component/table": ["libs/component/table/src/index.ts"],
      "@ss/component/text-field": ["libs/component/text-field/src/index.ts"],
      "@ss/component/time-picker": ["libs/component/time-picker/src/index.ts"],
      "@ss/component/timer": ["libs/component/timer/src/index.ts"],
      "@ss/component/timesheet": ["libs/component/timesheet/src/index.ts"],
      "@ss/component/toast": ["libs/component/toast/src/index.ts"],
      "@ss/component/toggle": ["libs/component/toggle/src/index.ts"],
      "@ss/component/typography": ["libs/component/typography/src/index.ts"],
      "@ss/data-import": ["libs/data-import/src/index.ts"],
      "@ss/date": ["libs/date/src/index.ts"],
      "@ss/date/tests": ["libs/date/src/test-helpers"],
      "@ss/db": ["libs/db/src/index.ts"],
      "@ss/db/edge": ["libs/db/src/edge.ts"],
      "@ss/debug": ["libs/debug/src/index.ts"],
      "@ss/domain": ["libs/domain/src/index.ts"],
      "@ss/environment": ["libs/environment/src/index.ts"],
      "@ss/environment-loader": ["libs/environment/src/environment-loader.ts"],
      "@ss/factories": ["libs/factories/src/index.ts"],
      "@ss/factories/jest": ["libs/factories/src/jest.ts"],
      "@ss/geocoder": ["libs/geocoder/src/index.ts"],
      "@ss/native-bridge": ["libs/native-bridge/src/index.ts"],
      "@ss/notifications": ["libs/notifications/src/index.ts"],
      "@ss/permissions": ["libs/permissions/src/index.ts"],
      "@ss/phone": ["libs/phone/src/index.ts"],
      "@ss/punch-clock": ["libs/punch-clock/src/index.ts"],
      "@ss/reports": ["libs/reports/src/index.ts"],
      "@ss/reports/server": ["libs/reports/src/api/server.ts"],
      "@ss/reports/tests": ["libs/reports/src/test-helpers"],
      "@ss/sdk": ["libs/sdk/src/index.ts"],
      "@ss/sdk/utils": ["libs/sdk/src/utils.ts"],
      "@ss/sdk/why-did-you-render": ["libs/sdk/src/why-did-you-render.tsx"],
      "@ss/settings": ["libs/settings/src/index.ts"],
      "@ss/state/admin": ["libs/state/admin/src/index.ts"],
      "@ss/state/common": ["libs/state/common/src/index.ts"],
      "@ss/state/crew": ["libs/state/crew/src/index.ts"],
      "@ss/state/foreman": ["libs/state/foreman/src/index.ts"],
      "@ss/testing": ["libs/testing/src/index.ts"],
      "@ss/testing/jest": ["libs/testing/src/jest.ts"],
      "@ss/testing/jest-e2e": ["libs/testing/src/jest-e2e.ts"],
      "@ss/testing/jest-node": ["libs/testing/src/jest-node.ts"],
      "@ss/theme": ["libs/theme/src/index.ts"],
      "@ss/theme/font": ["libs/theme/src/font.ts"],
      "@ss/time": ["libs/time/src/index.ts"],
      "@ss/timesheets": ["libs/timesheets/src/index.ts"],
      "@ss/timesheets/tests": ["libs/timesheets/src/test-helpers"],
      "@ss/utils": ["libs/utils/src/index.ts"],
      "@ss/vcs": ["libs/vcs/src/index.ts"]

Maybe I am doing something wrong there in trying to help with barrel files; any hints welcome.

Glad I am not the only one having such slow build times in dev; every time I change something, a lot of unchanged components are rebuilt anyway.

Any updates to try out now, even if they have bugs?

This is what I’m dealing with right now, using Tailwind, react-icons and Next.js 13.5.4; both libraries cause no issues with dev whatsoever on a Next.js 12 app (pages):

I can literally go make a cup of coffee every time I route to a new page; every compile job is taking an average of 60 seconds to resolve. I have also tried other versions without success.

Can this be related to react-icons/Tailwind rather than Next.js?

We are also facing the same issue; it is taking more than 30 seconds sometimes.

On my side, I also got incredibly slow renders just after switching from the pages router to the app router (on next@14.0.4).

Don’t know how it’s possible, but favicon.ico includes 3,244 modules…

Is it possible to have a more verbose log to find out which of the 4616 modules is slow?

The sluggishness is really bad.

@timneutkens it seems like the only solution here is Turbopack. Keen to understand what happens if some do not want to migrate to Turbopack? Why did it get slow with the app router?

I’m able to reproduce this with my setup.

That is great, so I’m assuming you’re going to share that? Otherwise we can’t investigate.

Hi @timneutkens, I was wondering if there’s a way to enable tree shaking during development instead of depending on modularizeImports to exclude unused modules.

Certain libraries lack a consistent import format, and it doesn’t seem feasible for Next to continually maintain an expanding list of defaults. Any insights on this?

I’m doing some research in that direction already, but I can’t say we can definitely do that. There are many difficulties in making it fully automatic, so for now we suggest configuring it manually until we have a better solution.

We have exactly the same issue. Navigation is super slow; from the moment we click the button that calls router.push, it takes up to 7-9 seconds the first time.

The problem with using Turbopack is that it still lacks some features, e.g. i18n, AFAIK, so it’s not a silver bullet solution here.

Changed the initial post in this issue to reflect my reply above in order to ensure people see it as the first thing when opening the issue. I’m going to close the duplicate issues reporting similar slowdowns in favor of this one.

I’ll need help from you all to ensure this thread doesn’t spiral into “it is slow” comments that are not actionable, e.g. comments without traces / a reproduction / further information. Thank you 🙏

Having the same issue here; in the Docker environment it’s come to a point where it’s almost unusable, and sometimes I even have to do a hard reload after waiting too long for navigation. This is the case both with the <Link> component from next/navigation and with router.push (the useRouter hook imported from next/navigation). We’re using Next.js 13.4.2.

Same here, it is almost not usable in Docker environments, but it is also very slow outside Docker; something is not working right. This is painfully slow.

@JakeSc

As far as we know, optimizePackageImports is supposed to solve this, but react-icons doesn’t use a traditional barrel file; it actually defines all icons in a single JS file (albeit per category), while /all-files exports separate files. So either we misunderstood how optimizePackageImports works, or it actually doesn’t work on the default react-icons package, which would be odd because according to the docs it is part of the default set of packages that get optimized.

Edit: for the record, it improved our dev server only; it did not make a difference in production build time.
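
For what it’s worth, one way to sidestep the per-category file entirely (assuming the separate @react-icons/all-files package is installed; this is an illustrative component, not something from this thread) is to import each icon from its own file:

// Hypothetical component: each icon in @react-icons/all-files lives in its own
// module, so only this one file is pulled into the dev compile.
import { FaBeer } from '@react-icons/all-files/fa/FaBeer';

export function Cheers() {
  return <FaBeer title="cheers" />;
}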

@timneutkens Thank you very much! Just like @fcristel I too have Bitdefender as my antivirus, and after adding the folder to the exceptions I went from around 18s compilation time to 7s. The navigation and updates are now near-instant.

I’m having some difficulty trying to understand this huge latency difference between custom infrastructure (Docker + K8s) and Vercel, without a clear pattern; the app’s files are tiny, yet the difference is still huge.

Maybe the requests/limits in the custom deployment are too low: https://medium.com/pipedrive-engineering/how-we-choked-our-kubernetes-nodejs-services-932acc8cc2be

Also, Vercel’s infrastructure sits behind a CDN; even without cached app responses, latency can be lower because the round trip is shorter.

Have you tried next dev --turbo on the latest version? Was it faster? If it did not work, what was the error?

index.tsx:926 Uncaught Error: ./node_modules/.pnpm/@smithy+node-http-handler@2.2.1/node_modules/@smithy/node-http-handler/dist-es/node-http2-connection-manager.js:1:0
Module not found: Can't resolve 'http2'
> 1 | import http2 from "http2";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^
  2 | import { NodeHttp2ConnectionPool } from "./node-http2-connection-pool";
  3 | export class NodeHttp2ConnectionManager {
  4 |     constructor(config) {

https://nextjs.org/docs/messages/module-not-found


    at Error: ./node_modules/.pnpm/ (smithy+node-http-handler@2.2.1/node_modules/@smithy/node-http-handler/dist-es/node-http2-connection-manager.js:1)
    at <unknown> (nextjs.org/docs/messages/module-not-found)
    at Object.getCompilationErrors (/Users/bookland/abg/dappling/node_modules/.pnpm/next@14.1.0_react-dom@18.2.0_react@18.2.0_sass@1.69.5/node_modules/next/dist/server/lib/router-utils/setup-dev-bundler.js:995:37)
    at DevBundlerService.getCompilationError (/Users/bookland/abg/dappling/node_modules/.pnpm/next@14.1.0_react-dom@18.2.0_react@18.2.0_sass@1.69.5/node_modules/next/dist/server/lib/dev-bundler-service.js:36:55)
    at DevServer.getCompilationError (/Users/bookland/abg/dappling/node_modules/.pnpm/next@14.1.0_react-dom@18.2.0_react@18.2.0_sass@1.69.5/node_modules/next/dist/server/dev/next-dev-server.js:585:42)
    at DevServer.findPageComponents (/Users/bookland/abg/dappling/node_modules/.pnpm/next@14.1.0_react-dom@18.2.0_react@18.2.0_sass@1.69.5/node_modules/next/dist/server/dev/next-dev-server.js:544:43)
    at async DevServer.renderErrorToResponseImpl (/Users/bookland/abg/dappling/node_modules/.pnpm/next@14.1.0_react-dom@18.2.0_react@18.2.0_sass@1.69.5/node_modules/next/dist/server/base-server.js:2063:26)
websocket.ts:37 [HMR] connected
client.js:25 ./node_modules/.pnpm/@smithy+node-http-handler@2.2.1/node_modules/@smithy/node-http-handler/dist-es/node-http2-connection-manager.js:1:0
Module not found: Can't resolve 'http2'
> 1 | import http2 from "http2";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^
  2 | import { NodeHttp2ConnectionPool } from "./node-http2-connection-pool";
  3 | export class NodeHttp2ConnectionManager {
  4 |     constructor(config) {

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@smithy+node-http-handler@2.2.1/node_modules/@smithy/node-http-handler/dist-es/node-http2-handler.js:3:0
Module not found: Can't resolve 'http2'
  1 | import { HttpResponse } from "@smithy/protocol-http";
  2 | import { buildQueryString } from "@smithy/querystring-builder";
> 3 | import { constants } from "http2";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  4 | import { getTransformedHeaders } from "./get-transformed-headers";
  5 | import { NodeHttp2ConnectionManager } from "./node-http2-connection-manager";
  6 | import { writeRequestBody } from "./write-request-body";

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@aws-sdk+token-providers@3.470.0/node_modules/@aws-sdk/token-providers/dist-es/writeSSOTokenToFile.js:2:0
Module not found: Can't resolve 'fs'
  1 | import { getSSOTokenFilepath } from "@smithy/shared-ini-file-loader";
> 2 | import { promises as fsPromises } from "fs";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  3 | const { writeFile } = fsPromises;
  4 | export const writeSSOTokenToFile = (id, ssoToken) => {
  5 |     const tokenFilepath = getSSOTokenFilepath(id);

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@smithy+shared-ini-file-loader@2.2.7/node_modules/@smithy/shared-ini-file-loader/dist-es/slurpFile.js:1:0
Module not found: Can't resolve 'fs'
> 1 | import { promises as fsPromises } from "fs";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  2 | const { readFile } = fsPromises;
  3 | const filePromisesHash = {};
  4 | export const slurpFile = (path, options) => {

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@smithy+util-body-length-node@2.1.0/node_modules/@smithy/util-body-length-node/dist-es/calculateBodyLength.js:1:0
Module not found: Can't resolve 'fs'
> 1 | import { fstatSync, lstatSync } from "fs";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  2 | export const calculateBodyLength = (body) => {
  3 |     if (!body) {
  4 |         return 0;

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@aws-sdk+credential-provider-web-identity@3.468.0/node_modules/@aws-sdk/credential-provider-web-identity/dist-es/fromTokenFile.js:2:0
Module not found: Can't resolve 'fs'
  1 | import { CredentialsProviderError } from "@smithy/property-provider";
> 2 | import { readFileSync } from "fs";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  3 | import { fromWebToken } from "./fromWebToken";
  4 | const ENV_TOKEN_FILE = "AWS_WEB_IDENTITY_TOKEN_FILE";
  5 | const ENV_ROLE_ARN = "AWS_ROLE_ARN";

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@smithy+shared-ini-file-loader@2.2.7/node_modules/@smithy/shared-ini-file-loader/dist-es/getSSOTokenFromFile.js:1:0
Module not found: Can't resolve 'fs'
> 1 | import { promises as fsPromises } from "fs";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  2 | import { getSSOTokenFilepath } from "./getSSOTokenFilepath";
  3 | const { readFile } = fsPromises;
  4 | export const getSSOTokenFromFile = async (id) => {

https://nextjs.org/docs/messages/module-not-found
client.js:25 ./node_modules/.pnpm/@aws-sdk+credential-provider-process@3.468.0/node_modules/@aws-sdk/credential-provider-process/dist-es/resolveProcessCredentials.js:2:0
Module not found: Can't resolve 'child_process'
  1 | import { CredentialsProviderError } from "@smithy/property-provider";
> 2 | import { exec } from "child_process";
    | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  3 | import { promisify } from "util";
  4 | import { getValidatedProcessCredentials } from "./getValidatedProcessCredentials";
  5 | export const resolveProcessCredentials = async (profileName, profiles) => {

https://nextjs.org/docs/messages/module-not-found

@leerob is there a solution to this issue, or will it be a case of waiting for Turbopack?

We are running next@canary and used modularizeImports for our icon imports, which helped a lot, but we still have a painfully slow dev environment. Would appreciate your help in understanding why!

cpu profiles: https://gist.github.com/vigneshka/9eff1b8b54e0139d8149114ba10ac9f4 https://gist.github.com/vigneshka/b35bd82cfef0f54b21e2a9584b4226ab

next trace and config: https://gist.github.com/vigneshka/1aaf1b7082b485acc6437955675f6e28

@timneutkens, you mentioned you are using Jaeger to visualize the trace. We would also be eager to learn how to do that. I have the Jaeger UI set up locally, but am not able to upload .next/trace.

I saw a huge gain in performance since 13.4.8 but I will throw in my trace as well if it’s still needed: https://gist.github.com/Livog/302ea9a3c78552c093e9758bed5bfa68

I went from 2min -> ~9sec, and can’t really tell why but I love it.

@aleciavogel I looked at my trace file and figured out a few pain points; maybe it helps you or anyone else here. It decreased compile times for me from unusable to now faster than before, including the preloading Next.js does for link tags.

"@heroicons/react/24/solid": { transform: "@heroicons/react/24/solid/{{member}}", }, "date-fns": { transform: "date-fns/{{member}}", }, "react-use": { transform: "react-use/esm/{{member}}", },

The interesting thing is that date-fns in particular was taking like 14 seconds alone to compile, and all I used from it was format(). While I suspected react-icons to be the culprit, it wasn’t even close.

Device Info: Macbook 14inch m1 pro

I created a new project with Docker. I can see it compiling /page in the command line, but I waited a long time in the browser with no response.


I created a clean page. It compiled in 234 ms, but the browser took 1.3 minutes to respond.


FROM node:18-alpine

WORKDIR /frontend

COPY ./frontend/package.json .

RUN yarn install
RUN yarn add next@canary

COPY ./frontend .

CMD ["yarn", "dev"]
frontend:
    build:
      context: ./src/
      dockerfile: Dockerfile.node18
    container_name: ${FRONTEND_CONTAINER_NAME}
    restart: unless-stopped
    links:
      - webserver
    volumes:
      - ./src/frontend:/frontend
      - node_modules-data:/frontend/node_modules

    ports:
      - '${FRONTEND_HTTP_PORT}:3000'

This is happening to me. I have tab menus, and every time I navigate to a new tab/menu it takes a while, sometimes even longer. This is a really bad development experience in the app dir; it’s my first time using the app dir. Look at the image: 15s!


I’m going to keep this issue actionable comments only, so please do not pile on with comments that do not include traces / performance profiles.

You can find instructions on how to provide traces here: https://github.com/vercel/next.js/issues/48748#issuecomment-1578374105 @feedthejim is working on adding a flag to also output CPU profiles, which will allow narrowing down potential causes further.

Thank you for addressing this @timneutkens!

However, the issue persists:

[next] - ready started server on 0.0.0.0:3000, url: http://localhost:3000
[next] - info Loaded env from /home/jeengbe/dsh/digitaler-schulhof/.env
[next] - warn You have enabled experimental feature (serverActions) in next.config.js.
[next] - warn Experimental features are not covered by semver, and may cause unexpected or broken application behavior. Use at your own risk.
// Initial load: (twice?)
[next] - event compiled client and server successfully in 178 ms (20 modules)
[next] - wait compiling...
[next] - event compiled client and server successfully in 101 ms (20 modules)
// Opening the page directly:
[next] - wait compiling /(schulhof)/Schulhof/(login)/Anmeldung/page (client and server)...
[next] - event compiled client and server successfully in 7.8s (1694 modules)
//                                                        ^^^^
[next] Warning: Each child in a list should have a unique "key" prop. See https://reactjs.org/link/warning-keys for more information.
// Making a change:
[next] - wait compiling...
[next] - event compiled client and server successfully in 958 ms (1684 modules)
// Making a change:
[next] - wait compiling...
[next] - event compiled client and server successfully in 524 ms (1684 modules)

Next Info:

    Operating System:
      Platform: linux
      Arch: x64
      Version: #1 SMP Fri Jan 27 02:56:13 UTC 2023
    Binaries:
      Node: 18.13.0
      npm: 8.19.3
      Yarn: 1.22.18
      pnpm: 8.6.0
    Relevant packages:
      next: 13.4.5-canary.6
      eslint-config-next: 13.4.4
      react: 18.2.0
      react-dom: 18.2.0
      typescript: 5.1.3

Trace: https://gist.github.com/jeengbe/f6b1bf54c04ab05bb40227c11fd90c7e

The issue can be replicated with the following repo: DigitalerSchulhof/digitaler-schulhof@82158e8 -> Copy .env.example -> pnpm dev -> Head to https://localhost/Schulhof/Anmeldung

@AhmedBaset correct, it’s an internal tool we’ve been building that we’ll be sharing at a later point, it’s not fully ready yet, but useful for us 🙂

Hey @timneutkens

Is turbo-trace-viewer.vercel.app a Vercel-internal? I can’t find how to upload a trace file

Thanks in advance

@iamgp could you please run Turbopack with NEXT_TURBOPACK_TRACING=1 (i.e. NEXT_TURBOPACK_TRACING=1 npm run dev, making sure the dev script includes --turbo in package.json) and share the .next/trace.log? Then we can take a look at the individual hot reloads for Turbopack and tell you why it’s slow.

Keep in mind that with App Router, if you’re changing Server Components, the page you’re on has to rerender, which includes any data fetching you’ve added. This can create the perception that the hot reload is slow, while the actual compilation is fast and it’s executing the code (because of the data fetches) that is slow.

On the questions others have around canary. The canary channel is somewhat comparable to nightly releases, they’re not marked as stable. We’re planning to release a stable version of Next.js with everything that is currently on canary in the next 1-2 weeks.

Yes, next@canary seems faster for me too, but I am worried about whether it is safe to use in prod. @timneutkens can you advise?

I have this issue as well, have tried node v18.17 and v20.10.0, on 14.2.0-canary.37 currently. Unable to generate a trace.

Cannot generate CPU profiling: Error [ERR_INSPECTOR_COMMAND]: Inspector error -32000: No recording profiles found
    at [onMessage] (node:inspector:95:29)
    at Connection.<anonymous> (node:inspector:69:56)
    at Session.post (node:inspector:141:28)
    at process.saveProfile (/Users/brent/projects/mk/socialshares/socialshares-ui/node_modules/next/dist/server/lib/cpu-profile.js:12:17)
    at process.emit (node:events:526:35)
    at process.exit (node:internal/process/per_thread:193:15)
    at /Users/brent/projects/mk/socialshares/socialshares-ui/node_modules/next/dist/server/lib/cpu-profile.js:20:21
    at [onMessage] (node:inspector:99:11)
    at Connection.<anonymous> (node:inspector:69:56)
    at Session.post (node:inspector:141:28) {
  code: 'ERR_INSPECTOR_COMMAND'
}

I have Bitdefender; after adding my project folder to the exceptions, the compile time went from 2.6s to 163ms.

@timneutkens You are amazing!!! Can’t thank you enough!

In my case, doing this:

experimental: {
	optimizePackageImports: ["@mantine/core", "@mantine/hooks"],
}

didn’t help. It broke the imports and I got a Mantine error.

I didn’t have Windows Defender, but I do have an antivirus: Bitdefender. I removed the working folder from the monitoring/analyzing thing that it does and now I can work again on this project! First execution time didn’t improve too much, but moving between links now goes lightning fast!

Again, thank you so much! I will take a closer look at the experimental thing also, maybe I can get it to work even faster. But for now I’m super happy with the result!

Have a great day!

@fcristel I had a look at your trace; it seems the majority of time is spent in @mantine/core and @mantine/hooks, probably because your filesystem is also very slow at reading files, judging by all the other read-file spans being slow. Can you try configuring these to be optimized so that they don’t import thousands of modules:

experimental: {
	optimizePackageImports: ["@mantine/core", "@mantine/hooks"],
}

Docs: https://nextjs.org/docs/app/api-reference/next-config-js/optimizePackageImports

@ericnation would you mind sharing the link to the article you are referring to?

ubuntu@xxx:~/git/Bailo/frontend$ NEXT_CPU_PROF=1 npm run dev -- --turbo

> dev
> next dev --turbo

   ▲ Next.js 14.1.2-canary.7 (turbo)
   - Local:        http://localhost:3000

 ✓ Ready in 2.2s
^CCannot generate CPU profiling: Error [ERR_INSPECTOR_COMMAND]: Inspector error -32000: No recording profiles found
    at new NodeError (node:internal/errors:405:5)
    at [onMessage] (node:inspector:94:29)
    at Connection.<anonymous> (node:inspector:68:56)
    at Session.post (node:inspector:140:28)
    at process.saveProfile (/home/ubuntu/git/Bailo/frontend/node_modules/next/dist/server/lib/cpu-profile.js:12:17)
    at process.emit (node:events:526:35)
    at process.exit (node:internal/process/per_thread:192:15)
    at /home/ubuntu/git/Bailo/frontend/node_modules/next/dist/server/lib/cpu-profile.js:20:21
    at [onMessage] (node:inspector:98:11)
    at Connection.<anonymous> (node:inspector:68:56) {
  code: 'ERR_INSPECTOR_COMMAND'
}

Any ideas why it seems like I can’t run the CPU profiler? Running with Node v18.17.1, npm v9.6.7 and Next v14.1.2-canary.7. The same also occurs when I omit --turbo.

@ericnation every CSS module is taking 10+ seconds to compile, I’m assuming you have postcss config (i.e. tailwind) and custom tailwind config? Can you share them?

CleanShot 2024-03-02 at 20 36 00@2x

My problem was fixed by deleting lucide-react’s dynamic component.

Can you submit a new issue detailing your exact issue using swcMinify: true so we can take a closer look?

@samcx I tried removing that line from my config to reproduce the old behavior and it didn’t immediately show the slow compile times. I’m thinking there was probably another issue that was resolved around the same time that I made the config change, but it could still be related. I’ll play around with it and if I can reliably reproduce I’ll create a new issue for it.

I’ve been struggling with extremely slow dev mode builds on version 14.0.2 but adding swcMinify: true to next.config.js made a huge difference. I haven’t experienced any regressions from this change, only the performance boost.

This is my full config for reference:

const nextConfig = {
  reactStrictMode: true,
  swcMinify: true,
  async headers() {
    return [
      {
        source: "/:path*",
        headers: advancedHeaders,
      },
    ];
  },
};

Same issue here: 13.5.6 works amazingly fast (2-5 seconds), but after upgrading to Next 14, dev compilation takes around 80-90 seconds.

After banging my head against the wall for some time, I can confirm that downgrading from Next 14.0.3 back to Next 13.5.6 reduced the load times from ~100s back down to sub-5s on first compile for the worst page offenders; most pages compile in under 1s. Thank god subsequent renders don’t require recompilation.

Appreciate the efforts from the Next team on this, and I would also appreciate easier logging to identify any modules taking a long time to compile. The .next/trace file apparently isn’t meant for human digestion.

Recently returned to Next.js to start a new project; the performance in dev is incredibly bad. No 100s for me, but 2-5s rebuilds make for awful DX out of the box, almost to the extent of wanting to just run a separate builder.

Here is the profiling for an upcoming product around trpc: https://gist.github.com/juliusmarminge/f0cbd590fa073ae409772d2a7a1331e7

I have had this issue for a while and finally found a fix for my use case. I get full Tailwind style names from the backend, and to do that you need to safelist the styles you want to use. I did something stupid and safelisted too much. I fixed this by being more precise about what I safelist.

I went from 17-second reloads to 400 ms.

By changing this:

safelist: [
    {
      pattern: /bg-/,
      variants: ["lg", "hover", "focus", "lg:hover"],
    },
    {
      pattern: /text/,
      variants: ["lg", "hover", "focus", "lg:hover"],
    },
]

To this:

 safelist: [
    {
      pattern: /bg-main/,
    },
    {
      pattern: /text-main/,
    },
]

I started facing this problem when migrating my old system to Next.

At first it was incredibly fast, even more so with the fast refresh. But after configuring the monorepo and adding the UI package that used Material-UI, the application started to show a lot of slowness in route transitions in dev mode.

It took more than 1 minute for each new page accessed.

My structure worked as follows:

A monorepo containing an app/web and a package/ui. This package contained several settings and components coming from Material-UI.

So I started breaking down my application and redoing it part by part. I was able to considerably improve application loading.

Changes I made:

  • changed from turbo to webpack
  • I stopped using tsup in package/ui and just included that package’s files in the tsconfig of app/web; this way I imported the files directly, without going through a pre-bundled package.
  • Transformed all imports from import { Button } from "mui-core" to import Button from "mui-core/Button"

I believe that what caused the slowness on the development server was the resolution of dependencies in package/ui. When using an external package in the monorepo, the server was resolving all of that package’s dependencies for every page, unnecessarily, since not all of the package was used on each page.
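For anyone who would rather keep the barrel-style imports in source code, the same rewrite can usually be automated with modularizeImports in next.config.js. A minimal sketch, assuming a package that ships one file per component ("mui-core" is just the placeholder name from the comment above):

// next.config.js: a sketch, not the exact setup described above.
// "mui-core" is the hypothetical package name; adjust the transform to
// however your package lays out its per-component files.
module.exports = {
  modularizeImports: {
    "mui-core": {
      // Rewrites `import { Button } from "mui-core"`
      // to `import Button from "mui-core/Button"`.
      transform: "mui-core/{{member}}",
    },
  },
};

With skipDefaultConversion left at its default, the named import is converted into a default import from the per-component path, so only the modules you actually use get compiled.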

@timneutkens

Here is my debugging. Unfortunately, I was not able to run my application. I’ve provided the error that I received.

https://gist.github.com/kanafghan/20793c80bddf0898c72072fc5b12e9e5

I’m developing on Windows 11 Pro using WSL2 with Ubuntu 22.04. Our workspace is Nx-based with more than 5k modules. After upgrading to Next.js 13.3.0, the DX has gotten extremely slow and we are required to kill the dev server often and restart it in order to test. We are using both Storybook and TailwindCSS.

It seems we’re back to posting “it’s slow”. Please follow the steps provided in the initial post of this issue to share the CPU profiles and traces. There is unfortunately nothing we can do with “slow for me too”.

@djaffer @mjyoung Please read the initial post in this issue. We can’t investigate/help with your particular case without the trace and cpu profiles.

Especially if you’re using material-ui you might just not know that you’re importing 11,000 (yes, eleven thousand) icons in your application. We use the trace to find what your application is doing that makes it slow to compile. In the majority of cases being posted now (since 13.4.8), it’s either that you’re using TailwindCSS with a wrong content configuration that ends up scanning the filesystem (which is outside of what we can do in Next.js), or that you’re importing libraries published in a way that causes you to import every possible file in the package (i.e. material-ui, react-icons, etc.). For that case we added some default configs for the libraries that were doing the most harm to compile times, including material-ui.

@HakkaOfDev I don’t see anything that stands out to me in the cpuprofile, looks like just a large amount of modules being parsed. Turbopack (when stable) will help a lot with that case.

Same here. On top of that (which slows it down even more), hot refresh falls back to a full reload more and more often in big applications.

@timneutkens

Here are the gists:

I followed the guide as closely as possible. However, after reverting all my packages back to 13.4.7, I encountered an issue where I got 2 CPU profiles and couldn’t prevent it from starting twice.

I hope this helps. Please let me know if there’s anything else you need from me.

@Livog I’d love to get both 13.4.7 and 13.4.8 for your application as I’m curious why it was 2 minutes before.

No solution for React Icons then?

modularizeImports: {
  "react-icons": {
    transform: "react-icons/{{member}}",
  },
}

Just reading the docs, seems like this should work?

It doesn’t do a thing, actually. The production output still includes the complete package at 5 MB.

React Icons is not bundled correctly. You need to use a fork of it. I use @sukka/react-icons-all-files with this modularize config:

 "react-icons/?(((\\w*)?/?)*)": {
      transform: "@sukka/react-icons-all-files/{{ matches.[1] }}/{{ member }}",
      skipDefaultConversion: true,
 },

Sadly I think the latest work here https://github.com/vercel/next.js/pull/50900 (edit: actually it was done in https://github.com/vercel/next.js/pull/52031) about the support for default modularizeImports for common libraries broke our setup.

The error message is the following:

Module not found: Can't resolve 'antd/es/ConfigProvider'
> 1 | import { ConfigProvider as AntConfigProvider, theme } from "antd";

It seems the transformation makes it import from antd/es/ConfigProvider, but that export really only exists in antd.

What’s even more worrying is that these modularizeImports are applied after the user defined modularizeImports, so if I’m not mistaken, there’s no way for me to override these?

Please let me know if there’s any workaround / fix and I’ll then share the trace files, thanks!

Just trying to contribute for the people trying to get rid of slow development here: I was using next-auth and we were using getServerSession a lot in our components. This was having a massive performance impact in dev mode, because the session token was being created up to 70 times for each request. Removing this function and handling the session in a middleware greatly improved both build and runtime in dev mode.

That said, in our experience the app sometimes randomly becomes super slow, taking up to 40 seconds to load a simple page. So this is not a proposed solution, just a tip to make things a little faster while this issue is not yet marked as completed.
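For reference, a minimal sketch of the middleware approach described above, assuming next-auth v4; the matcher paths are hypothetical and should be replaced with the routes that actually need a session:

// middleware.js: a sketch, assuming next-auth v4.
// next-auth ships a default middleware that validates the session token once
// per matched request, instead of every component calling getServerSession().
export { default } from "next-auth/middleware";

export const config = {
  // Hypothetical matcher: only guard routes that actually require a session.
  matcher: ["/dashboard/:path*", "/settings/:path*"],
};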

@Nabwinsaud @AdrianKBL @yurimutti I’m assuming you didn’t see my earlier post that shows how you can help investigate the slowdowns in your application, can you follow the steps provided here to send us a CPU profile and trace file? Thanks in advance 🙏 https://github.com/vercel/next.js/issues/48748#issue-1680013792

@ctkc Can you provide the CPU profile too? (Using NEXT_CPU_PROF=1 npm run dev; replace npm run dev with your specific dev command.)

@tangzijun I’ve opened #52031 with the config for antd you shared.

@VanTanev @SuttonJack @kavinvalli please try the latest canary, if you’re able to I’d love to see the traces / CPU profile with that version 🙏

@DenisBessa We’ve added a default config for modularizeImports

@alexander-akait what’s the latest on disabling the schema validation? It’s a small win (50-100ms) but does affect all applications that have CSS currently.

@TheLarkInn Thanks, will do, we’re still investigating further on if all caching is being applied and such.

@Meriegg We’d like to investigate what you’re reporting, though that is separate from this issue, would you be able to create an issue with a reproduction (could be the application you were building) in order for us to look into it? Thanks 🙏

@Jacob-Daniel Are you able to provide an application we can run? That way we can investigate what happens 🙏 This doesn’t sound related to this particular issue so maybe you can create a new issue and link this one.

Same issue: wait compiling /accounts/page (client and server)… and it does not open, or takes too long.


Same here, and in a Docker environment it’s even worse; it seems like it’s processing the same files over and over without caching them.

Yes, in development navigating to another route takes longer and makes my system slow too. Route navigation is the main place where performance gets slower.

@mmahalwy this is what we use:

docker run -d --name jaeger \
  -e COLLECTOR_ZIPKIN_HOST_PORT=:9411 \
  -p 5775:5775/udp \
  -p 6831:6831/udp \
  -p 6832:6832/udp \
  -p 5778:5778 \
  -p 16686:16686 \
  -p 14268:14268 \
  -p 14250:14250 \
  -p 9411:9411 \
  jaegertracing/all-in-one:1.25

Here is our trace: https://gist.github.com/VanTanev/05183d70c6e54aefc704cb424b3597f9

We’re seeing 11 seconds initial build, and 3-5 seconds first rebuild.

From what I can see (dunno if it’s helping), the live reload is pretty quick. If I add a console.log('coucou') in a page.tsx file, this is the output.

But if I decide to move to another page that I hadn’t visited before in my dev working session, or without refreshing my web page, it’s huge; sometimes I have to wait 20 seconds…

Same experience as @JunkyDeLuxe on a M1 Pro but not using Docker.

Trace

Btw webpack lazy building cold is faster than turbopack 🙂 by far

Yeah, same for me. I used to develop remotely inside our k8s cluster, but dev --turbo is super slow inside a container and causes my health check endpoint to SIGKILL it regularly.

The whole app router is super slow when containerized in Dev mode.

It works perfectly fine when I run both on my local machine and connect it via reverse proxy. This way it’s faster than the old setup (which was not significantly faster before) and takes advantage of preloading pages via next/link. I see inconsistencies in caching too where it’s a mix of instant navigation or long builds (around 3.5k modules for some things) around 2-10 sec.

Also, there is this weird thing happening where a page compiles just fine and then later grinds to a halt, stuck waiting for compilation forever until the pod is crashed.

For Tailwind CSS users: I’ve been able to find a pretty large slowdown with Tailwind CSS in general when content is misconfigured. This is not Next.js specific and would cause problems in any application, but since a lot of the reports nowadays are around Tailwind taking a long time to process I thought it’d be worth sharing here.

Worth having a look at this thread: https://twitter.com/timneutkens/status/1783851267237781574.
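The usual fix is to make the content globs point only at your own source files. A minimal sketch of a narrowly scoped configuration (the directory names are assumptions; adjust them to your project layout):

// tailwind.config.js: a sketch of a narrowly scoped `content` setting.
// Overly broad globs such as "./**/*.{js,ts,jsx,tsx}" also match node_modules
// and .next, which forces Tailwind to rescan thousands of files on every change.
module.exports = {
  content: [
    "./app/**/*.{js,ts,jsx,tsx,mdx}",
    "./pages/**/*.{js,ts,jsx,tsx,mdx}",
    "./components/**/*.{js,ts,jsx,tsx,mdx}",
  ],
  theme: {
    extend: {},
  },
  plugins: [],
};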

CleanShot 2024-04-26 at 10 32 22@2x

@MarcusHSmith seems you’ve added postcss customization and it’s taking forever to run that.

@millsoft please see the initial post of this issue, it outlines exactly how you can provide information that is critical for us to do any type of investigation if you don’t want to share the codebase: https://github.com/vercel/next.js/issues/48748#issue-1680013792.

For some reason my trace.log is 209 MB in size. I don’t think it’s supposed to be that large.

That’s expected, it holds much more data than the .next/trace including memory usage information and all data on individual function calls (hence why it’s very helpful without needing all your code).

I’m getting slow hot reloads on just a default nextjs app router installation, using 14.2.0-canary.56, with or without turbopack.

Honestly, I am new to app router, but it is making me wonder whether I should be even considering it at this stage.

Gist can be found here: https://gist.github.com/iamgp/e40d0ce69d6a61de5bec3644bb3c4c4e

Alright, finally, after upgrading to next@canary I’m seeing much, much better loading times; DX is insanely more snappy again! I had to adjust the tailwind and postcss files as there were some errors with them, but other than that it’s pretty good. My final concern: is next@canary safe in production, or should I only use it in dev and downgrade when I need to deploy?

I have a similar issue. I am using vanilla-extract. The initial compile time is around 250 seconds.

@timneutkens is canary safe enough to use for production environment?

@RemyJouni had a look at yours; there’s a bunch of cases of slow filesystem access for you as well. Surprisingly (yet unsurprisingly, tbh), both you and @fcristel have Windows paths in the trace. Maybe you have Windows Defender enabled? It intercepts each file read, slowing down filesystem access.

Notable ones:

│  │  │  │  └─ module framer-motion (framer-motion\dist\es\index.mjs + 287) 49 ms (self 13s) [read-resource 13s, next-swc-loader 3.4 ms]
 module rxjs (rxjs\dist\esm5\index.js + 222) 19 ms (self 11s) [read-resource 10s, next-swc-loader 2.3 ms]

Sorry to disturb you. After investigation, we found that our Tailwind CSS content setting was incorrect, which led to slow compilation. Thank you for your answers on the performance issues!

Hi! I’m using Tailwind CSS too. What do you mean by the content setting of Tailwind CSS being incorrect? Thanks!

@ericnation would you mind sharing the link to the article you are referring to?

Sure thing. https://nystudio107.com/blog/speeding-up-tailwind-css-builds

I’m having some difficulty trying to understand this huge latency difference between a custom infrastructure (Docker + K8s) and Vercel, without a clear pattern; the files are small and yet the difference is still there.

Maybe the requests/limits in the custom deployment are too low - https://medium.com/pipedrive-engineering/how-we-choked-our-kubernetes-nodejs-services-932acc8cc2be

Also, Vercel’s infrastructure sits behind a CDN; even without cached app responses, latency can be lower because the round-trip is shorter.

Thank you so much @SuperOleg39! That was the issue: my initial resources were too small for a Node.js application. It is still slower than the Vercel infrastructure, but as you said, that is because of the CDN. Very satisfied with the improvement! 🙏

Hi Tim,

Dropping our gist here for analysis. https://gist.github.com/ericnation/347b406f96f046dc3bce856cca8955b8

I’ve been working at trying to solve this project’s slowness for days now. Driving me a bit crazy at this point 😅 Appreciate your help 🙏

I’ll have a look into that, thanks @RemyJouni!

@RemyJouni were you running Turbopack with next@canary or on 14.1.0? If you weren’t on canary can you try with next@canary (npm install next@canary) to verify if we haven’t fixed an underlying issue already.

The problem is also present in the latest canary version.


@ricsands2801 can you be more specific?

  • What are you seeing locally? Is there a minimal reproduction of your issue? How many modules are being reloaded?

  • Have you tried next dev --turbo on the latest version? Was it faster? If it did not work, what was the error?

The compilation time for my landing page has significantly decreased from 19s to just 5s. Other pages that typically take around 5 seconds to compile are now taking less than one second which is great news.

However, I encountered an error related to Sanity.io, which is a bit strange since I’m following their documentation thoroughly and the project is working just fine.

 ⨯ TypeError: __TURBOPACK__imported__module__$5b$project$5d2f$node_modules$2f40$sanity$2f$client$2f$dist$2f$index$2e$cjs$2e$js__$5b$app$2d$ssr$5d$__$28$ecmascript$29$__.createClient is not a function
    at Module.r (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\node_modules_16650a._.js:19772:182)
    at Module.t (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\node_modules_16650a._.js:20553:278)
    at D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5624:202
    at [project]/src/app/lib/sanity.ts [app-ssr] (ecmascript) (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5642:3)
    at instantiateModule (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:488:23)
    at getOrInstantiateModuleFromParent (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:539:12)
    at esmImport (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:113:20)
    at D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5652:131
    at [project]/src/app/[locale]/blog/ArticleCard.tsx [app-ssr] (ecmascript) (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5724:3)
    at instantiateModule (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:488:23)
    at getOrInstantiateModuleFromParent (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:539:12)
    at esmImport (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:113:20)
    at D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5803:152
    at [project]/src/components/index.ts [app-ssr] (ecmascript) {module evaluation} (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5807:3)
    at instantiateModule (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:488:23)
    at getOrInstantiateModuleFromParent (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:539:12)
    at esmImport (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:113:20)
    at D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:5994:158
    at [project]/src/components/NavLogin.tsx [app-ssr] (ecmascript) (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\src_13c790._.js:6133:3)
    at instantiateModule (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:488:23)
    at getOrInstantiateModuleFromParent (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:539:12)
    at commonJsRequire (D:\People\Tarek\Gulf Picasso\gulf-picasso-avatar\.next\server\chunks\[turbopack]_runtime.js:127:20)

I am facing this issue a lot. I recently migrated my considerably large project from the Pages Router to the App Router, and the Pages Router is much faster than the App Router. And yes, I am using the react-icons package for my icons. How do I overcome this? Changes in files are also not reflected quickly; sometimes they are, but sometimes the page simply shows a white background and we have to refresh the page to see the changes. Each page compilation is taking around 9-10s. Are there any best practices to follow or any configuration needed? I am using 14.1.1-canary.46 for testing, but that is also slow.

I have removed react-icons from the project and the compile time decreased from 8 seconds to 1-2 seconds (varying). It might help people gain a bit of speed if they have fewer icons in their project.

This issue is very easily reproducible using create-next-app@latest.

Regardless of the settings used when creating, I get extremely slow performance when running in Docker, it takes 20-30s to compile the “home” page of the demo app. The tailwind/dynamic router version takes 30s, the non-tailwind/non-dynamic router version takes 18 seconds. Running the same things locally only takes 1-2 seconds.

I am on a Mac M1, with a Docker setup that has no problem with other React/Vite applications.

Here is the Dockerfile:

FROM node:20

WORKDIR /usr/src

COPY package.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD npm run dev

Same issue here: 13.5.6 works amazingly fast (2-5 seconds), but after upgrading to Next 14, dev compilation takes around 80-90 seconds.

Yes, you are correct! I think ver 14.x has many problems with compiling code.

Hovering a Link component causes an unnecessary useParams hook re-render: https://github.com/vercel/next.js/issues/58788

@timneutkens is anybody on your team still looking at the supplied trace files?

I’ve stumbled upon this issue recently and noticed that my dev server suddenly got slow, with memory usage exploding until Node broke.

After a day of looking into this, I noticed it started occurring after using npm link to point one of my dependencies to a local repo.

I’m running this via WSL on Windows and suspect some problems with symlinks being tracked incorrectly.

Tested with npm, pnpm and bun, and all had the same issue of hanging for 70-90 seconds on dev builds.

I’m currently unable to use a locally linked dependency via npm link without this wait time.

If you want more info I can spend some time narrowing down the use case; if useful, I can set up some minimal reproduction repositories.

@jeremypress had a look at your CPU profile, it seems you have dd-trace in the process and that is adding massive overhead spawning sub-processes. I.e.:

CleanShot 2023-09-22 at 11 46 36@2x

Besides that not seeing anything particularly out of the ordinary, you’ll likely get a big improvement when turbopack is ready.

@juliusmarminge same for your case, it seems a formatter was run on the cpuprofile that makes it unable to be parsed, it also seems to be missing a significant amount of data.

Oh no, I just did a CMD+A CMD+C on the .cpuprofile file that got generated. Yes, VS Code did do some parsing, but I didn’t expect that to be included in the copied text…

I’ll redo it!

@timneutkens Hey, thank you for the work on this issue. I tried updating to next@canary to see if it resolves the slowness, but I’m experiencing an issue with it not compiling my code at all. I’ve attached a file.

I updated to canary.32, and downgraded to canary.30; neither works 🤷

Help?


https://github.com/FreeDrifter/next-demo/blob/main/package.json#L24

Repeat of earlier messages: Upgrade your application. 13.4.8 has big improvements but ideally upgrade to the latest version.

Hello @timneutkens, I uploaded my trace files again. I updated to the latest canary version, but we are still experiencing dev server crashes and slowdowns. Could you please take a look? Thanks

https://gist.github.com/Gr33nLight/750b3fee0cc02236fd7e357d52ff49cc

@timneutkens what would you like me to share?

Yesterday, I discovered that after eliminating a useEffect() in a component that was causing infinite rerenders, the dev server behaved without any issues. So I’m no longer able to reproduce the issue described here.

I’ve encountered this issue when loading the app from the VSCode terminal on WSL2. However, when I run the app directly from the WSL2 terminal, it operates much faster.

I want to try and contribute; I hope it can help someone. Sometimes what makes Next.js slow in dev, in my case, is adding a custom local font using @font-face like this:

@font-face {
  font-family: 'Kanakira-BoldInktrap';
  src: url('/fonts/Kanakira/Kanakira-BoldInktrap.ttf') format('truetype');
}

Well, I do that because for some reason I can’t add it like in the Next.js docs, like this:

import localFont from 'next/font/local';

// Font files can be colocated inside of `pages`
const kanakira = localFont({
  src: '../public/fonts/Kanakira/Kanakira-BoldInktrap.ttf',
  variable: '--font-Kanakira-BoldInktrap'
});

// In the component:
return <main className={`${kanakira.className}`}> ... </main>;

// e.g. in tailwind.config.js (theme.extend):
fontFamily: {
  kanakiraBold: ['var(--font-Kanakira-BoldInktrap)'],
}

Instead, what is working for me is just adding this:

function MyApp({ Component, pageProps }: AppProps) {
  return (
    <main className={`${kanakira.className}`}>
      <style jsx global>{`
        :root {
          --font-Kanakira-BoldInktrap: ${kanakira.style.fontFamily};
        }
      `}</style>

      <Component {...pageProps} />
    </main>
  );
}

Anyway, using a local font this way is indeed necessary for Next.js to have better performance.

It goes from 3 GB of RAM to 1.9 GB for me.

@timneutkens, you mentioned you are using Jaeger to visualize the trace. We would also be eager to learn how to do that. We have the Jaeger UI set up locally, but are not able to upload .next/trace.

@vigneshka this might be too late for you but hopefully someone else finds it useful.

I was able to use Jaeger to visualise the trace by running the script found at canary/scripts/send-trace-to-jaeger, i.e cargo run project-name/.next/trace.

This sends each of the traces to Jaeger via Zipkin and should output the trace location with something like http://127.0.0.1:16686/trace/740938fb6baf7f87

Note that Jaeger must be running with a Zipkin port exposed, docker run --name jaeger -p 16686:16686 -p 9411:9411 -e COLLECTOR_ZIPKIN_HOST_PORT=9411 jaegertracing/all-in-one:latest

@kanafghan Try to update next?

Hello @timneutkens, we are also experiencing huge slowdowns in development mode, mainly the dev server running out of memory with messages like:

  • warn The server is running out of memory, restarting to free up memory
  • error Error: socket hang up

Here is our CPU profiles and next config https://gist.github.com/Gr33nLight/1fcf6dd06deb6a14235abd62f28da2b4

@timneutkens I can add you to the project if you want. I am currently experiencing serious slowness in development.

Just want to say @timneutkens the latest 13.4.8 made a huge difference in improving speed. Great work y’all!

@g12i looking at these there’s unfortunately nothing we can do without getting full source code access. Looking at the CPU profile it seems it’s caused by fetching many API routes in sequence which causes the compiler to stall until all routes are compiled.

@baristikir you’ll want to look at your Tailwind config; it takes 1 minute and 46 seconds to run Tailwind.

CleanShot 2023-07-06 at 16 52 40@2x

@Livog looking at the cpuprofile the reason Babel is running is that you’ve customized the webpack configuration to run svgr/core, I’m assuming to import svgs and convert them to React components. Looking at the cpuprofile it seems to amount to something like 1-2 seconds on a cold compile. There’s not much we can do in this case as it’s custom configuration that is not part of Next.js.
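For context, the kind of customization being referred to typically looks like the sketch below (a common community pattern using @svgr/webpack, not necessarily the exact config in that project):

// next.config.js: a sketch of a custom webpack rule that converts imported
// .svg files into React components via @svgr/webpack. These extra loaders run
// on every matching import, which adds to cold compile time.
module.exports = {
  webpack(config) {
    config.module.rules.push({
      test: /\.svg$/i,
      issuer: /\.[jt]sx?$/,
      use: ["@svgr/webpack"],
    });
    return config;
  },
};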

@altechzilla can you open an issue with a reproduction then we can investigate that 👍 This issue is for tracking down the slowdowns.

Will do, thanks!

@timneutkens - Can we get heroicons added to the default modularizeImports list?

As tailwind is recommended these days, I imagine a lot of people will be using heroicons, as it’s made by the makers of tailwind!

https://heroicons.com/

@timneutkens

Here’s a gist of the trace files after trying again on latest canary (13.4.8-canary.13)

Regarding this https://github.com/webpack/webpack/pull/17343, schema-utils gets two updates:

  1. a weak compiled-schema cache, tested locally; it speeds up dev builds well (but needs more feedback)
  2. the ability to disable validation entirely; the API is ready (https://github.com/webpack/schema-utils/releases/tag/v4.2.0), we only need to add an option to webpack (I think I will do it soon, this week)

Saving edits causes Chrome to become unresponsive. I have to open a new browser window to see changes. The terminal log outputs 'compiled successfully'.

Tailwind wireframe project, no other dependencies. 13.4.7-canary.1 Linux OS Trace

Thanks

Using turbopack is a game changer, 10-30s compile times down to 0.5s!

For those who want to try it, edit package.json:

"scripts": {
   ...
    "dev": "next dev",
   ...
}

to

"scripts": {
    ...
    "dev": "next dev --turbo",
    ...
}

Thank you, this is really helpful, you saved my day. In my case, it’s down to 1-2s. That’s much better than 15-30s!

@timneutkens this issue has a very good repro: #51201. It all seemed to start at 13.2.5-canary.26. That issue also relates to 2 others with the same conclusion.

Btw webpack lazy building cold is faster than turbopack 🙂 by far

Yes! I’m surprised this is not more prevalent as an issue atm; unless turbo will somehow fix all of this in 13.5 and they’re waiting to address it.

What configs do you have for the faster webpack builds? I’ve tried quite a bit and can’t lower my build time by much. I need a temporary fix for this ASAP 😦