tailwindcss: Tailwind CLI slowdown / memory leak

What version of Tailwind CSS are you using?

v3.0.22

What build tool (or framework if it abstracts the build tool) are you using?

None

What version of Node.js are you using?

v17.0.1

What browser are you using?

N/A

What operating system are you using?

Windows 10

Reproduction URL

https://github.com/FallDownTheSystem/tailwind-cli-slowdown

Describe your issue

Saving a file in the root folder triggers a rebuild by the Tailwind CLI watcher; if another save happens while that rebuild is still in progress, I think some kind of memory leak occurs.

The reproduction requires saving a file very rapidly to showcase the problem, but on larger projects it can happen naturally, since the build times are longer to begin with.

I’ll paste the reproduction steps and explanation I added to the README.md of the minimal reproduction demo here. I’ve also attached a video that showcases the behavior.

https://github.com/FallDownTheSystem/tailwind-cli-slowdown

  1. npm install
  2. npm run watch
  3. Spam-save ./folder/nonroot.aspx or ./folder/nonroot2.aspx (on Windows you can hold down Ctrl+S to save the file rapidly)
  4. Spam-save ./root.aspx for a long while
  5. Try to spam-save one of the nonroot.aspx files again

The CLI now gets “stuck”: it adds rebuild steps to the promise chain faster than it can process them, making the chain longer and longer. Once you stop spamming saves, the chain unwinds and all the rebuilds complete. But from then on, each save causes the process to allocate a larger chunk of memory than it did originally.
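To make the pattern concrete, here is a minimal sketch of what I believe is happening. This is not Tailwind’s actual source: chokidar is only my assumption for the watcher, and chain, pending and rebuild are illustrative names.

// Minimal sketch of the pattern described above, not Tailwind's actual code.
// Assumes chokidar as the file watcher; chain, pending and rebuild are made up.
const chokidar = require('chokidar');

let chain = Promise.resolve();
let pending = 0;

async function rebuild(file) {
  // Stand-in for the real, expensive Tailwind rebuild.
  console.log(`rebuilding after change to ${file} (queued steps: ${pending})`);
  await new Promise((resolve) => setTimeout(resolve, 500));
}

chokidar.watch('.').on('change', (file) => {
  pending++;
  // Every change event appends another step to the same chain. If saves arrive
  // faster than rebuilds finish, the chain (and everything each queued step
  // closes over) keeps growing until the saves stop and it can unwind.
  chain = chain.then(() => rebuild(file)).finally(() => pending--);
});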

This is even more evident if you spam-save the tailwind.config.js file. That rebuild takes even longer and seems to reserve much more memory.

After a while, the memory is released, but subsequent saves of the nonroot.aspx files cause much larger chunks of memory to be allocated, and the build times have increased by an order of magnitude.

At the extreme, this will lead to an out-of-memory exception and the node process will crash.

This bug seems to only happen when you edit one of the files in the root folder. It is more evident on larger projects, where the build times are longer to begin with and the memory ‘leak’ therefore becomes apparent faster.

This is harder to reproduce, but from experience I would argue that this memory ‘leak’ often happens when you save a file while a rebuild is still in progress. In a larger project, my watcher node process crashes several times a day due to out-of-memory exceptions.

The repository also includes a modified-cli.js that I used for debugging. The modified Tailwind CLI adds logging for when the watcher runs the on-change handler and when the promise chain grows or shrinks.
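The patch is small; roughly, it just wraps the change handler and the chain bookkeeping in console.log calls, along these lines (a paraphrase, not the verbatim diff - see modified-cli.js in the repo for the real thing):

// Rough shape of the added logging; chain and rebuild are the same
// illustrative names as in the sketch above, not Tailwind's real identifiers.
let chain = Promise.resolve();
let chainLength = 0;

function onChange(file, rebuild) {
  console.log(`[watcher] change handler fired for ${file}`);
  chainLength++;
  console.log(`[watcher] chain length is now ${chainLength}`);
  chain = chain.then(() => rebuild(file)).finally(() => {
    chainLength--;
    console.log(`[watcher] chain length back down to ${chainLength}`);
  });
}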

https://user-images.githubusercontent.com/8807171/153777233-54acb464-d31f-4cab-8163-5f035060b85a.mp4

What cannot be seen on the video is the memory usage, which at its peak got up to 4 GB.

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 3
  • Comments: 30 (5 by maintainers)

Most upvoted comments

After running tailwindcss in watch mode for a while, it slowed down and I got memory allocation failure:

<--- Last few GCs --->

[29476:0000024319ED0720] 28911888 ms: Scavenge 4035.7 (4124.6) -> 4031.3 (4141.6) MB, 15.0 / 0.0 ms  (average mu = 0.799, current mu = 0.414) allocation failure
[29476:0000024319ED0720] 28916345 ms: Mark-sweep 4045.2 (4141.6) -> 4033.7 (4146.4) MB, 4417.1 / 8.3 ms  (average mu = 0.541, current mu = 0.062) allocation failure scavenge might not succeed     


<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 00007FF762A779CF v8::internal::CodeObjectRegistry::~CodeObjectRegistry+114207
 2: 00007FF762A06096 DSA_meth_get_flags+65542
 3: 00007FF762A06F4D node::OnFatalError+301
 4: 00007FF76333B1EE v8::Isolate::ReportExternalAllocationLimitReached+94
 5: 00007FF7633257CD v8::SharedArrayBuffer::Externalize+781
 6: 00007FF7631C8B9C v8::internal::Heap::EphemeronKeyWriteBarrierFromCode+1468
 7: 00007FF7631C5CB4 v8::internal::Heap::CollectGarbage+4244
 8: 00007FF7631C3630 v8::internal::Heap::AllocateExternalBackingStore+2000
 9: 00007FF7631E81B6 v8::internal::Factory::NewFillerObject+214
10: 00007FF762F1A685 v8::internal::DateCache::Weekday+1797
11: 00007FF7633C8EE1 v8::internal::SetupIsolateDelegate::SetupHeap+494417
12: 00007FF7633C9DE9 v8::internal::SetupIsolateDelegate::SetupHeap+498265
13: 000002431BCBD93E

We also run the CLI in watch mode on Windows and notice it occasionally running out of memory and crashing Node. It’s infrequent enough that I haven’t bothered to report it previously, so perhaps the problem is more widespread than might be inferred from the number of GH issues.

We also see a gradual increase in duplicate content output to the .css file over many compiles, which we clean up by stopping the CLI and restarting it - this forces a clean build. Clearly there is some kind of state that can hang around in the CLI between compiles in some circumstances. Unfortunately it’s completely impractical for me to provide a repro URL for this.

Turns out we were essentially doubling the rule cache (not quite but close enough) instead of just adding the few entries to it that needed to be added. This can result in a significant slowdown in fairly specific situations.
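In case the shape of the bug isn’t obvious, it was roughly the difference between the two update strategies below (illustrative only, not the actual internals):

// Illustrative only, not the actual Tailwind internals.
let ruleCache = [];

// What was happening, roughly: the set of entries to add was computed as close
// to the whole existing cache plus the new rules, so each rebuild appended
// nearly a full copy of the cache instead of just a few entries.
function updateCacheBuggy(newRules) {
  const entriesToAdd = [...ruleCache, ...newRules];
  ruleCache = [...ruleCache, ...entriesToAdd];
}

// Intended: only the few genuinely new entries get appended.
function updateCacheFixed(newRules) {
  ruleCache.push(...newRules);
}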

I’m hoping this has fixed a good portion of the issue here. Can some of you please give our insiders build a test and see if it helps out at all? I’m hopeful it’ll have some positive impact but if it isn’t sufficient I’ll reopen this issue.

Thanks for all the help and bearing with us on this one. ✨

I use Tailwind in a SvelteKit setup and noticed a similar memory leak. After dumping the heap and looking at it, you can see a lot of older instances of the compilation still hanging around (I ran global.gc() before to make sure). macOS 12.3 (Intel) / Node 16.14.1 / tailwindcss 3.0.23:

(Screenshot, 2022-03-17: heap snapshot showing several retained compilation instances.)
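For anyone who wants to take a similar look, one way to get an equivalent snapshot (not necessarily the exact setup I used) is to start the process with node --expose-gc and dump the heap via Node’s built-in v8 module; the resulting file opens in Chrome DevTools under the Memory tab:

// Force a GC first so only genuinely retained objects remain, then dump the heap.
// Requires the process to be started with `node --expose-gc`.
const v8 = require('v8');

if (global.gc) {
  global.gc();
}

// Writes a .heapsnapshot file to the working directory and returns its name.
const file = v8.writeHeapSnapshot();
console.log(`heap snapshot written to ${file}`);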

Thanks for looking into this. I think not scanning the node_modules folder is solid advice 😄

And yes, the memory usage does seem to go back down, but not all the way. I just tested again: after spamming saves for a while it stays around 300-400 MB, and a single save after that spikes the memory usage to 1.6 GB, with the build taking 750 ms - though I’d argue the real time is longer, since 750 ms is only the measured part. For reference, the node process originally takes 50-70 MB and a build takes 30 ms.

Here’s a video of a single save after having spammed saves for a while. You can see it start at 400 MB, spike to 1.6 GB, and then come back down to 460 MB, where it stays.

https://user-images.githubusercontent.com/8807171/153954099-4c56026f-123d-4bb1-a0b4-66b915588a9d.mp4
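If you want to watch these numbers without an external process monitor, a periodic process.memoryUsage() log in the modified CLI shows the same picture - a small sketch, not something that is in the repo:

// Log RSS and heap usage every few seconds to correlate memory growth with the
// rebuild / promise-chain logging described earlier.
const toMB = (n) => (n / 1024 / 1024).toFixed(1);

setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  console.log(`[mem] rss=${toMB(rss)} MB heapUsed=${toMB(heapUsed)} MB heapTotal=${toMB(heapTotal)} MB`);
}, 5000);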

I also think (but am not sure) that these kinds of memory leaks or performance degradations can happen even when you don’t do the ridiculous spamming or have it scan folders with tens of thousands of files.

I wasn’t able to figure out where the memory usage is actually increasing; it’s possible it’s in one of the libraries the CLI calls.