jest: Memory Leak on ridiculously simple repo

You guys do an awesome job and we all appreciate it! 🎉

🐛 Bug Report

On a work project we discovered a memory leak choking our CI machines. Going down the rabbit hole, I was able to recreate the memory leak using Jest alone.

Running many test files causes a memory leak. I created a stupid simple repo with only Jest installed and 40 tautological test files.

jest-memory-leak

I tried a number of solutions from https://github.com/facebook/jest/issues/7311 but to no avail. I couldn’t find any solutions in the other memory related issues, and this seems like the most trivial repro I could find.

Workaround 😢

We run tests with the --expose-gc flag and add this to each test file:

afterAll(() => {
  global.gc && global.gc()
})
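
To avoid copy-pasting that hook into every file, the same workaround can be wired up once through Jest's setupFilesAfterEnv option. This is only a sketch of that idea (the file name is arbitrary, and it still requires running node with --expose-gc):

// jest.setup.js — registered via "setupFilesAfterEnv": ["<rootDir>/jest.setup.js"]
afterAll(() => {
  // global.gc only exists when node was started with --expose-gc
  if (typeof global.gc === 'function') {
    global.gc()
  }
})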

To Reproduce

Steps to reproduce the behavior:

git clone git@github.com:javinor/jest-memory-leak.git
cd jest-memory-leak
npm i
npm t

Expected behavior

Each test file should take the same amount of memory (give or take)

Link to repl or repo (highly encouraged)

https://github.com/javinor/jest-memory-leak

Run npx envinfo --preset jest

Paste the results here:

System:
    OS: macOS High Sierra 10.13.6
    CPU: (4) x64 Intel(R) Core(TM) i7-5557U CPU @ 3.10GHz
  Binaries:
    Node: 10.15.0 - ~/.nvm/versions/node/v10.15.0/bin/node
    Yarn: 1.12.3 - /usr/local/bin/yarn
    npm: 6.4.1 - ~/.nvm/versions/node/v10.15.0/bin/npm
  npmPackages:
    jest: ^24.1.0 => 24.1.0

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 181
  • Comments: 112 (30 by maintainers)

Most upvoted comments

I’ve run some tests considering various configurations. Hope it helps someone.

node version | node args | jest args | custom behavior | time (seconds) | heap (MB)
--- | --- | --- | --- | --- | ---
16.10 | --expose-gc --no-compilation-cache | --maxWorkers 1 | afterAll(global.gc) + force options.serial to false on jest-runner | 303 | 45
16.18 | --expose-gc --no-compilation-cache | --maxWorkers 1 | afterAll(global.gc) + force options.serial to false on jest-runner | 325 | 47
16.10 | --expose-gc --no-compilation-cache | --maxWorkers 2 | - | 236 | 64
16.18 | --expose-gc --no-compilation-cache | --maxWorkers 2 | - | 167 | 67
16.10 | --expose-gc | --maxWorkers 1 | afterAll(global.gc) + force options.serial to false on jest-runner | 234 | 82
16.10 | --expose-gc | --maxWorkers 2 | - | 155 | 96
16.10 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | afterAll(global.gc) | 313 | 159
16.10 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | - | 307 | 160
16.10 | --expose-gc --no-compilation-cache | --runInBand | - | 313 | 160
16.10 | --expose-gc --no-compilation-cache | --maxWorkers 1 | - | 333 | 160
16.10 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | afterEach(global.gc) | 397 | 160
16.18 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | afterAll(global.gc) | 281 | 164
16.18 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | afterEach(global.gc) | 298 | 164
16.18 | --expose-gc --no-compilation-cache | --maxWorkers 1 | - | 287 | 165
16.18 | --expose-gc --no-compilation-cache | --runInBand --detectLeaks | - | 300 | 165
16.18 | --expose-gc --no-compilation-cache | --runInBand | - | 337 | 165
16.10 | --expose-gc | --runInBand --detectLeaks | - | 258 | 199
16.10 | --expose-gc | --runInBand | - | 247 | 201
16.10 | --expose-gc | --maxWorkers 2 | - | 286 | 201
16.10 | --expose-gc | --runInBand --detectLeaks | afterAll(global.gc) | 256 | 202
16.10 | --expose-gc | --runInBand --detectLeaks | afterEach(global.gc) | 309 | 206
16.10 | - | --runInBand | - | 261 | 629
16.18 | --expose-gc | --maxWorkers 2 | - | 277 | 899
16.18 | --no-compilation-cache | --runInBand | - | 297 | 907
16.18 | - | --runInBand | - | 281 | 1055
16.18 | --expose-gc | --runInBand | - | 347 | 1262
16.18 | --expose-gc | --maxWorkers 1 | afterAll(global.gc) + force options.serial to false on jest-runner | 337 | 1380
Test Suites: 3 skipped, 31 passed, 31 of 34 total
Tests:       20 skipped, 49 todo, 171 passed, 240 total
Snapshots:   0 total

* Running with Jest 29.2.2 in a Bitbucket Pipelines container using the official Node Docker images

Similar here, jest + ts-jest, simple tests get over 1GB of memory and eventually crash.

Quick Recap

  • Good - Running tests with jest@23 works as expected: the heap size oscillates but comes back down to roughly its original value, so the GC appears to collect all memory allocated during the tests
  • Bad - Running tests with jest@24+ (including jest@26.6.3): the heap size keeps growing over time; it still oscillates, but doesn't seem to come back down to the initial size. My assumption is that a memory leak is preventing the GC from freeing all the memory
  • I took the screenshots after running @jaredjj3's example repo (see this comment)

@SimenB help? 🙏

Running with jest@23

(screenshot)

Running with jest@26

(screenshot)

Jest 25.1.0 has the same memory leak issue.

Here are my findings.

Did a study on 4 of our apps, made a benchmark with the following commands.

Case | Command
--- | ---
A | NODE_ENV=ci node node_modules/.bin/jest --coverage --ci --runInBand --logHeapUsage
B | NODE_ENV=ci node --expose-gc node_modules/.bin/jest --coverage --ci --runInBand --logHeapUsage
C | NODE_ENV=ci node --expose-gc ./node_modules/.bin/jest --logHeapUsage
D | NODE_ENV=ci node node_modules/.bin/jest --coverage --ci --logHeapUsage

NB: "order" is the rank of the test within the run of the command, e.g. 1 means it ran first; it's just the order in which the console outputs the test results at the end.

EDIT: All of this was run on my local machine. Trying this on the pipeline was even more instructive, since only the case with GC exposed and without --runInBand results in 100% PASS. GC also makes it twice as fast; imagine if you had to pay for memory usage on servers.

EDIT 2: case C has no --coverage --ci options, but that does not impact performance. I added a chart to measure average pipeline speed with the above scenarios. The graph shows the average time of the test job on the pipeline, over 3 executions for each case, regardless of test outcome (all passing vs. some failing, because some tests were unstable at the time the data was collected).

Charts (images omitted):

  • Cross apps: Max Heap
  • Cross apps: Average Heap
  • App1: Max Heap and Average Heap
  • App1: Heap Chronology
  • App1: File Based Heap (x axis is file path)
  • App2: Max Heap and Average Heap
  • App2: Heap Chronology
  • App2: File Based Heap (x axis is file path)
  • App3: Max Heap and Average Heap
  • App3: Heap Chronology
  • App3: File Based Heap (x axis is file path)
  • App4: Max Heap and Average Heap
  • App4: Heap Chronology
  • App4: File Based Heap (x axis is file path)
  • Rank of appearance of the highest heap
  • App1: Average time of execution for the test job, out of 3 pipelines

Hope this helps or re-kindles the debate.

Hey guys, my team also encountered this issue, and we would like to share our solution.

Firstly, we need to understand that Node.js decides on its own, via its garbage collection algorithm, when to sweep unused memory. If we don't configure it, Node.js will do it its own way.

And we have several ways to configure / limit how garbage collection works.

  • --expose-gc: if we add this flag when running Node.js, a global function called gc is exposed. If we call global.gc(), Node.js will sweep all known unused memory.

  • --max-old-space-size=xxx: if we add this flag when running Node.js, we are asking Node.js to sweep all known unused memory once heap usage reaches xxx MB.

Secondly, I think we have 2 types of memory leak issue.

  • Type 1: Node.js knows that there is some unused memory, but decides it isn't worth collecting yet, so it doesn't sweep it. Eventually it runs out of memory on its container / device.

  • Type 2: Node.js does NOT know that some memory is unused. Even if it sweeps everything it can, it still consumes too much memory and finally runs out.

For Type 1, it's easier to solve. We can use the --expose-gc flag and run global.gc() in each test to sweep unused memory. Or, we can add --max-old-space-size=xxx to remind Node.js to sweep all known unused memory once it reaches the limit.

Before adding --max-old-space-size=1024:

 PASS  src/xx/x1.test.js (118 MB heap size)
 PASS  src/xx/x2.test.js (140 MB heap size)
 ...
 PASS  src/xx/x30.test.js (1736 MB heap size)
 PASS  src/xx/x31.test.js (1746 MB heap size)
...

After adding --max-old-space-size=1024:

 PASS  src/xx/x1.test.js (118 MB heap size)
 PASS  src/xx/x2.test.js (140 MB heap size)
 ...
 PASS  src/xx/x20.test.js (893 MB heap size)
 PASS  src/xx/x21.test.js (916 MB heap size)
 
// -> (every time it reaches 1024 MB, it sweeps unused memory)
 
 PASS  src/xx/x22.test.js (382 MB heap size)
 ...

(Note: if we specify a lower size, it will of course use less memory, but it will sweep more frequently)

For Type 2, we would need to investigate where the memory leak actually happens. That is more difficult.

Because in our team the main cause was a Type 1 issue, our solution was to add --max-old-space-size=1024 to Node.js when running tests.
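
For reference, one way to pass that flag straight to the Jest binary (the limit value is just an example; pick one that fits your CI machine):

node --max-old-space-size=1024 ./node_modules/.bin/jest --runInBand --logHeapUsage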


Finally, I would like to explain why --expose-gc works in previous comment.

Because in the Jest source code, if we add --logHeapUsage, Jest will call global.gc() if gc exists. In other words, if we add --logHeapUsage to Jest and --expose-gc to Node.js, the current version of Jest will force Node.js to sweep all known unused memory for each test file run.

However, I don't really think adding --logHeapUsage and --expose-gc is a good way to solve this issue, because it feels like we only solve it "accidentally".


Note: --runInBand asks Jest to run all tests sequentially (by default, Jest runs tests in parallel across several workers). --logHeapUsage logs heap memory usage after each test file.
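
A simplified illustration of the behaviour described above (not Jest's actual source):

// When heap logging is requested, trigger a GC pass first if node exposed one,
// then record the heap size — which is why --logHeapUsage + --expose-gc
// "accidentally" keeps memory down.
function reportHeapUsage(logHeapUsage) {
  if (!logHeapUsage) return undefined
  if (typeof globalThis.gc === 'function') {
    globalThis.gc() // only defined when node runs with --expose-gc
  }
  const heapMb = Math.floor(process.memoryUsage().heapUsed / 1024 / 1024)
  console.log(`${heapMb} MB heap size`)
  return heapMb
}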

Did a heap snapshot for my test suite and noticed that the majority of the memory was being used by strings that store entire source files, often the same string multiple times!

(heap snapshot screenshot)

The size of these strings continues to grow as jest scans and compiles more source files. Could it be that jest or babel holds onto references to the source files (for caching, maybe) and never clears them?
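
For anyone who wants to reproduce this kind of inspection, here is a minimal sketch of capturing a snapshot from inside a test run (v8.writeHeapSnapshot has existed since Node 11.13; where you put the hook is up to you). The resulting file can be loaded in the Memory tab of Chrome DevTools:

const v8 = require('v8')

afterAll(() => {
  // Writes a .heapsnapshot file into the current working directory
  // and returns the generated file name.
  const file = v8.writeHeapSnapshot()
  console.log(`heap snapshot written to ${file}`)
})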

+1, we are also running into this issue. Jest is using over 5GB per worker for us. Our heap snapshots show the same thing as above. Any updates would be greatly appreciated.

TLDR - The memory leak is not present in version 22.4.4, but starts appearing in the subsequent version 23.0.0-alpha.1. The following steps are for (1) the community to assert/refute this and then (2) find the offending commit(s) causing the memory leak and increased memory usage.

In https://github.com/facebook/jest/issues/7874#issuecomment-639874717, I mentioned that I created a repo jest-memory-leak-demo to make this issue easier to reproduce in local and Docker environments.

I took it a step further and decided to find the version that the memory leak started to show. I did this by listing all the versions returned from yarn info jest. Next, I manually performed a binary search to find the versions where version i does not produce a memory leak and version i + 1 does produce a memory leak.

Here is my env info (I purposely excluded the npmPackages since the version was the variable in my experiment):

npx envinfo --preset jest

  System:
    OS: macOS 12.0.1
    CPU: (10) arm64 Apple M1 Max
  Binaries:
    Node: 17.1.0 - ~/.nvm/versions/node/v17.1.0/bin/node
    Yarn: 1.22.17 - ~/.nvm/versions/node/v17.1.0/bin/yarn
    npm: 8.1.2 - ~/.nvm/versions/node/v17.1.0/bin/npm

The key finding is in jest-memory-leak-demo/versions.txt#L165-L169. You can see several iterations of the binary search throughout the file. I did one commit per iteration, so you also can examine the commit history starting at edc0567ad4710ba1be2bf2f745a7d5d87242afc4.

The following steps are for the community to validate these initial findings and ultimately use the same approach to find the offending commit causing the memory leaks. Here’s a StackOverflow post that will help: “How to get the nth commit since the first commit?”.

It would also be great if someone could write a script to do this in jest-memory-leak-demo. ~The most challenging part of doing this is programming memory leak detection~ edit: The script doesn't have to decide whether a test run yields a memory leak - it can take a commit range and produce test run stats at each commit. A list of versions can be found by running yarn info jest. I don't have time to do this at the moment.
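
A rough sketch of what such a version-sweeping script could look like (not from the issue; it assumes Yarn 1's --json output shape and that the repro prints --logHeapUsage lines):

// sweep-versions.js - hypothetical helper: installs each published Jest
// version in turn, runs the repro, and records the highest reported heap.
const {execSync} = require('child_process')

// Yarn 1 prints {"type":"inspect","data":[...versions]} for this command.
const raw = execSync('yarn info jest versions --json', {encoding: 'utf8'})
const versions = JSON.parse(raw).data.filter(v => !v.includes('-')) // skip pre-releases

for (const version of versions) {
  execSync(`yarn add --dev jest@${version}`, {stdio: 'ignore'})
  const out = execSync(
    'node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage 2>&1 || true',
    {encoding: 'utf8'},
  )
  const heaps = [...out.matchAll(/(\d+) MB heap size/g)].map(m => Number(m[1]))
  console.log(`${version}: max heap ${heaps.length ? Math.max(...heaps) : 'n/a'} MB`)
}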


NOTE: I was not very scientific about defining what versions produce a memory leak and what versions don’t. First of all, I should have used the yarn docker test command to reproduce the results on other machines, but I just wanted to get an answer as fast as possible. Second, for each version, I should have run the test command >30 times and then aggregated the results. If you decide to reproduce this in the way I did it, YMMV.

NOTE: For earlier versions, I had to add the following to my package.json:

"jest": {
 "testURL": "http://localhost/"
}

If I didn’t, I got the following error:

SecurityError: localStorage is not available for opaque origins

At first, I was diligent in removing this if it was not needed, but then I got lazy after iteration eight or so and just kept it. I don’t know if this affected the results.

Hi,

We are experiencing the same memory leak issue with our Angular 13 app. We tried --detect-leaks as suggested above and it seems to work, but only with node 14 (14.19.1):

npx jest-heap-graph "ng test --run-in-band --log-heap-usage --detect-leaks"

Here the heap graph for 14.19.1 :

--- n: 127 ---
     287.00 ┤                  ╭╮                  ╭╮╭╮           ╭╮
     285.00 ┤      ╭╮ ╭╮ ╭╮    ││╭╮ ╭╮  ╭╮      ╭╮ ││││      ╭╮  ╭╯│      ╭╮  ╭╮      ╭╮
     283.00 ┤      ││ │╰╮││    ││││ │╰╮╭╯│  ╭╮ ╭╯│ ││││  ╭╮  ││╭╮│ ╰╮╭╮   ││ ╭╯│ ╭╮ ╭╮││        ╭╮
     281.00 ┼╮ ╭─╮ ││ │ │││  ╭╮││││ │ ││ │  ││ │ │ │╰╯╰╮╭╯│  │││││  │││  ╭╯│╭╯ │ ││ ││││   ╭╮   ││
     279.00 ┤│ │ │ ││ │ │││  │╰╯│││╭╯ ╰╯ ╰╮ │╰╮│ │ │   ╰╯ ╰╮ │╰╯╰╯  ││╰╮ │ ││  ╰╮││ ││││ ╭╮││╭╮ ││
     277.00 ┤│ │ │ ││ │ │││╭╮│  ││││      │ │ ││ │╭╯       │ │      ╰╯ ╰─╯ ╰╯   ╰╯│╭╯││╰─╯││╰╯╰─╯╰────
     275.00 ┤│ │ │╭╯╰╮│ ││││││  ││││      ╰╮│ ││ ╰╯        │ │                    ╰╯ ╰╯   ╰╯
     273.00 ┤│╭╯ ││  ││ ╰╯││││  ││╰╯       ╰╯ ││           │ │
     271.00 ┤││  ││  ││   ││││  ││            ││           │╭╯
     269.00 ┤││  ╰╯  ╰╯   ││╰╯  ╰╯            ╰╯           ││
     267.00 ┤╰╯           ╰╯                               ╰╯

Here the heap graph for 16.14.2 :

--- n: 126 ---
    2212.00 ┤                                                                                   ╭─────
    2074.50 ┤                                                                          ╭────────╯
    1937.00 ┤                                                                ╭─────────╯
    1799.50 ┤                                                        ╭───────╯
    1662.00 ┤                                               ╭────────╯
    1524.50 ┤                                      ╭────────╯
    1387.00 ┤                             ╭────────╯
    1249.50 ┤                    ╭────────╯
    1112.00 ┤            ╭───────╯
     974.50 ┤   ╭────────╯
     837.00 ┼───╯
npx envinfo --preset jest

  System:
    OS: Windows 10 10.0.19044
    CPU: (12) x64 11th Gen Intel(R) Core(TM) i5-11500H @ 2.90GHz
  Binaries:
    Node: 16.14.2 - C:\Program Files\nodejs\node.EXE
    Yarn: 1.22.18 - ~\workspace\WEBTI\fe-webti\node_modules\.bin\yarn.CMD
    npm: 8.5.0 - C:\Program Files\nodejs\npm.CMD
  npmPackages:
    jest: ^27.3.1 => 27.5.1 

We also tried setting coverageProvider to babel as in https://github.com/facebook/jest/issues/11956#issuecomment-1112561068, with no change. We suspect some of our tests may not be perfectly written, but there are still leaks.

EDIT

Downgrading to node 16.10.0 seems to work:

--- n: 127 ---
     413.00 ┤            ╭╮
     409.70 ┤            ││
     406.40 ┤            ││
     403.10 ┤            ││                                                                          ╭
     399.80 ┤            ││╭╮            ╭╮               ╭╮         ╭╮ ╭╮                           │
     396.50 ┤            ││││    ╭╮      ││       ╭──╮ ╭╮╭╯╰─╮ ╭──╮ ╭╯│ ││ ╭╮╭╮         ╭╮        ╭╮╭╯
     393.20 ┤╭╮╭╮        ││││    ││╭─╮ ╭╮││╭╮ ╭╮╭╮│  │╭╯││   │ │  ╰─╯ ╰─╯╰╮│││╰───╮╭──╮╭╯╰─╮╭─╮╭──╯╰╯
     389.90 ┤││││╭╮     ╭╯││╰╮ ╭─╯││ ╰─╯╰╯│││ │││╰╯  ││ ╰╯   │ │          ╰╯╰╯    ╰╯  ╰╯   ╰╯ ╰╯
     386.60 ┤│╰╯╰╯│  ╭╮╭╯ ╰╯ │╭╯  ╰╯      ╰╯╰╮│││    ╰╯      ╰─╯
     383.30 ┼╯    ╰╮ │││     ╰╯              ╰╯╰╯
     380.00 ┤      ╰─╯╰╯

jest-memory-leak-demo

I’ve been experiencing memory leaks due to this library and it has made it unusable on one of the projects I’m working on. I’ve reproduced this in jest-memory-leak-demo, which only has jest as a dependency. I’ve reproduced this on macOS and within a Docker container using the node:14.3.0 image.

npx envinfo --preset jest

  System:
    OS: macOS 10.15.5
    CPU: (8) x64 Intel(R) Core(TM) i7-8569U CPU @ 2.80GHz
  Binaries:
    Node: 14.3.0 - /usr/local/bin/node
    Yarn: 1.22.4 - ~/.yarn/bin/yarn
    npm: 6.13.7 - ~/.npm-global/bin/npm
  npmPackages:
    jest: ^26.0.1 => 26.0.1

In jest-memory-leak-demo, there are 50 test files with the following script:

it('asserts 1 is 1', () => {
  for (let i = 0; i < 1000; i++) {
    expect(1).toBe(1);
  }
});

Running a test yields 348 MB and 216 MB heap sizes in macOS and Docker, respectively.

However, when I run with node’s gc exposed:

node --expose-gc ./node_modules/.bin/jest --logHeapUsage --runInBand

it yields 38 MB and 36 MB heap sizes in macOS and Docker, respectively.

Looking into the code, I see that jest-leak-detector is conditionally constructed based on the config. So if I don’t run jest with --detectLeaks, I expect exposing the gc to have no effect. I searched jest’s dependencies to see if any package is abusing the gc, but I could not find any.

Is there any progress on this issue?

It’s great that some folks think the memory leak issue is somehow not a big deal anymore, but we’re at jest 27 and we have to run our builds at Node 14 even though we will ship with Node 16 so that our test suite can finish without running out of memory. Even at Node 14, as our test suite has grown, we struggle to get our test suite to run to completion.

This works for me: https://github.com/kulshekhar/ts-jest/issues/1967#issuecomment-834090822

Add this to jest.config.js

globals: {
    'ts-jest': {
      isolatedModules: true
    }
  }

Something that might be helpful for those debugging a seeming memory leak in your Jest tests:

Node's default memory limit applies separately to each worker, so make sure that the total memory available > number of workers * the memory limit.

When the combined memory limit of all of the workers is greater than the available memory, Node will not realize that it needs to run GC, and memory usage will climb until it OOMs.

Setting the memory limit correctly causes Node to run GC much more often.

For us, the effect was dramatic. When we had the --max-old-space-size=4096 and three workers on a 6GB machine, memory usage increased to over 3gb per worker and eventually OOM’d. Once we set it to 2gb, memory usage stayed below 1gb per worker, and the OOM’s went away.
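
Spelling out the arithmetic from that example: 3 workers × 4 GB = 12 GB of allowed heap on a 6 GB machine, versus 3 × 2 GB = 6 GB once the limit was lowered.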

crashes for us too

strangely, running node --expose-gc ./node_modules/.bin/jest --runInBand --logHeapUsage “fixes” the issue but running it with npx jest --runInBand --logHeapUsage or ./node_modules/.bin/jest --runInBand --logHeapUsage produces a memory leak

(screenshots of both runs)

As @UnleashSpirit mentioned, downgrading to node 16.10 fixed the memory issue with Jest.

When I run --logHeapUsage --runInBand on my test suite with about 1500+ tests, the memory keeps climbing from ~100MB to ~600MB. The growth seems to be in these strings, and arrays of such strings (source).

It is expected that the number of compiled files grows as jest moves further through the test suite, but if this memory doesn't get GC'ed, I have no way to separate real memory leaks from increases due to more modules being held in memory.

Running a reduced version of my test suite (only about 5 tests), I was able to narrow down this behaviour.

Let's say we had TEST A, TEST B, TEST C, TEST D.

I was observing TEST B. Without doing anything:

TEST A (135 MB heap size)
TEST B (150 MB heap size)
TEST C (155 MB heap size)
TEST D (157 MB heap size)

If I reduce the number of imports TEST B is making and replace them with stubs:

TEST A (135 MB heap size)
TEST B (130 MB heap size). <-- Memory falls!
TEST C (140 MB heap size)
TEST D (147 MB heap size)

This consistently reduced memory across runs. Also, the imports themselves did not seem to have any obvious leaks (the fall in memory corresponded with the number of imports I commented out).
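
For illustration, stubbing an import in the way described above might look like this (the module path and exported function are made up):

// Hypothetical example of swapping a heavy import for a stub in TEST B.
jest.mock('../src/heavyModule', () => ({
  doWork: jest.fn(() => 'stubbed'),
}))

const {doWork} = require('../src/heavyModule')

test('TEST B with the heavy import stubbed out', () => {
  expect(doWork()).toBe('stubbed')
})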

Other observations:

  • Disabling babel source maps reduces overall memory usage
  • JS transformers can have a noticeable impact on memory depending on how they’re written.

Do I understand correctly that using the workaround to force GC runs makes the heap size remain constant? In that case it’s not really a memory leak, just v8 deciding not to run the GC because there is enough memory available. If I try running the repro with 50MB heap size

node --max_old_space_size=50 node_modules/.bin/jest --logHeapUsage --runInBand --config=jest.config.js

the tests still complete successfully, supporting this assumption.

One thing I came over when going through the list of issues was this comment: https://github.com/facebook/jest/issues/7311#issuecomment-578729020, i.e. manually running GC in Jest.

So I tried out this quick and dirty diff locally:

diff --git i/packages/jest-leak-detector/src/index.ts w/packages/jest-leak-detector/src/index.ts
index 0ec0280104..6500ad067f 100644
--- i/packages/jest-leak-detector/src/index.ts
+++ w/packages/jest-leak-detector/src/index.ts
@@ -50,7 +50,7 @@ export default class LeakDetector {
   }
 
   async isLeaking(): Promise<boolean> {
-    this._runGarbageCollector();
+    runGarbageCollector();
 
     // wait some ticks to allow GC to run properly, see https://github.com/nodejs/node/issues/34636#issuecomment-669366235
     for (let i = 0; i < 10; i++) {
@@ -59,18 +59,18 @@ export default class LeakDetector {
 
     return this._isReferenceBeingHeld;
   }
+}
 
-  private _runGarbageCollector() {
-    // @ts-expect-error
-    const isGarbageCollectorHidden = globalThis.gc == null;
+export function runGarbageCollector(): void {
+  // @ts-expect-error
+  const isGarbageCollectorHidden = globalThis.gc == null;
 
-    // GC is usually hidden, so we have to expose it before running.
-    setFlagsFromString('--expose-gc');
-    runInNewContext('gc')();
+  // GC is usually hidden, so we have to expose it before running.
+  setFlagsFromString('--expose-gc');
+  runInNewContext('gc')();
 
-    // The GC was not initially exposed, so let's hide it again.
-    if (isGarbageCollectorHidden) {
-      setFlagsFromString('--no-expose-gc');
-    }
+  // The GC was not initially exposed, so let's hide it again.
+  if (isGarbageCollectorHidden) {
+    setFlagsFromString('--no-expose-gc');
   }
 }
diff --git i/packages/jest-runner/src/runTest.ts w/packages/jest-runner/src/runTest.ts
index dfa50645bf..5e45f06b1b 100644
--- i/packages/jest-runner/src/runTest.ts
+++ w/packages/jest-runner/src/runTest.ts
@@ -22,7 +22,7 @@ import type {TestFileEvent, TestResult} from '@jest/test-result';
 import {createScriptTransformer} from '@jest/transform';
 import type {Config} from '@jest/types';
 import * as docblock from 'jest-docblock';
-import LeakDetector from 'jest-leak-detector';
+import LeakDetector, {runGarbageCollector} from 'jest-leak-detector';
 import {formatExecError} from 'jest-message-util';
 import Resolver, {resolveTestEnvironment} from 'jest-resolve';
 import type RuntimeClass from 'jest-runtime';
@@ -382,6 +382,11 @@ export default async function runTest(
     // Resolve leak detector, outside the "runTestInternal" closure.
     result.leaks = await leakDetector.isLeaking();
   } else {
+    if (process.env.DO_IT) {
+      // Run GC even if leak detector is disabled
+      runGarbageCollector();
+    }
+
     result.leaks = false;
   }

So, if run after every test file, this gives about a 10% perf degradation for jest pretty-format in this repo.

$ hyperfine 'node packages/jest/bin/jest.js pretty-format' 'node packages/jest/bin/jest.js pretty-format -i' 'DO_IT=yes node packages/jest/bin/jest.js pretty-format' 'DO_IT=yes node packages/jest/bin/jest.js pretty-format -i'
Benchmark 1: node packages/jest/bin/jest.js pretty-format
  Time (mean ± σ):      2.391 s ±  0.088 s    [User: 2.418 s, System: 0.392 s]
  Range (min … max):    2.273 s …  2.574 s    10 runs

Benchmark 2: node packages/jest/bin/jest.js pretty-format -i
  Time (mean ± σ):      2.315 s ±  0.060 s    [User: 2.381 s, System: 0.385 s]
  Range (min … max):    2.229 s …  2.416 s    10 runs

Benchmark 3: DO_IT=yes node packages/jest/bin/jest.js pretty-format
  Time (mean ± σ):      2.513 s ±  0.101 s    [User: 2.966 s, System: 0.397 s]
  Range (min … max):    2.413 s …  2.746 s    10 runs

Benchmark 4: DO_IT=yes node packages/jest/bin/jest.js pretty-format -i
  Time (mean ± σ):      2.581 s ±  0.179 s    [User: 2.981 s, System: 0.403 s]
  Range (min … max):    2.423 s …  3.032 s    10 runs

Summary
  'node packages/jest/bin/jest.js pretty-format -i' ran
    1.03 ± 0.05 times faster than 'node packages/jest/bin/jest.js pretty-format'
    1.09 ± 0.05 times faster than 'DO_IT=yes node packages/jest/bin/jest.js pretty-format'
    1.11 ± 0.08 times faster than 'DO_IT=yes node packages/jest/bin/jest.js pretty-format -i'

However, this also stabilizes memory usage in the same way --detect-leaks does.

So it might be worth playing with this (e.g. after every 5 test files instead of every single one?). Thoughts? One option is to support a CLI flag for this, but that sorta sucks as well.
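
For anyone who wants to experiment with this idea without patching Jest, a user-land sketch is a custom test environment whose teardown forces a GC pass, using the same trick jest-leak-detector uses in the diff above (this assumes jest-environment-node versions that export the class directly; newer versions export it as TestEnvironment):

// gc-environment.js - register with "testEnvironment": "<rootDir>/gc-environment.js"
const NodeEnvironment = require('jest-environment-node')
const {setFlagsFromString} = require('v8')
const {runInNewContext} = require('vm')

class GcEnvironment extends NodeEnvironment {
  async teardown() {
    await super.teardown()
    // Temporarily expose gc, force a collection, then hide it again.
    setFlagsFromString('--expose-gc')
    runInNewContext('gc')()
    setFlagsFromString('--no-expose-gc')
  }
}

module.exports = GcEnvironment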


I'll reopen (didn't take long!) since I'm closing most other issues and pointing back here 🙂 But it might be better to discuss this in an entirely new issue. 🤔

This is going to sound bad, but I have been struggling with the same situation as @pastelsky: heap dumps showing huge allocation differences in arrays and strings between snapshots, and memory not being released after the test run completes.

We have been running Jest from inside Node with jest.runCLI. I tried everything suggested in this topic and in other issues on GitHub:

  • upgrade to newest Jest (from 26.6.3 to 27 latest)
  • running with --expose-gc + --logHeapUsage
  • running in band
  • running with 1 worker
  • setting max-old-space-size
  • More CPU
  • More RAM - this of course worked and the tests finished, but memory was once again never released

The only thing that reduced memory (by around 200MB) was to switch off the default babel-jest transformer, since we did not need it at all:

testEnvironment: "node",
transform      : JSON.stringify({})

This has indeed reduced memory usage but still not to the level where we could accept it.

After two days of memory profiling and trying different things, I just switched to the mocha runner. Since our tests were primarily E2E tests (no TypeScript, no Babel, Node 12) making requests to an API, the switch was fairly simple:

  • change test to it
  • change beforeAll to before
  • change afterAll to after

After deploying this, the tests have been running with a stable memory usage of 70MB and never go above that, while with Jest it was peaking at 700MB. I am not here to advertise mocha (it's my first time using it to run tests), but it literally just worked, so if you have fairly simple test suites and want to run tests programmatically, you could try changing your runner.

Experiencing the exact same issue with jest + ts-jest on a NestJS project.

Even the simplest test is reporting ~500MB of heap size.

 PASS  tier5-test/one.get.spec.ts (7.304 s, 596 MB heap size)
describe('one', () => {

  it('one', async () => {
    expect(2).toEqual(2);
  });
});

v27 seems to leak more memory. The test of my project never encountered OOM on v26, but it was killed on v27.

Found an article on how to use heap snapshots to debug Jest memory leaks here: https://chanind.github.io/javascript/2019/10/12/jest-tests-memory-leak.html I tried the same method but didn't find the root cause.

Even global.gc() does not help for me; I still see the heap size keep growing for each test.

It’s great that some folks think the memory leak issue is somehow not a big deal anymore, but we’re at jest 27 and we have to run our builds at Node 14 even though we will ship with Node 16 so that our test suite can finish without running out of memory. Even at Node 14, as our test suite has grown, we struggle to get our test suite to run to completion.

And that’s exactly my point in potentially closing this - that has next to nothing to do with the reproduction provided in the OP. Your issue is #11956 (which seemingly is a bug in Node and further upstream V8).

However, it seems the OP still shows a leak somewhere, so you can rest easy knowing this issue won’t be closed. 🙂


If it's an issue for you at work, any time you can spend on solving this (or at least getting more info about what's going on) would be a great help - for example, gathering traces showing what is kept in memory that could (should) be GC-ed. It's not an issue for me at work, so this is not something I'm spending any time investigating; movement on this issue is likely up to the community. The new meta issue @StringEpsilon has spent time on is an example of great help - the reported leaks are probably all a symptom of the same issue (or a smaller set of issues), so getting the different cases listed out might help with investigation, and solving one or more might "inadvertently" solve other issues as well.

I have tested the reproduction with jest 24, jest 27 and jest 28 beta:

Version | --runInBand | min heap size | max heap size
--- | --- | --- | ---
24.9.0 | true | 53 MB | 259 MB
24.9.0 | false | 47 MB | 61 MB
27.5.1 | true | 36 MB | 71 MB
27.5.1 | false | 26 MB | 30 MB
28.0.0-alpha.6 | true | 38 MB | 73 MB
28.0.0-alpha.6 | false | 27 MB | 36 MB

(All tested on node.js v14.15.3)

I think in general the leak has become less of an issue, but the discrepancy between --runInBand=true and --runInBand=false suggests that there is still an issue.

See also:

  • #12142 (leak when using --runInBand)
  • #10467 (duplicate of this issue)
  • #7311 (leak when using --runInBand)
  • #6399 (leak when using --runInBand)

As for the cause, from other issues relating to leaks, I suspect that there are multiple issues playing a role. For example:

  • #6738 [Memory Leak] on module loading
  • #6814 Jest leaks memory from required modules with closures over imports
  • #8984 jests async wrapper leaks memory
  • #9697 Memory leak related to require (might be a duplicate of / have a common cause with #6738?)
  • #10550 Module caching memory leak
  • #11956 [Bug]: Memory consumption issues on Node JS 16.11.0+

And #8832 could either be another --runInBand issue or a require / cache leak. Edit: It seems to be both. It leaks without --runInBand, but activating the option makes the problem much worse.

There are also leak issues with coverage, JSDOM and enzyme; #9980 has some discussion about that. And #5837 is directly about the --coverage option.


Addendum: it would probably be helpful to have one meta-issue tracking the various memory leak issues and to create one issue per scenario. As it currently stands, all the issues I mentioned above have some of the puzzle pieces, but nothing is tracked properly, the progress that was made isn't apparent to end users, and it's actually not easy to figure out where to add to the conversation on the general topic. That probably further contributes to the creation of duplicate issues.

This is a dummy post to report that this issue is still present and makes TDD harder, so I'm looking forward to any solution.

There must be something else wrong because I’m currently using Jest v23.6 and everything works fine, no memory leaks, no anything.

If I upgrade to the latest Jest then the memory leaks start to happen, but only on the GitLab CI runner. Works fine locally.

For those wanting to get their CI pipeline going with jest@26, I found a workaround that works for me (this issue comment helped, combined with this explanation). I increased the maximum old space on node, and although the leak persists, my CI pipeline seems to be doing better/passing. Here is my package.json entry: "test-pipeline": "node --max-old-space-size=4096 ./node_modules/.bin/jest --runInBand --forceExit --logHeapUsage --bail",

What else I tried and scraped together from a few other issues:

  • used the above fix: exposed the garbage collector (i.e. node --expose-gc ./node_modules/...) and used the afterEach hook (did nothing)
  • inspected the port where my server was running (from here; the increasing heap seemed invisible to the inspector, while at the same time responding to changes)
  • patched graceful-fs with this, probably taken from this issue, but it did nothing

For those reading along at home, this went out in 24.8.0.

I’ve observed today two unexplained behaviours:

  1. There’s too much memory usage even when disabling code transforms and cleaning jest’s cache
  2. When using --forceExit or --detectOpenHandles (or a combination of both), the memory usage drops from 1.4GB to roughly 300MB

I don’t know if this is specific to our codebase or if the memory leak issue is tied to tests that somehow don’t really finish/cleanup properly (a “bug” that detectOpenHandles or forceExit somehow fix)

Same here, my CI crashes all the time

After updating to 24.6.0, we are seeing a similar issue running our CI tests. When logging the heap usage, we see an increase in memory usage after each test file.

I wonder why it isn't possible for Jest to spawn a process for each test file, which would guarantee that memory is freed. OK, it could be slower, of course, but in my case it's much better to be slower than to crash from out-of-memory and be blocked from using Jest altogether…

Maybe an option? Or a separate "runner" (not sure if I understand the architecture and terminology right)?

Is it architecturally possible?

Or will Node experimental workers solve it?..

It seems there is a suggested fix/workaround for Jest as per this comment: https://bugs.chromium.org/p/v8/issues/detail?id=12198#c20

Hopefully this makes more sense to someone on the Jest team … is this something that could be pursued? It seems the first suggestion is for node itself, but for jest they are asking if it's possible to remove forced GCs. I gotta admit I don't know the details.

Did Victor’s suggested workaround work for Node? Updating from above, it would be to change https://source.chromium.org/chromium/chromium/src/+/main:v8/src/heap/heap.h;l=1460;drc=de8943e4326892f4e584a938b31dab0f14c39980;bpv=1;bpt=1 to remove the is_current_gc_forced_ check.

In general it's my understanding that --expose-gc is primarily a testing feature and shouldn't be depended upon in production. Is it not possible to remove forced GCs from how jest runs?

I used the above scenario to create a case where the heap increase is more noticeable:

npx jest --logHeapUsage --runInBand
 PASS  __test__/test_1.test.js (13.524 s, 221 MB heap size)
 PASS  __test__/test_2.test.js (188 MB heap size)
 PASS  __test__/test_9.test.js (235 MB heap size)
 PASS  __test__/test_8.test.js (265 MB heap size)
 PASS  __test__/test_7.test.js (306 MB heap size)
 PASS  __test__/test_6.test.js (346 MB heap size)
 PASS  __test__/test_10.test.js (13.586 s, 548 MB heap size)
 PASS  __test__/test_4.test.js (578 MB heap size)
 PASS  __test__/test_3.test.js (620 MB heap size)

and

npx jest --logHeapUsage 
 PASS  __test__/test_7.test.js (54 MB heap size)
 PASS  __test__/test_2.test.js (54 MB heap size)
 PASS  __test__/test_4.test.js (55 MB heap size)
 PASS  __test__/test_9.test.js (53 MB heap size)
 PASS  __test__/test_3.test.js (53 MB heap size)
 PASS  __test__/test_8.test.js (53 MB heap size)
 PASS  __test__/test_6.test.js (54 MB heap size)
 PASS  __test__/test_10.test.js (7.614 s, 196 MB heap size)
 PASS  __test__/test_1.test.js (7.619 s, 197 MB heap size)

(28.0.0-alpha.6)

Each test is just

for (let i = 0; i < 50000; i++) {
	describe("test", () => {
		it(`tautology #${i}`, () => {
			expect(true).toBeTruthy()
		})
	})
}

I also noticed that adding the extra describe() makes the heap grow faster:

  • --runInBand: 620 MB peak with and 500 MB peak without
  • parallel: 208 MB peak with and 162 MB peak without

I can’t really say, based on the reproduction. I do see an increase test over test on the heap, but it’s completely linear and not the saw-tooth pattern I see on production repositories, so I think node just doesn’t run GC.

But I did find that running a single test file with a lot of tests seems to still leak:

for (let i = 0; i < 100000; i++) {
  test(`tautology #${i}`, () => {
    expect(true).toBeTruthy()
  })
}

I had a heap size of 313 mb with that (w/ --runInBand).

Running the test with 1 million iterations yields a heap size of 2.6 GB. Beware that testing that takes a while (276 seconds).

Edit: Okay, this particular kind of leak seems to happen without --runInBand too.

Having the same issue with ts-jest; the graceful-fs tip didn't work for me.

Is there any progress on this? I still encounter this problem even with the simplest test suites (with and without ts-jest).

@unional if you’re on Circle, make sure maxWorkers isn’t higher than the CPU allotted you by Circle.

EDIT: To be clear, you should proactively specify maxWorkers at or below the CPU allotted to you by Circle.
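
For example, on a 2-vCPU executor that would mean something like:

npx jest --maxWorkers=2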

From a preliminary run or two, it looks to me like going back to 16.10 is resolving these errors for us as well. Is there any more clarity on why this is, or what a real fix might look like?

Wow - this one got us too … after scouring the internet and finding this … reverting to 16.10 fixed our build too (in gitlab, docker image extending node:16 changed to node:16.10). Here’s hoping there’s a longer-term solution, but many thanks for the suggestion!

Bingo. This seems to be the fix, along with making sure to mock any external dependency that may not be your DB. In my case I was using a stats lib and Bugsnag. When using createMockFromModule it seems to actually run the file regardless, so I ended up just mocking both, along with running: NODE_OPTIONS=--max-old-space-size=6144 NODE_ENV=test && node --expose-gc ./node_modules/.bin/jest -i --detectOpenHandles --logHeapUsage --no-cache

@fazouane-marouane thanks so much… this one comment has legit saved the day.

For the record I use ts-jest. Memory Leak is gone!

a few colleagues who are on a mac don’t seem to be able to replicate this bug. could it be linux specific?

I noticed a significant difference in heap size on OSX vs the Docker Node image after exposing the node GC. While the heap stayed around ~400MB on OSX, it still climbed to 1300MB in the Docker container. Without exposing the GC, the difference is negligible. So there might be some difference in how the GC works on different platforms.

a few colleagues who are on a mac don’t seem to be able to replicate this bug. could it be linux specific?

Definitely not, I’m also on OSX, and it happens left and right.

This should help: https://github.com/facebook/jest/pull/8282

Will be released soon.

My tests are also leaking massively on CI but the exact same setup locally doesn’t really leak (much at least).

It’s so bad, I’m considering disabling tests on CI until I can make sense of what the difference is beside the OS. ):

Bueller?

I found this out recently: you can use Chrome DevTools to debug Node scripts! You can try profiling Jest while it's running to dig into the issue.

I believe the command is: node --inspect ./node_modules/.bin/jest --watch -i. When running, open Chrome and go to about:inspect. You should then see the running Node script.
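
If you need the debugger attached before any test code runs, node also accepts --inspect-brk, which pauses on the first line until a debugger connects.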

Is there any more clarity on why this is, or what a real fix might look like?

All the info on the regression that specifically affects node >= 16.11 is found in this issue: https://github.com/facebook/jest/issues/11956

Same issue for me. Downgrading to node 16.10 fixed the memory leak with Jest. I was seeing 3.4GB heap sizes with node 16.14, down to ~400MB with node 16.10.

Same issue here. For me, it seems that the problem is in the setupFilesAfterEnv script.

It’s better with node 16.10, but it still arrives at 841 MB heap size (580 tests)

I use Jest for integration testing, it can be complicated to find the source of the memory leak (maybe related to a graceful teardown problem in my test suite, not a Jest issue).

I use this workaround to avoid OOM using matrix on Github actions.

name: Backend

on:
  pull_request:
    branches:
      - master
    types: [opened, synchronize, reopened, unlabeled]

jobs:
  buildAndLint:
    timeout-minutes: 10
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [16.x]

    steps:
      - uses: actions/checkout@v2
      - run: yarn build && yarn lint
  Test:
    needs: [buildAndLint]
    timeout-minutes: 30
    runs-on: ubuntu-latest

    strategy:
      matrix:
        node-version: [16.x]
        # Each app folder test will be run in parallel using the matrix
        folder: [adapters, auth, cloud, customSchema, miscTest, schemas, utils]

    steps:
      - uses: actions/checkout@v2
      # Improve this to use github artifact
      - run: yarn build:fast
      # Jest will only run test from the folder
      - run: yarn test ${{ matrix.folder }}
        env:
          NODE_ENV: TEST

This script could be improved by uploading each LCOV result to a final job and then merging all coverage results into one using nyc merge; see: https://stackoverflow.com/questions/62560224/jest-how-to-merge-coverage-reports-from-different-jest-test-runs

You need nock.restore() in afterAll so that nock removes itself from node:http; otherwise it will activate again and again,

eg: nock > nock > nock > nock > node:http

https://github.com/renovatebot/renovate/blob/394f0bb7416ff6031bf7eb14498a85f00a6305df/test/http-mock.ts#L106-L121

 // nock is used directly from the import; no local scope is declared
  import nock from 'nock';
  
  beforeEach(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });

  afterEach(() => {
    // More unrelated cleanup stuff
  });

  afterAll(() => {
    nock.cleanAll();
    nock.restore();
    // More unrelated cleanup stuff
  });

We at renovate solved the major issues by correctly disabling nock after each test and running jest with node --expose-gc node_modules/jest/bin/jest.js --logHeapUsage

I went through the pull requests in your repo, but can you give some more information on this? I have been having issues with large heap memory and flaky tests in CI only, and I use nock. Any help is appreciated.

This is currently the way I manage nock in tests:

  // nock is used directly from the import; no local scope is declared
  import nock from 'nock';
  
  beforeEach(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });

  afterEach(() => {
    nock.restore();
    nock.activate();
    // More unrelated cleanup stuff
  });

  afterAll(() => {
    nock.cleanAll();
    // More unrelated cleanup stuff
  });

I have a very similar report: https://github.com/jakutis/ava-vs-jest/blob/master/issue/README.md

TLDR: jest uses at least 2 times more memory than ava for same tests (jsdom/node)

I have Jest 24.8.0 and #8282 doesn’t seem to help. Also --runInBand only helps a bit (4 GB instead of 10 GB 😮).

Pleaaaaaaase fix this …

How soon? )':