jest: Jest performance is at best 2x slower than Jasmine, in our case 7x slower
🐛 Bug Report
We've been using Jest alongside Jasmine for the same test suite for about a year now. We love Jest because its developer experience is superb; however, on our very large monorepo with ~7000+ test specs, Jest runs about 7 times slower than Jasmine. This problem has been getting worse as the test suite grows, and as a result we always run our test suite via Jasmine and only use Jest for development --watch mode.
We would love to use Jest as our only test runner, but its poor performance is preventing us from doing so. Having to run both Jest and Jasmine runners requires painful CI setup and constant upkeep of the Jasmine environment setup (which is much more complex than Jest's).
I'd like to better understand why the performance difference is so significant and whether there's anything that can be done to optimize it.
To Reproduce
I've created a very detailed project to reproduce and profile both Jest and Jasmine on the same test suite: https://github.com/EvHaus/jest-vs-jasmine
The environment is the same. The configurations are very similar. Both use JSDom. Both use the same Babel setup. Additional instructions are contained therein.
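For reference, a minimal sketch of the kind of shared setup being compared; the real configs live in the linked repo, so the exact values here are assumptions:

```js
// jest.config.js - illustrative only; both runners use a JSDom environment and the same Babel setup
module.exports = {
  testEnvironment: 'jsdom',
  transform: {
    '^.+\\.jsx?$': 'babel-jest',
  },
};
```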
Expected behavior
Running tests through Jest should ideally be as fast as running them through Jasmine.
Link to repl or repo (highly encouraged)
https://github.com/EvHaus/jest-vs-jasmine
Run npx envinfo --preset jest
Tested on a few different platforms. See the https://github.com/EvHaus/jest-vs-jasmine README for more info.
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 176
- Comments: 46 (14 by maintainers)
Same issue here on 25.2.2, file resolution takes too long. Is there any plan to speed it up?
Just updated my benchmarks with a new player in town: Vitest. I have good news. Its API is compatible with Jest's, but in my benchmarks it ran 2x faster than Jest and even outperformed Jasmine.
I'm going to try migrating a larger real-world codebase to it early in the new year and report back on the experience for those curious.
I was trying to migrate from Mocha to Jest… and… Mocha finishes all its tests before Jest starts the first one. I think there is an issue somewhere with resolving/reading files -> my project contains ~70k files, and I'm running ~19k tests.
After some digging, it looks like Jest tries to import all files from all folders before it starts the tests. I'm providing an explicit match for the test file:
testMatch: ['<rootDir>/dist/alignment.spec.js']
I was able to run the tests by adding that to jest.config, but it's still 11m… as opposed to Mocha's ~1m, and ~40-50s without a test framework (try/catch assert).
Turning off transformation helped too.
So far my configuration looks like this:
It's still slow, ~4min.
Now I'm looking for a way to turn off Prettier, I don't care about formatting errors…
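A rough sketch of the kind of config these tweaks add up to (the commenter's actual file isn't shown above, so the values here are assumptions based on the description):

```js
// jest.config.js - sketch only, reconstructing the tweaks described above
module.exports = {
  // point Jest at one explicit, pre-built spec file instead of letting it scan ~70k files
  testMatch: ['<rootDir>/dist/alignment.spec.js'],
  // the specs are already compiled, so disable transformation entirely
  transform: {},
};
```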
To "fix" the imports overhead, I've written a custom test runner. It uses POSIX `fork()` to clone processes (more docs are in the link). In our case, we reduced our test run from 18 to 4.5 minutes, of which 1 minute is warmup and could be sped up by moving to swc/esbuild. https://github.com/goloveychuk/fastest-jest-runner
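If anyone wants to try that runner: third-party runners are normally wired in through Jest's `runner` config option. The package name below is a guess based on the repo name, so check the project's README for the real one.

```js
// jest.config.js - assuming the project publishes a runner under this name (unverified)
module.exports = {
  runner: 'fastest-jest-runner',
};
```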
Am I right in saying the problem is that Jasmine loads all specs into one process and runs them, whereas Jest creates a new mini-environment per test suite? We see exactly the same issue, and profiling seems to show a significant amount of time spent resolving files and parsing JavaScript - unfortunately the multi-core aspect can't make up for this. I have no idea why resolving is so slow. We made significant speed increases by making suites import as few files as possible, but we've hit a wall going further in that direction, as in many cases we want to test multiple components running together rather than mock every dependency. I planned to do some more profiling, and it would be great if anyone on the core Jest team could point to things they would like to see.
@goloveychuk Interesting idea, but your solution didn't seem to make a significant difference in my benchmark. I've added it to https://github.com/EvHaus/jest-vs-jasmine/.
- Native Jest
- Your approach
- Jasmine (for comparison)
I've updated my repo with the latest benchmarks, the latest version of Jest, the latest version of Node, and a more reproducible benchmarking tool (via `hyperfine`). Overall, I'm still seeing Jest performing at least 3x slower than Jasmine, so nothing has really changed since the original post.
FYI: I'm not complaining. Just want to ensure those subscribed to the thread know that no significant advancements have been made here yet in the latest versions.
Hey folks, I've done an investigation run on my own with a no-op test and a lot of imports (`requireModuleOrMock()` is being called ~12500 times!). Most of my files in this test are TypeScript. Ignoring `jest` init time (by strictly measuring the 2nd test of a `jest --watch ...`), this no-op test takes ~1.5s. Here's what I'm seeing that's causing that:
- ~625ms is spent doing `getModuleID()`, which does some expensive FS work to find the absolute location of the module - iteratively check dirs for `package.json`, check if there are aliasing properties in it (`resolveExports`), then find the actual module itself, and do some `resolve` calls as well. Since `getModuleID()` is called once per module (~26000 calls), these FS operations add up.
- ~450ms is spent in `_execModule()` (excluding the actual invocation of the module, of course).
- ~60ms is spent in `transformFile`: read the file, hash it, check if the hash matches the local cache.
- ~350ms is spent in `createScriptFromCode`: I think this is Node VM shenanigans requiring a bunch of work to happen on the script before it can be interpreted "for real".
- ~400ms is leftover, but I think that can be explained by interpretation time of the imported modules themselves - there may be other wins in here, but they're going to have diminishing returns.
So, a couple recommendations for things to look into next:
- Cache the results of `getModuleID` and use file watching to see if a file's been changed? Of course, if file watching isn't available (no `watchman`, etc.), then fall back to the current slow mode.
- Cache `package.json` aliasing, so we don't need to load and parse it from scratch every time. Of course, this would still need to be invalidated if the `package.json` is changed (hopefully file watching can help us with this).
- If we already learned that `package.json` doesn't exist at <subdirectory> in a previous import, perhaps we can avoid a `stat` when importing a future module. Maybe this is already happening and I missed it.
- Cache `createScriptFromCode` output between test invocations somehow? I wonder if it is Node-VM-specific (so, each time we create a new sandbox, we need to `createScriptFromCode` again).
- Jest uses `watchman`, even if we aren't in `--watch` mode. Maybe this can be skipped?
- Reduce the `resolve()`/`realpath`/general FS operations in `getModuleID()`? Perhaps some of them are redundant.
- Use `mtime` to determine if a file is changed on re-runs. This sidesteps hashing all input files, which is a big win.
If I have time, I may be able to dig into some of these potential perf-gain options in the next few months, but no guarantees. I wanted to brain-dump here in case any other Jesters and Fools got inspired.
(Note that I do have some low-hanging-fruit PRs that I'll be upstreaming, but none of them address the remaining code hotspots mentioned above.)
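To make that last bullet concrete, here is a minimal sketch (not Jest's code; the function and cache names are made up) of an mtime-based change check that sidesteps hashing unchanged files:

```js
// Sketch only: skip re-hashing a file when its mtime hasn't changed since the last run.
const fs = require('fs');

const mtimeCache = new Map(); // absolute path -> last observed mtimeMs

function hasProbablyChanged(filePath) {
  const { mtimeMs } = fs.statSync(filePath);
  const previous = mtimeCache.get(filePath);
  mtimeCache.set(filePath, mtimeMs);
  // Unseen files and files with a newer mtime still need the full (hashing) path.
  return previous === undefined || previous !== mtimeMs;
}
```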
Things we've done to increase the performance of jest in our setup:
I think it's a fair assumption to say it's the module resolution that's taking time. While `require('foo');` is an in-memory cache lookup for jasmine (after the first one), every single test file in jest will have to do full resolution, and execution, of `foo` and all its dependencies. I doubt it's the resolution itself that takes significant time (we should have the fs in memory, after the first run at least), but executing the files probably takes up a significant chunk of time.

Another difference is that jest executes your code inside the jsdom vm, while with jasmine you've just copied over all the globals to the node runtime (https://github.com/jsdom/jsdom/wiki/Don't-stuff-jsdom-globals-onto-the-Node-global), which will always be quicker as you skip an entire abstraction layer (https://nodejs.org/api/vm.html).

That said, I agree it's really not ideal (to put it mildly) that Jest is about twice as slow as jasmine. I'm not really sure what we can do, though. We could try to cache the resolution (although we'd still have to run through the entire tree in case there's been any module mocking), which might allow us to not resolve modules by looking around, but again the FS should be in memory, so I doubt it'd have much impact.
@cpojer @mjesun @aaronabramov @rickhanlonii do you think there's anything clever we can do here? Or any awesome ways of profiling what we spend our time on?
Also, thank you so much for setting up a great reproduction case @EvHaus!
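For anyone unfamiliar with the abstraction layer mentioned above, this is roughly what running code through Node's `vm` module looks like; it's an illustration of the extra sandbox step, not Jest's actual implementation:

```js
// Every module run inside a vm context pays for script compilation plus the sandbox boundary,
// whereas a plain require() in the Jasmine setup executes directly in the main Node realm.
const vm = require('vm');

const sandbox = vm.createContext({ console });
const script = new vm.Script('console.log("running inside the sandbox")');
script.runInContext(sandbox);
```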
I have similar performance issues; our tests run at least 5x slower.
Mocha takes one second. Jest takes 12 seconds. So I removed Jest from my project.
I did some profiling of the node processes while running Jest on my projects, and it seemed like requiring was one of the most time-consuming tasks. At least that was the case on Windows (not WSL), which I found to be substantially slower than Linux, especially in watch mode. Granted, I'm not particularly confident in my understanding of the node profiler's output, but that's what it looked like. I saw the same thing with this reproduction.
Any news on this?
@EvHaus yeah, I think it won't make a difference in this benchmark. More info about my setup/project:
In this setup I get 531s by default, and 226s with the above optimisations. The most important trick is to clean memory before each test file, and it will only help if you're using many workers and your tests take a lot of memory. In my example (8 workers * 2GB memory each, with 16GB total), the system is going to swap. And you get this "effect" where Jest runs pretty fast at the start, and after several test files it slows down. If you have such symptoms, maybe a GC clean will help you.
So, answering your comment: those optimisations could help on real-world heavy projects, but they cannot make Jest as fast as Jasmine, since Jest has an expensive runtime (think of all those features/overhead: mocks, transformers, reporters, error formatting, test isolation, caching, etc.).
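Side note, not the commenter's approach: recent Jest versions also ship a built-in knob for the same swap/GC symptom. `workerIdleMemoryLimit` recycles a worker once it grows past a memory threshold; the values below are only examples.

```js
// jest.config.js - built-in options for limiting worker memory pressure (example values)
module.exports = {
  maxWorkers: 4,                 // fewer workers means less total memory pressure
  workerIdleMemoryLimit: '2GB',  // restart a worker once it exceeds this limit
};
```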
Also interesting: watch mode is three times slower than non-watch mode, even with the same number of workers (35s vs 11s).
Tracked it down to the passing of `rawModuleMap` in `_createParallelTestRun` of jest-runner; it seems like not passing the rawModuleMap is faster for some reason. Note that in my case, `test.context.moduleMap.getRawModuleMap()` always returns `{ duplicates: {}, map: {}, mocks: {} }`.
I was intrigued by the 2.5x speed increase mentioned from using a dot reporter, so I gave it a go.
Added `verbose: false` and `reporters: ['jest-dot-reporter']` to the config. On our giant main repo it only offered about a 15% performance improvement (260s instead of 300s to run all tests). That's small but something. And on the test repo it didn't seem to make any difference at all (probably because it doesn't have enough specs for the reporter change to make an impact).
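For reference, that change is just two lines of config (`jest-dot-reporter` is a third-party package you need to install separately):

```js
// jest.config.js - quieter reporter output, as described above
module.exports = {
  verbose: false,
  reporters: ['jest-dot-reporter'],
};
```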