jest: Jest 24 is slower than Jest 23.6.0 on the same test suite
🐛 Bug Report
Filing a bug per #7732 (https://github.com/facebook/jest/issues/7732#issuecomment-460676324). Jest v24 takes 38 seconds vs 28 seconds for v23 to run 50 suites (296 tests).
To Reproduce
Steps to reproduce the behavior: Jest v23 setup: .babelrc
{
"presets": ["es2015", "react", "stage-3", "env"]
}
package.json:
...
"babel-core": "6.26.0",
"babel-jest": "22.4.1",
"babel-loader": "7.1.2",
"babel-preset-env": "1.6.1",
"babel-preset-es2015": "6.24.1",
"babel-preset-react": "6.24.1",
"babel-preset-stage-3": "6.24.1",
"jest": "23.6.0",
...
Jest v24 setup: .babelrc
{
"presets": ["@babel/preset-env", "@babel/preset-react"],
"plugins": [
"@babel/plugin-syntax-dynamic-import",
"@babel/plugin-syntax-import-meta",
["@babel/plugin-proposal-class-properties", { "loose": false },
"@babel/plugin-proposal-json-strings"
]
}
package.json:
...
"@babel/core": "7.2.2",
"@babel/plugin-proposal-class-properties": "7.3.0",
"@babel/plugin-proposal-json-strings": "7.2.0",
"@babel/plugin-syntax-dynamic-import": "7.2.0",
"@babel/plugin-syntax-import-meta": "7.2.0",
"@babel/preset-env": "7.3.1",
"@babel/preset-react": "7.0.0",
"jest": "24.1.0",
...
The Jest config is unchanged between v23 and v24:
module.exports = {
'verbose': false,
'coverageThreshold': {
'global': {
'branches': 40,
'functions': 45,
'lines': 50,
'statements': 50
}
},
'projects': [
'<rootDir>/src/test/js/reactapp'
],
'moduleDirectories': [
'src',
'node_modules'
],
'testURL': 'http://mockserver:9999',
'collectCoverage': true,
'coverageDirectory': '<rootDir>/build/coverage/',
'coverageReporters': [
'json',
'html'
]
};
Expected behavior
Jest v24 running performance matches v23.
Link to repl or repo (highly encouraged)
Not available
Run npx envinfo --preset jest
Paste the results here:
System:
OS: Windows 7
CPU: (4) x64 Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
Binaries:
Node: 8.12.0 - C:\Program Files\nodejs\node.EXE
npm: 6.4.1 - C:\Program Files\nodejs\npm.CMD
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 49
- Comments: 139 (43 by maintainers)
Commits related to this issue
- (rev) Revert "Bumping Jest and Babel versions" due to perf regression (reverts d4d3a08daf612fa1c1f00540f2c8a667f92b12c3, see facebook/jest#7811) — committed to ELLIOTTCABLE/excmd.js by ELLIOTTCABLE 5 years ago
All I want for Christmas is you… Jest 25 ❤️
Any update on this by any chance?
@dpinol curious which version you ended up installing that worked? The version on the npm registry tagged as `25.0.0` appears to point to https://github.com/facebook/jest/commit/ff9269be05fd8316e95232198fce3463bf2f270e, which is before the perf fix landed. Indeed, looking at the code on unpkg for 25.0.0 (which is what you get when installing `jest@next`), the fix isn't applied: https://unpkg.com/browse/@jest/transform@25.0.0/build/ScriptTransformer.js

@SimenB is there any version of Jest (either v25+ or a patched v24) available on the registry currently which includes this fix? While using something like patch-package is possible, I'd prefer not to fork or build a custom version given that the fix already exists in master. If not, perhaps the simplest option from your end would be to update the `jest@next` tag to point to a new v25.0.1 or similar, so that developers willing to opt in to an unstable build could at least do that?

When is this going to be in a release?
If Jest 25 is delayed for some reason, would it be possible to release a patch for 24.9 with the fix? @SimenB
Last I heard it was being tested at FB on Friday.
Hi! I don't want to add to the noise of this issue, so I'll keep it simple and add one more open-source reproduction:

Project with a large code size (in particular, a single roughly 2-megabyte generated lexer), where the single change of upgrading Jest v23.0.x to Jest v24.0.x caused test times to jump from ~7 seconds (~5s per test file) to ~147 seconds (~70s for each individual test file). Yes, a regression of 2,000%. 🤯

The project is ELLIOTTCABLE/excmd.js; commit ELLIOTTCABLE/excmd.js@f0982114 used Jest 23, and consistently reports <10s runtimes on `jest --clearCache && time jest`; the next commit, ELLIOTTCABLE/excmd.js@d4d3a08d, updates Jest to v24 and shows consistent cache-cleared timings of ≥100s.

Again, so it doesn't slip by: I strongly suspect the performance regression has something to do with filesystem copying or module loading, because the thing that makes my project an outlier is definitely the loading of a single, hairy, two-megabyte `.js` file. (An unrelated bug.)

I am also not convinced it's related to Babel; Babel happily deoptimizes the styling of that file and continues on its merry way, showing a wall-clock build time of 4.48s.

(Because I'm sure this issue is a stressful one (lots of complaining users, difficult to reproduce, and performance-related and thus difficult to debug…), I'd like to add: we love Jest, and your hard maintenance work, even the triage where you say "no", is greatly appreciated. ❤️)
Any progress on releasing this?
A new release of 25 will come next week. You can build from master now and test it out if you want. After the release, if we still have regressions vs Jest 24, please open up a new issue. This issue has more than 100 comments, so it's getting a bit unwieldy… Things we've not yet tried that have been mentioned in this issue are bundling up our modules to hit the FS less, and looking into what we do with stack traces.
Thank you to everyone who helped track this down!
@SimenB you said "A new release of 25 will come next week." Any news about the release? Thanks!
Thank you! I'll dig into this later today. To verify my assumptions going in: that's fast on 23, upgrading to 24 or 25 is slow, but applying the patch makes those comparable to 23 again?
What I'm seeing (over 10 runs, with a warmup run for the cache):
I can also reproduce the hit vs miss in the caching. Thanks for the great repro @nol13! I'll land the patch later today.
On topic: of the things mentioned in this issue that help, we've landed 2: we've dropped Node 6 and upgraded micromatch in the `next` branch. These changes reduced test run time by about 10% for Jest's own tests (more on Windows). They also reduced memory usage by a fair bit. We'll try to get an alpha release out soon so people can test it, but you can also check out that branch locally, and build and link Jest in your project to see how it goes. Instructions are in CONTRIBUTING.md in the root of this repo. It should also be easier to profile Jest now, as we no longer transpile `async-await`, which makes the function calls much easier to follow.

I'd like to look into why `pretty-format` (and other modules within the sandbox) is so heavy to require; its dependency tree isn't that big. No promises on when, though!

Off topic: yes, that's what I was suggesting. Why don't you want to help us figure out if those changes help?
As @csvan points out, the `name` thing has been fixed since 24.1.0 (the PR was linked to). If it makes a difference adding `name` or not for releases after that, please open up a new issue, as that's an optimization you shouldn't have to make.

Jest always uses a cache. Passing in `--no-cache` just deletes the previous cache; Jest will still cache things within that run. We cache transforms and test results (so we know how long they took, and can prioritize when to run which tests).

Like @csvan says, nobody is actually paid to work on Jest, so any time spent is either spare time or time spent at day jobs instead of doing what we're paid to do. Sometimes there's an overlap between what my day job needs and things Jest can do better (we use Jest at work), but that's not common. So when I do spend time on it, I prefer to work on things that I find enjoyable, which digging for performance regressions is not.

I can only speak for myself of course, but if the choice is between digging into performance regressions and doing nothing, I'll choose to do nothing. Life is too short to spend spare time on things you don't enjoy.

However, this project is open source, so anyone is more than welcome to dig into it and try to help the entire community by contributing fixes, like @vitalibozhko and others have done. If it's an issue for you at work, maybe you can spend some time while at work trying to fix it?
@ronayumik Ever since Jest 22.4.4, as new versions of Jest are released and I'm interested in upgrading, I compare how long it takes to run our tests (412 test suites, 2015 tests).
The following gist includes several benchmarks using Jest 22, 23, and 24.
https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706
So far, every new version has been significantly slower.
Fully support @tscislo about the performance issue comparing Jest 23 and Jest 24. @SimenB, sorry for my directness, but every time somebody writes a comment here about a performance regression, you make a surprised face, then comes a question about open source or not, and then silence. If you missed my previous message: the micromatch 4.0.2 yarn resolution doesn't help.
I have hopefully spotted some places that slow Jest 24 down compared to Jest 22; here's the link if someone is willing to try it out: https://github.com/vitalibozhko/jest/tree/24.8.0-perf-spots (a couple of tweaks and some minor features disabled just to test performance, not a real fix). I get nearly the same time on this branch as on Jest 22 with our private project setup.
To try this out, just follow https://github.com/facebook/jest/blob/master/CONTRIBUTING.md#how-to-try-a-development-build-of-jest-in-another-project. I'd also suggest running Jest either via package.json scripts or via `./node_modules/.bin/jest` to make sure the linked version is utilized (I personally use the latter). One way to verify the link is properly set is to run `./node_modules/.bin/jest --version` in a terminal; it should print `24.8.0-dev`.
We've recently switched from Babel 6/Jest 23 to Babel 7/Jest 24 and have noticed similarly stark speed regressions (from ~150s to ~200s with a clean cache, and ~100s to ~160s cached). It doesn't appear to be Babel 7 related, since running the Babel compilation by itself is actually faster now.
Woah, 17s vs 113s, that's huge! Any possibility of sharing that project?
I think we need to start really considering adding performance monitoring to Jest. @aaronabramov talked about this in #4471. Node comes with essentially the same performance API the browser does (available since Node 8): https://nodejs.org/api/perf_hooks.html; maybe we can use that to measure? Having some way of emitting how long different things take should probably be a first step (so #4471, but that only talks about what happens inside of tests).
Would it be possible to release a patch to version 24 while we wait for the version 25 release?
I dug into the repo provided by @ELLIOTTCABLE in https://github.com/facebook/jest/issues/7811#issuecomment-501876636, and the commit that made the test run go from 9s to 150s on my machine was #7170. However, if I remove `collectCoverage` from the config, 24.8.0 also completes in 9 seconds. So the change in that case is that we actually collect coverage from the files specified, which is expensive (especially for large files). I've got a branch switching to use V8 coverage (#8596) which should speed it up some, but it will always be expensive.

Quick test, with the bugfix that slowed it down; otherwise, all code in Jest is identical:
Thanks for the detailed information @lulubo! I think after we get 24 back to where 23 was, we can look at getting back to where 22 was as well. Even if 24 vs 23 did not make a big difference for you and your team, from this issue it seems like it's an issue for lots of people, so I think we should focus on that first.
That's still pretty bad. I still feel like something must be wrong with jest-runtime, but I don't know what.
For people who want to use `patch-package` today, the following steps should work 🤞

Steps:
1. Get `patch-package` into your `package.json` [doc]: `$ yarn add patch-package postinstall-postinstall` or `$ npm i patch-package`
2. Go to the root of your project and open the file `node_modules/@jest/transform/build/ScriptTransformer.js` in your favorite editor
3. Update the code according to the diff (jest 24.9: https://github.com/facebook/jest/issues/7811#issuecomment-527347645, jest 25: https://github.com/facebook/jest/issues/7811#issuecomment-527107215)
4. Run `yarn patch-package @jest/transform` or `npx patch-package @jest/transform`. If it works, you should see `Created file patches/@jest+transform+24.9.0.patch`
5. Test the patch with `$ rm -rf node_modules && yarn install`; `patch-package` should apply the diff for you automatically
6. Make sure to commit `patches/@jest+transform+24.9.0.patch`, so everyone gets the patch, including the CI/CD pipeline!

@SimenB your revert applied to 24.9.0 gets us back down to ~35s! So roughly what we're seeing in 23.6.0.
We've released `jest@next` from the (now merged) `next` branch, if anyone wants to test it out.

@deleteme what I did was delete Jest from the repo I was checking, and just run `../jest/jest`; no linking needed. I also ran `yarn && yarn add -D -W @babel/core babel-core@bridge --ignore-scripts` in order to avoid the babel-core version mismatch issue.

I'm not sure if this should be filed as a separate issue at this point, but sadly, 25.1.0 doesn't seem to have resolved the slowdown for us. We skipped version 24 because of the slowdowns we were seeing from 23.4.2. For the whole application (130 test files, 1104 tests), tests went from taking 17-18 seconds in v23 to 23-24 seconds in v25. Running the slowest test file in our application on its own went from taking 6 seconds in v23 to 8 seconds in v25.
On Windows with Node 10 and a warm cache: 85s without the patch, 56s with the patch.
OK, seems to be… https://github.com/facebook/jest/commit/8a7b4a9df0759142d148cf2f69ec3f23457c542e?
The difference between 22 and 23 wasn't a deal breaker for us, so bisecting between 24 and 23.6:
Tested 24.8 vs @next on our test suite. Seems to be a slight improvement, especially on cold cache:
Jest 24.8, cold cache:
Test Suites: 3 skipped, 112 passed, 112 of 115 total Tests: 5 skipped, 494 passed, 499 total Snapshots: 5 passed, 5 total Time: 26.764s Ran all test suites. ✨ Done in 27.88s.
Jest 24.8, warm cache:
Test Suites: 3 skipped, 112 passed, 112 of 115 total Tests: 5 skipped, 494 passed, 499 total Snapshots: 5 passed, 5 total Time: 20.792s, estimated 24s Ran all test suites. ✨ Done in 21.35s.
Jest 25.0 (@next), cold cache:
Test Suites: 3 skipped, 112 passed, 112 of 115 total Tests: 5 skipped, 494 passed, 499 total Snapshots: 5 passed, 5 total Time: 24.008s Ran all test suites. ✨ Done in 24.78s.
Jest 25.0 (@next), warm cache:
Test Suites: 3 skipped, 112 passed, 112 of 115 total Tests: 5 skipped, 494 passed, 499 total Snapshots: 5 passed, 5 total Time: 20.267s, estimated 22s Ran all test suites. ✨ Done in 20.86s.
That's a horrendous regression! Is your project open source?
Have you tried the branch from https://github.com/facebook/jest/issues/7811#issuecomment-516718977?
Another thing that will probably help is to update our code to target Node 8; that will stop transpiling `async-await`. You can do that by changing this line: https://github.com/facebook/jest/blob/e76c7daa33a7665396679527bba7d0fdfeb2591d/babel.config.js#L30

@SimenB That looks correct.
I'd say the most noticeable impact comes from requiring "heavy" modules like pretty-format, even though those are only needed under certain circumstances (e.g. a spec uses snapshots, or a failed spec needs to format the error). The changes made were in the most obvious places from the profile flame chart. In terms of CPU load there is one obvious thing: `getAllCoverageInfo` using `deepCyclicCopyObject` (7% of total time), replaced with `JSON.parse(JSON.stringify())`; that fits just fine for coverage cloning. In the case of my project, it reduces total cloning time from 2.3s to 0.4s.

Putting the stack collection back added 1-2s (15.5 > 16.5-17.x) to the total time on a hot cache with in-band execution. Test Suites: 66 passed, 66 total Tests: 16 skipped, 441 passed, 457 total Snapshots: 9 passed, 9 total Time: 17.398s

It might be worth trying to measure the retained memory without stack evaluation in place; my guess is that it would only become an issue in watch mode.

As mentioned above, it's not the execution time that makes the difference, but the time to require a module.
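The JSON round-trip swap mentioned above can be sketched as follows (the function name here is made up; the point is the mechanism). It only fits data that is already JSON-safe, which a coverage map is:

```javascript
// Sketch of cloning via JSON round-trip instead of a generic deep-cyclic
// copy. This is typically much cheaper for plain, acyclic data, but it
// drops functions and undefined values and throws on circular
// references, so it is only valid for JSON-safe structures.
function cloneCoverageData(coverage) {
  return JSON.parse(JSON.stringify(coverage));
}

const original = { 'src/a.js': { s: { 0: 1, 1: 0 } } };
const copy = cloneCoverageData(original);
copy['src/a.js'].s[0] = 99;
console.log(original['src/a.js'].s[0]); // still 1: the clone is independent
```

The trade-off is exactly why a generic `deepCyclicCopyObject` exists: it handles cycles and non-JSON values, which coverage data doesn't need.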
We ended up migrating to Mocha (plus Chai and Sinon), which brought a significant improvement in testing time.
Hi there, I would like to share my experience with Jest. I like Jest, but the problem I'm having is that it takes a lot of time to run 6 tests: 13.4 seconds.
Also, in the package.json:

jest.config.js (I'm using it with TypeScript):

In my `test` folder, I have a file with 6 tests.

Then I tried to use Mocha. I put mocha.opts inside the `test` folder, and in the package.json:

I ran the same tests (only modifying the assertions to use sinon-chai), and it ran amazingly fast (about 1-2 seconds). Something is not good, and I don't know what the problem is.
Jest v24:
Jest v23:
I was just about to post that a stable Jest 25 was just released, and you've already tested it!
Fantastic numbers, thanks for sharing @csvan!
Hopefully today. We just need to make sure it works for FB (the best existing test that nothing is broken).
You can use `patch-package` in the meantime. A diff:

Pretty low, but we'll try to get a stable 25 out. It's currently a bit delayed due to issues rolling it out at FB (something with how they fake promises is broken). And making this part of 25 is pretty doable.
However, as mentioned, I was unable to get a difference in cache hits and misses when testing it out. I'd like to write a test for this when landing it. @nol13 can you share your Jest config? What would be perfect would be a small repository we could pull down and see it logging cache misses, but I very much understand if that's a bit much to ask!
@SimenB I'm happy to help. I want to try again with the `yarn add -D -W @babel/core babel-core@bridge --ignore-scripts` trick.

Sorry, I couldn't tell.
@mucsi96 The contributing doc was helpful. I'm using hyperfine and a fish shell function to start the benchmarks, and a spreadsheet to record measurements.
That's great, thanks! Matches our results.
Thought I'd pop in with a report on my Jest performance adventures over the past month.

837 suites, 16,259 tests, Node v10, TypeScript back end, universal app with Vue + Nuxt

…and of course, a private repo.

Initially, these tests took… 4m30s with a cold cache or 3m20s with a hot cache, on both bare Windows and WSL. Both cases were about 30s faster on macOS or Ubuntu.

I know this is just anecdotal, but here's the stuff I tried…
- `runInBand` severely worsened the performance (this workstation has a Xeon with 20 virtual cores)
- `maxWorkers=4` was the least-bad option.
- Dropping `jsdom` wasn't initially an option, since the code under test is a universal app, but see below.

Ultimately, here are the things that helped my team:
- Switching `jest-environment-jsdom-global` to `jest-environment-node`. This shaved off another ~25% or so. (I also tried the `node` env, but it gave us weird event-loop stuff I didn't have time to explore, and there was no discernible performance benefit over `jest-environment-node`.)

Hope this helps someone. Bottom line, though: rolling back to Jest v22 was the biggest single gain. FWIW my colleagues were all experiencing the issues, and we run every OS with a variety of hardware configurations.
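The worker-count and environment tweaks reported above might land in a jest.config.js along these lines; a sketch only, with values from one team's workload rather than general recommendations:

```javascript
// Illustrative jest.config.js; both values are workload-dependent.
const config = {
  // On a 20-virtual-core Xeon, capping workers below the core count was
  // reportedly the least-bad option; tune for your own machine.
  maxWorkers: 4,
  // A node-based environment reportedly shaved off ~25% when the code
  // under test can run without a DOM.
  testEnvironment: 'node',
};

module.exports = config;
```

Per-machine tuning matters here: `maxWorkers` can also be passed on the command line to compare values without editing the config.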
Bringing this up again: I suspect that Node's regular `require` has lots of optimizations that we miss out on, because `require` really comes from `jest-runtime`. This could perhaps be verified by looking at how long module trees take to load in regular Node vs the Jest runtime.

We can definitely try out bundling our modules and see if it makes a difference! At least the ones that are required inside the sandbox. We currently bundle for the browser, but that includes dependencies; here we'd keep them external, I think.
Not trying to be unappreciative (thanks to all contributors!), but just saying: if anyone is working on this, the performance regressions should probably be a priority over adding any new features. I imagine lots of others will also be staying on 22 or 23 for the time being.

Are the majority (any?) of codebases out there not running into this issue? Do most people just not care?
@dmitriivikorchuk
Well, having access to a non-trivial project where the issue is readily reproducible sure does help in finding out why it happens. But what do I know.

As for the "silence", I'm pretty sure the team is not hired to work on Jest and is just as busy with day-to-day jobs as you and me. It's an open-source project; nothing stops you from trying to figure the problem out yourself.
We use Jest to test our Angular 8 application with babel-jest and ts-jest. We used to run our 200 tests in less than 5 minutes on Jest 23, and now with 24.9 it is over 20 minutes, which is completely unacceptable. I'm using Node 10. The time is greater both for a cold start and with the cache.

Is there any workaround for now? Is anyone actively working on solving this issue? @SimenB @thymikee? It is becoming more and more obvious that this performance issue is a show stopper for many users.
Opting in sounds like a good idea; that way I can run Jest fast on CI and more verbosely locally when I really want to debug something.
Interesting, would you be able to post a reproduction in a new issue? This issue in particular was about a specific bug introduced in Jest 24 which has been fixed in #8890
Ran 24.9 vs 25.1 on a local test suite. Amazing work @SimenB and friends!
Jest 25.1
Cold:
Test Suites: 3 skipped, 218 passed, 218 of 221 total Tests: 12 skipped, 962 passed, 974 total Snapshots: 17 passed, 17 total Time: 46.692s Ran all test suites. ✨ Done in 47.55s.
Warm:
Test Suites: 3 skipped, 218 passed, 218 of 221 total Tests: 12 skipped, 962 passed, 974 total Snapshots: 17 passed, 17 total Time: 32.527s, estimated 44s Ran all test suites. ✨ Done in 33.65s.
Jest 24.9
Cold:
Test Suites: 3 skipped, 218 passed, 218 of 221 total Tests: 12 skipped, 962 passed, 974 total Snapshots: 17 passed, 17 total Time: 68.89s Ran all test suites. ✨ Done in 70.14s.
Warm:
Test Suites: 3 skipped, 218 passed, 218 of 221 total Tests: 12 skipped, 962 passed, 974 total Snapshots: 17 passed, 17 total Time: 57.806s, estimated 65s Ran all test suites. ✨ Done in 58.92s.
PR: #8890
Yeah, it's a super stripped-down repo, but the patch applied to 24.9.0 still takes it from 4.8s to 4.2s consistently and makes the cache hit.
https://github.com/nol13/jest-test-repo
^ Here's a minimal example. Without the patch, this hits the cache with `runInBand` but misses without it.
I think this is a valid revert. All tests pass except for the ones added in the linked commit. But if it doesn't make any difference, it's not worth trying for you.
Do you get any difference before and after that commit, btw? And of course, while it might not help in your case, it might very well work in other cases
Ergh yes, good point.
Keeping the logging you already do is probably a nice idea as well in case this makes no difference
@SimenB node-gyp environment issues have delayed me from running `jest@next`, but I wanted to post some other results.
Windows 10, Node v10.14.2, project with TypeScript
Test Suites: 1492 total Tests: 9668 total
I pieced together some other config changes based on discussions in this thread. Primarily: moving `ts` and `tsx` to the front of the list.

Jest 24.8 baseline, cold cache: 629 seconds
Jest 24.8 baseline, warm cache: 296 seconds
Jest 24.8 w/config changes, cold cache: 445 seconds
Jest 24.8 w/config changes, warm cache: 209 seconds
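For illustration, assuming the list in question is Jest's `moduleFileExtensions` (the comment doesn't name it), the reordering would look like this. Jest tries extensions left to right when resolving extensionless imports, so a mostly-TypeScript codebase saves a failed lookup per import by listing `ts`/`tsx` first:

```javascript
// Hypothetical jest.config.js fragment: TypeScript extensions first.
const config = {
  moduleFileExtensions: ['ts', 'tsx', 'js', 'jsx', 'json'],
};

module.exports = config;
```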
Compared to jest 23.6, I came to the following conclusions:
Once I fix my environment issues, I'll update with that result too.
@SimenB The test suite belongs to a private git repo.
I had difficulty using `yarn link` for `jest-cli` because of a Babel 6 vs 7 incompatibility issue. So instead, I attempted a git bisect of the Jest repo by doing `yarn add /path/to/jest/packages/jest` between the v22.4.4 and v23.6.0 git tags.

I suspect that because I wasn't using `yarn link`, the technique of using `yarn add /path/to/jest/packages/jest` was not testing the actual changes of each commit, but whatever was the released version at that commit.

Anyhow, the result of this bisect (flawed as it was) was that 614f739378d4f1757d8dc99219309553db5b9403 was good, and 7922488d676aa7bc5a6334699df220c7b001e299 was bad.

614f739378d4f1757d8dc99219309553db5b9403 had about the same median test duration (9.36s). 7922488d676aa7bc5a6334699df220c7b001e299 had a significantly slower median test duration (15.47s).
I will try again to get `yarn link` working for `jest-cli`.

Whoops, sorry, I forgot to mention that part. Our performance tests with v23 were only marginally better than v24. I'm sorry I can't recall the exact numbers or percentages, but v22 was where we saw the significant gains.
Yeah, that's a great point. We should definitely try to figure out if our `require` algorithm is missing any optimizations we could apply; whether it's constructing the `module` object itself, the resolver, or something else.

The `require` function is created here, if anyone wants to dig: https://github.com/facebook/jest/blob/d523fa84f2a65adf4d18019173dd44600c57bcad/packages/jest-runtime/src/index.ts#L884-L922

@DavyJohnes early after 24 was released, there was a cache-busting bug which required users to set `name` manually in their config to make sure the cache worked as intended. To my knowledge that has been patched for some time now, see https://github.com/facebook/jest/pull/7746

I ran a test of 24.9 vs 24.8 on one of our suites. No improvements, unfortunately.
24.8, cold cache:
24.8, warm cache:
24.9, cold cache:
24.9, warm cache:
That's great @vitalibozhko! Looking at the diff, I see 3 main changes. Does that seem right? If yes, which parts had the most impact? Some comments on the changes: `Error`s are thrown and that code path is not activated.

I'm going to look at the source code and do a few diffs to see what I can find. I may be able to at least get started with some help. If anyone has already started, please ping or reply in this thread. Much love.
Will try the latest version (24.8.0) shortly and report back.

EDIT: just as slow on 24.8.0. Anyone have a workaround? We have ~5500 tests and literally can't run them on the latest version (it dies in CI due to timeouts, or takes an eternity (20+ minutes) vs the 3 minutes it used to take).
@thymikee I've updated my benchmark gist to include a comparison of jest @ 24.8.0 with and without the micromatch 4.0.2 yarn resolution.
https://gist.github.com/deleteme/25fcca7cd16aceb30cc5f50e022aa706#benchmark-jest-2244-2480-with-micromatch-yarn-resolution-2480-without-micromatch-yarn-resolution
The fastest Jest release has been `22.4.4`. All newer versions have been much slower.

@thymikee can we remove the Windows tag here? This is definitely present on macOS and Ubuntu as well.
It would be great if you could attach a repro that one can check out and analyze.