jest: Jest without --runInBand slow under Travis
Do you want to request a feature or report a bug?

Bug

What is the current behavior?

Tests run fine locally or with `-i` under Travis. However, running without `-i` on Travis, tests take much longer (timing out).

If the current behavior is a bug, please provide the steps to reproduce and either a repl.it demo through https://repl.it/languages/jest or a minimal repository on GitHub that we can `yarn install` and `yarn test`.
- Locally, tests (with `--ci`) take `Time: 7.319s, estimated 8s`
- Locally, with `--ci -i`, tests take `Time: 16.517s, estimated 21s`
- Tests on travis-ci with `-i` complete in 67 seconds: https://travis-ci.org/fengari-lua/fengari/jobs/366723336#L378
- Tests on travis-ci without `-i` take over 20 minutes and travis-ci times out: https://travis-ci.org/fengari-lua/fengari/jobs/366419164#L540
What is the expected behavior?

Tests shouldn’t take > 20 minutes to run under Travis when they take < 10 seconds locally.

Please provide your exact Jest configuration

No configuration. See repository for any details. https://github.com/fengari-lua/fengari/compare/v0.1.1...ebf18e2

Run `npx envinfo --preset jest` in your project directory and paste the results here
I don’t think this is the output you wanted…

```
$ npx envinfo --preset jest
npx: installed 1 in 1.326s
(node:11650) UnhandledPromiseRejectionWarning: TypeError: Cannot read property '1' of null
    at e.darwin.process.platform.linux.process.platform.c.run.then.e (/home/daurnimator/.npm/_npx/11650/lib/node_modules/envinfo/dist/cli.js:2:94055)
    at <anonymous>
(node:11650) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:11650) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
```
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 1
- Comments: 37 (16 by maintainers)
Commits related to this issue
- .travis.yml: Try --maxWorkers=2 instead of -i As recommended in https://github.com/facebook/jest/issues/5989#issuecomment-381431440 — committed to fengari-lua/fengari by daurnimator 6 years ago
- .travis.yml: Use nproc to limit number of jest workers See https://github.com/facebook/jest/issues/5989 — committed to fengari-lua/fengari by daurnimator 6 years ago
- Test for https://github.com/facebook/jest/issues/5989 — committed to dawnmist/dota-stats-ui by dawnmist 4 years ago
- Test for https://github.com/facebook/jest/issues/5989 — committed to dawnmist/dota-stats-ui by dawnmist 4 years ago
- Try using -i for CI. See https://github.com/facebook/jest/issues/5989 — committed to woocommerce/woocommerce-admin by samueljseay 4 years ago
- test: Re-add runInBand for Travis The recommendation is to use runInBand to speed up tests on Travis https://github.com/facebook/jest/issues/5989 — committed to cozy/cozy-banks by ptbrowne 3 years ago
- try to fix tests in travis.ci cf https://github.com/facebook/jest/issues/5989 cf https://github.com/fengari-lua/fengari/commit/a695a27ae5efe8b1fa380c3036cf2078a6b924ac — committed to manudss/akita-filters-plugin by manudss 3 years ago
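Based on the commit messages above, the fix amounts to capping Jest's worker pool in CI rather than forcing a fully serial run. A hypothetical `.travis.yml` fragment (the exact script line is an assumption, not copied from the repository):

```yaml
# Hypothetical .travis.yml fragment mirroring the fixes in the commits above.
script:
  # Either cap workers at a small constant…
  - yarn test --maxWorkers=2
  # …or derive the cap from the CPUs the container actually grants:
  # - yarn test --maxWorkers=$(nproc)
```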
It would be nice if this were in the Jest documentation as a behavior of CircleCI. I was getting `Error: spawn ENOMEM` failures in CircleCI, but not locally, and the fix of running `jest --maxWorkers=2` worked perfectly, but it took almost an hour for me to figure that out, and the suggestion to add that flag is buried deep in another (closed) bug about the memory problems.

How so? It wasn’t fixed.
Yeah, that would be awesome. Might wanna reach out to the big CIs (Travis, Circle, AppVeyor, Jenkins, etc.) and ask how to best find this information. And potentially get it into a module like https://www.npmjs.com/package/env-ci
@ChrisCrewdson `physical-cpu-count` reports 16 cores on Travis CI, which is 14 too many. Of course, `os.cpus().length` reports 32, so it would be an improvement over what we have today, but still wrong. Might be worth it, though.

I can shoot them an email asking about why `nproc` behaves weirdly.

EDIT: https://circleci.com/ideas/?idea=CCI-I-578
We don’t want physical CPUs (that’s the issue in the first place); we want the CPUs available inside of the container/cgroup/limitation in use. However, for OSX it looks like no such limiting ability exists (see e.g. http://jesperrasmussen.com/2013/03/07/limiting-cpu-cores-on-the-fly-in-os-x/), so we can probably get away with using the sysctl (though `sysctl -n hw.logicalcpu` might be a better choice).

Yep. `--maxWorkers=$(nproc)` was successful: https://travis-ci.org/fengari-lua/fengari/jobs/366937403

Can you use native APIs (or `/proc/cpuinfo`) to count the number of available CPU cores? However, that might not be the full story, as the process could probably be `ulimit`ed too. I wonder if Jest should detect common resource-limited environments (like Travis) and automatically reduce the number of workers.
@daurnimator I’ve typically seen this happen from Jest trying to use too many workers (we see the 32 on the machine, but only two or four are given to the VM). AFAIK we don’t have a way to see the number of CPUs given to the VM.
Try updating the settings to use `--maxWorkers` with the number of CPUs Travis gives you.