jest: Jest without --runInBand slow under Travis

Do you want to request a feature or report a bug?

Bug

What is the current behavior?

Tests run fine locally, or with -i on Travis. However, when run without -i on Travis, tests take much longer (to the point of timing out).

If the current behavior is a bug, please provide the steps to reproduce and either a repl.it demo through https://repl.it/languages/jest or a minimal repository on GitHub that we can yarn install and yarn test.

What is the expected behavior?

Tests shouldn’t take > 20 minutes to run on Travis when they take < 10 seconds locally.

Please provide your exact Jest configuration

No configuration. See the repository for details: https://github.com/fengari-lua/fengari/compare/v0.1.1...ebf18e2

Run npx envinfo --preset jest in your project directory and paste the results here

I don’t think this is the output you wanted…

$ npx envinfo --preset jest
npx: installed 1 in 1.326s
(node:11650) UnhandledPromiseRejectionWarning: TypeError: Cannot read property '1' of null
    at e.darwin.process.platform.linux.process.platform.c.run.then.e (/home/daurnimator/.npm/_npx/11650/lib/node_modules/envinfo/dist/cli.js:2:94055)
    at <anonymous>
(node:11650) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). (rejection id: 1)
(node:11650) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 1
  • Comments: 37 (16 by maintainers)

Most upvoted comments

It would be nice if this were in the Jest documentation as a known behavior of CircleCI. I was getting Error: spawn ENOMEM failures on CircleCI, but not locally, and the fix of running jest --maxWorkers=2 worked perfectly, but it took me almost an hour to figure that out, and the suggestion to add that flag is buried deep in another (closed) bug about the memory problems.
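For anyone landing here with the same failure, a minimal sketch of the same cap applied via config instead of the CLI flag (the hard-coded 2 just mirrors the allotment discussed in this thread; match it to whatever your CI plan actually provides):

```js
// jest.config.js — minimal sketch; the hard-coded 2 mirrors the CPU
// allotment discussed in this thread, adjust it to your CI plan.
module.exports = {
  // Jest sizes its worker pool from the core count the OS reports,
  // which over-counts inside CI containers; cap it explicitly instead.
  maxWorkers: 2,
};
```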

I think this can be closed now.

How so? It wasn’t fixed.

Yeah, that would be awesome. Might wanna reach out to the big CIs (Travis, Circle, AppVeyor, Jenkins, etc.) and ask how best to find this information. And potentially get it into a module like https://www.npmjs.com/package/env-ci
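As a rough illustration of the kind of detection such a module would do, here is a sketch that only checks environment variables these services are known to set (env-ci itself covers far more providers and details):

```js
// detect-ci.js — rough sketch of CI detection via well-known env vars.
// A real module such as env-ci covers many more services than this.
function detectCi() {
  if (process.env.TRAVIS) return 'travis';
  if (process.env.CIRCLECI) return 'circleci';
  if (process.env.APPVEYOR) return 'appveyor';
  if (process.env.CI) return 'generic-ci';
  return null;
}

module.exports = detectCi;
```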

@ChrisCrewdson physical-cpu-count reports 16 cores on Travis CI, which is 14 too many. Of course os.cpus().length reports 32, so it would be an improvement over what we have today, but still wrong. Might be worth it though
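For reference, that 32 comes straight from Node; this is the figure the default worker count is derived from today (a trivial sketch, nothing Travis-specific):

```js
// cpu-count.js — prints the logical core count Node reports. On a
// Travis machine this shows 32 even though the build effectively
// only gets 2 cores, which is why the default worker pool explodes.
const os = require('os');

console.log('os.cpus().length =', os.cpus().length);
```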

I can shoot them an email asking about why nproc behaves weirdly.

EDIT: https://circleci.com/ideas/?idea=CCI-I-578

sysctl -n hw.physicalcpu should work for macOS builds. Unsure what the Windows equivalent is.

We don’t want physical CPUs (that’s the issue in the first place); we want the CPUs available inside the container/cgroup/whatever limit is in use. However, for OS X it looks like no such limiting ability exists (see e.g. http://jesperrasmussen.com/2013/03/07/limiting-cpu-cores-on-the-fly-in-os-x/), so we can probably get away with using the sysctl (though sysctl -n hw.logicalcpu might be a better choice).

I think nproc, which is part of coreutils, uses that syscall, so that might be the easiest way?

Yep. --maxWorkers=$(nproc) was successful: https://travis-ci.org/fengari-lua/fengari/jobs/366937403
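For anyone who would rather not hard-code the flag, a sketch of deriving the count from within Node by shelling out the same way (nproc on Linux respects the CPUs actually made available to the process; hw.logicalcpu is the macOS stand-in discussed above; on Windows neither exists, hence the fallback):

```js
// worker-count.js — sketch: ask nproc (Linux) or sysctl (macOS) for the
// usable CPU count and fall back to Node's naive count elsewhere.
const { execSync } = require('child_process');
const os = require('os');

function availableCpus() {
  try {
    const cmd =
      process.platform === 'darwin' ? 'sysctl -n hw.logicalcpu' : 'nproc';
    return parseInt(execSync(cmd, { encoding: 'utf8' }).trim(), 10);
  } catch (err) {
    // nproc/sysctl unavailable (e.g. Windows): use the naive count.
    return os.cpus().length;
  }
}

console.log(`--maxWorkers=${availableCpus()}`);
```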

Afaik we don’t have a way to see the number of CPUs given to the VM

Can you use native APIs (or /proc/cpuinfo) to count the number of available CPU cores? However, that might not be the full story, as the process could probably be ulimited too.
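/proc/cpuinfo likely has the same problem as os.cpus() (it lists the host’s CPUs), but on Linux the cgroup CPU quota gets closer to the real allotment. A sketch, assuming cgroup v1 paths (cgroup v2 exposes the same limit as cpu.max); note that an environment limited only by CPU affinity, or a plain VM, may have no quota set at all:

```js
// cgroup-cpus.js — sketch: estimate usable CPUs from the cgroup v1 CFS
// quota; cgroup v2 exposes the same limit in /sys/fs/cgroup/cpu.max.
const fs = require('fs');
const os = require('os');

function cgroupCpuLimit() {
  try {
    const quota = parseInt(
      fs.readFileSync('/sys/fs/cgroup/cpu/cpu.cfs_quota_us', 'utf8'), 10);
    const period = parseInt(
      fs.readFileSync('/sys/fs/cgroup/cpu/cpu.cfs_period_us', 'utf8'), 10);
    if (quota > 0 && period > 0) return Math.ceil(quota / period);
  } catch (err) {
    // Not Linux, or no cgroup CPU controller mounted here.
  }
  return os.cpus().length; // no quota found: fall back to the naive count
}

console.log('estimated available CPUs:', cgroupCpuLimit());
```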

Have you tried running with --maxWorkers=2 for example?

I wonder if Jest should detect common resource-limited environments (like Travis) and automatically reduce the number of workers.

@daurnimator I’ve typically seen this happen when Jest tries to use too many workers (we see the 32 on the machine, but only 2 or 4 are given to the VM). Afaik we don’t have a way to see the number of CPUs given to the VM.

Try updating the settings to use --maxWorkers with the number of CPUs Travis actually gives you.