jest: Synchronous tests don't time out

🐛 Bug Report

jest.setTimeout(timeout) does not time out for synchronous tests.

To Reproduce

Run:

  it('times out', () => {
    const startTime = new Date().getTime();
    while (new Date().getTime() - startTime < 10000) {
      // wait
    }
  });
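The likely mechanism (my assumption based on how Node timers behave, not on Jest internals): a timer-based timeout can only fire when the event loop is free, and a synchronous busy-wait never yields. A minimal plain-Node sketch:

```javascript
// A timer scheduled before synchronous work cannot fire while that work
// is running: the busy-wait blocks the event loop, so the callback stays
// pending until the synchronous code returns.
let fired = false;
setTimeout(() => { fired = true; }, 100); // would mark a "timeout"

const start = Date.now();
while (Date.now() - start < 500) {
  // busy-wait well past the 100 ms timer
}

// Still inside synchronous code: the overdue timer has not run yet.
console.log(fired); // prints false
```

The same reasoning applies to the repro above: a 10-second busy-wait keeps Jest's timeout timer pending for the entire duration of the test.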

Expected behavior

The test times out.

Link to repl or repo

https://repl.it/@gitlab_winnie/jestsetTimeouttimeout-for-synchronous-tests

Run npx envinfo --preset jest

  Binaries:
    Node: 9.0.0 - ~/.nvm/versions/node/v9.0.0/bin/node
    Yarn: 1.7.0 - /usr/local/bin/yarn
    npm: 5.8.0 - ~/.nvm/versions/node/v9.0.0/bin/npm
  npmPackages:
    jest: ^23.5.0 => 23.5.0

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 6
  • Comments: 24 (8 by maintainers)

Most upvoted comments

but I’m still not sure if enabling it by default would be useful for most users

@rubennorte Maybe we can create a separate follow-up issue to discuss whether this should become the default behavior? Then people can throw their 👍s and 👎s at it after trying it out. 😃

Personally I wouldn’t mind if it is not the default but having it at all would be a big help.

I am able to repro this. I was also able to narrow it down (in my case) to nested long-running for loops:

describe('timeout tests', () => {
  test('Times out properly', (done) => {
    setTimeout(done, 10000); // `done` arrives after the 2000 ms limit
  }, 2000);

  test('Times out properly', (done) => {
    for (let i = 0; i < 937351770; i++) {
      // the loop finishes, but `done` is never called
    }
  }, 2000);

  test('Does not time out at all, just spins', (done) => {
    for (let i = 0; i < 937351770; i++) {
      for (let j = 0; j < 937351770; j++) {
        // never yields to the event loop, so no timer can fire
      }
    }
  }, 2000);
});

Yeah, good idea. And flip the default for a future major, maybe? Dunno.

@thymikee @rubennorte @rafeca thoughts on that?

The problem with my workaround from https://github.com/facebook/jest/issues/6947#issuecomment-418786554 is that it doesn’t work if you define an individual timeout for anything (afterAll, afterEach, beforeAll, beforeEach, describe.each, test, test.each, test.only).

@SimenB Would merging https://github.com/facebook/jest/pull/7074 again but behind a CLI / config option make sense to you? (something like timeoutSynchronous: true)
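For illustration, the suggested option might look like this in a config file. Note that `timeoutSynchronous` is only the name proposed in this comment and does not exist in Jest; `testTimeout` is a real option in later Jest versions:

```javascript
// jest.config.js — a sketch of the proposal, not a working config:
// `timeoutSynchronous` is hypothetical (suggested in this thread only).
module.exports = {
  testTimeout: 5000,        // real option: default per-test timeout in ms
  timeoutSynchronous: true, // hypothetical opt-in: also fail slow sync tests
};
```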

@SimenB given that https://github.com/facebook/jest/pull/7259 has been merged, should we reopen this issue? 😃

EDIT: And maybe add “breaking change” to the changelog, even though it’s technically a bug fix?

@SimenB I think this is a good idea. Even if it is a bug fix, it seems to be a bug that multiple projects relied on.

Could we also introduce a switch to disable the behavior (which may be deprecated in a future release)? I think that would give people more time to fix their long running tests.

Hey @SimenB , @gitlab-winnie!

After spending more time trying to integrate the new version of jest into the FB codebase and discussing it with a few teammates (cc/ @mjesun and @rubennorte), I’d like to suggest adding an option to jest to control whether sync tests that take longer than the timeout should fail. Ideally this option should be disabled by default, at least for jest v24.

Context

I’ve been trying to upgrade jest in our mobile repository at FB and this feature has been causing a few issues:

  1. A bunch of sync tests have started failing.
  2. The list of failing tests is not constant: a lot of tests are now flaky because the time it takes to run them varies a lot between runs.

After some investigation, we found that point 2 is caused by transpilation: since jest transpiles files on the fly as they are required, a test that forces jest to transpile all of its dependencies takes much longer to run than one whose transpiled dependencies are already cached.

We’ve seen cases where transpiling the dependencies takes several seconds while the test itself only takes a few milliseconds to run, so these tests fail only when running without a cache or in a worker that has not transpiled its files yet.

To be fair, these same issues already exist for async tests because of the timeout, but there it’s a tradeoff worth paying to be able to kill tests that would otherwise run forever due to unresolved promises, etc.

In the sync test scenario, failing a test that took longer than the timeout is consistent with async tests, but it neither improves developer experience nor helps catch bugs: the test still takes the same amount of time to run; jest merely marks it as failed.
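The point can be made concrete with a sketch (my own illustration, not Jest’s code): a wrapper around a synchronous test body can only measure the overrun after the body has already finished, so the wall-clock cost is paid either way:

```javascript
// Sketch: a synchronous test body cannot be interrupted; the best a
// runner can do is let it run to completion and then report failure.
function runSyncWithTimeout(body, timeoutMs) {
  const start = Date.now();
  body(); // runs to completion no matter how long it takes
  const elapsed = Date.now() - start;
  if (elapsed > timeoutMs) {
    return { passed: false, reason: `exceeded ${timeoutMs} ms (took ${elapsed} ms)` };
  }
  return { passed: true };
}

// A body that busy-waits for ~200 ms against a 50 ms timeout:
const result = runSyncWithTimeout(() => {
  const start = Date.now();
  while (Date.now() - start < 200) {} // busy-wait
}, 50);

console.log(result.passed); // prints false: reported late, not interrupted
```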

This is the main reason why we’d like to be able to disable this check. Relatedly, I think many other users will face the same issues we’re facing at FB when upgrading jest (since these issues are not caused by our specific needs), so I believe that making this option opt-in will avoid some user friction until we figure out a better way to implement it.

Alternatives

Something we’ve discussed is not counting transpilation time when computing the timeouts: this would remove the flakiness caused by transpilation cache hits and misses, but it could cause some confusion since the timeout would no longer correspond to the total time it takes to run the test.

Any other alternative or way to improve this feature is appreciated 😃

I’m not sure about this. While technically correct, a lot of tests started failing for us as flaky, and it took us hours to determine what caused the flakiness when upgrading Jest.

I think we can keep it for now, as it makes sense, but I’m expecting some pushback and issues from the community about tests that started failing after the upgrade to Jest 24.