vitest: Vitest hangs tests, close timed out after 1000ms

Describe the bug

Started encountering our tests hanging with the message close timed out after 1000ms. A Google search led to this issue; we tried everything in there with no success, and for us it is still hit or miss whether we get the error or not.

We get the error on different machines and also CI.

Reproduction

https://stackblitz.com/edit/vitejs-vite-brwl54?file=package.json

System Info

System:
    OS: macOS 12.4
    CPU: (10) arm64 Apple M1 Pro
    Memory: 133.73 MB / 16.00 GB
    Shell: 5.8.1 - /bin/zsh
  Binaries:
    Node: 16.14.0 - ~/.nvm/versions/node/v16.14.0/bin/node
    Yarn: 3.2.0 - ~/.nvm/versions/node/v16.14.0/bin/yarn
    npm: 8.9.0 - ~/.nvm/versions/node/v16.14.0/bin/npm
  Browsers:
    Chrome: 105.0.5195.102
    Firefox: 101.0.1
    Safari: 15.5

Used Package Manager

npm

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 51
  • Comments: 103 (35 by maintainers)

Most upvoted comments

Does anybody have any updates about this issue?

So far there are only two known cases that can cause this. Both can be fixed by using pool: 'forks'. And by known cases I mean cases that Vitest maintainers have seen in reported minimal reproduction cases. Comments about hanging tests without reproduction cases do not provide much value as they don’t help anyone.
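
For reference, that workaround looks like this in config form (a sketch using Vitest 1.x option names; on 0.x the rough equivalent was threads: false / --no-threads):

// vitest.config.ts — sketch of the pool: 'forks' workaround (Vitest 1.x)
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'forks', // run test files in child processes instead of worker_threads
  },
});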

There are also some mentions about main thread hanging when third party Vite plugins are used: https://github.com/vitest-dev/vitest/issues/2008#issuecomment-1398976718. I have not been able to reproduce this.

I’ve realised that the issue comes from using @nabla/vite-plugin-eslint. The quickest patch is to disable the plugin for tests.

/// <reference types="vitest" />

import eslintPlugin from "@nabla/vite-plugin-eslint";
import react from "@vitejs/plugin-react";
import { defineConfig } from "vitest/config";

export default defineConfig({
  plugins: [
    react(),
    // disable eslint plugin for tests
    // vitest issue: https://github.com/vitest-dev/vitest/issues/2008
    process.env.NODE_ENV !== "test" && eslintPlugin(),
  ],
  test: {
    globals: true,
    environment: "jsdom",
  },
});

Just run node --inspect node_modules/.bin/vitest with Chrome’s chrome://inspect/#devices page waiting in the background, see here. The problem with this method is that you need to be quick to open the inspector window, because once the process stalls it seems impossible to hook up the inspector.

BTW, the tests I reproduced this with do somewhat fancy stuff like starting/stopping an HTTP server, so I think it’s definitely possible that in this case it’s not a bug in vite itself, but it still smells like one because the exact same tests run fine in jest.

I happen to have one case where it reproduces locally, in around 1 in 10 runs. When it happens, the node process goes to 100% CPU and hangs forever. I managed to hook up the inspector:

(screenshot of the inspector call stack, paused in processReallyExit)

processReallyExit is from the signal-exit module, which vitest has multiple dependencies on:

vitest@0.25.3
├─┬ @antfu/install-pkg@0.1.1
│ └─┬ execa@5.1.1
│   └── signal-exit@3.0.7
├─┬ execa@6.1.0
│ └── signal-exit@3.0.7
└─┬ log-update@5.0.1
  └─┬ cli-cursor@4.0.0
    └─┬ restore-cursor@4.0.0
      └── signal-exit@3.0.7

reallyExit is an undocumented Node internal which apparently is being monkey-patched by signal-exit.

if it’s possible to just forcefully kill the hanging workers

There’s node:worker_threads.Worker.terminate. This is what Vitest makes Tinypool call:
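
For context, here is a minimal sketch of the node:worker_threads API being discussed (illustration only, not Tinypool’s actual code):

// sketch — illustrates Worker.terminate(), not Tinypool's implementation
import { Worker } from 'node:worker_threads';

const worker = new Worker(new URL('./worker.js', import.meta.url));
// terminate() normally resolves once the thread has stopped; as described below,
// it can fail to take effect when the thread is wedged in native code.
await worker.terminate();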

But as explained in the description of https://github.com/nodejs/undici/issues/2026 and shown in the Logs & Screenshots section, Node.js doesn’t terminate the worker in these cases. The only way to terminate the worker is to kill the whole main process from outside Node. But even that requires forcing:

ari ~/repro  $ pgrep node
24498

ari ~/repro  $ kill  24498
ari ~/repro  $ pgrep node
24498
# ^^ Ok this one is really stuck

ari ~/repro  $ kill  -9 24498
[1]+  Killed: 9               node undici.mjs

ari ~/repro  $ pgrep node
# no output, process killed

We are planning to make pool: 'forks' the default in Vitest v2, in https://github.com/vitest-dev/vitest/pull/5047. It will close this issue.

Still hangs for me on beta.6 even with threads: false.

vitest@1.0.0-beta.6 does not have a threads option. It was removed in #4172. Use the pool: 'forks' option instead.

Thank you @AriPerkkio, I can confirm I can reliably complete a full run with --pool forks --poolOptions.forks.isolate false.
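
For anyone preferring config over CLI flags, the equivalent should be roughly the following (a sketch, Vitest 1.x option names):

// vitest.config.ts — sketch: config equivalent of `--pool forks --poolOptions.forks.isolate false`
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    pool: 'forks',
    poolOptions: {
      forks: {
        isolate: false, // reuse the same forked process across test files
      },
    },
  },
});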

Similar result over here

There are 296 handle(s) keeping the process running

# Tinypool

node:internal/async_hooks:202  
node:internal/async_hooks:505  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:36  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:57  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:734  
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:7328
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:10628
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:10643

# WORKER

node:internal/async_hooks:202  
node:internal/worker:186  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:500  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:485  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:475  
file:///Users/myusername/projectsfolder/myproject/node_modules/tinypool/dist/esm/index.js:739  
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:7328
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:10628
file:///Users/myusername/projectsfolder/myproject/node_modules/vitest/dist/chunk-snapshot-manager.1a2dbf96.js:10643

… the # WORKER block above repeats four more times with an identical stack.

# FILEHANDLE

node:internal/async_hooks:202

# FILEHANDLE

node:internal/async_hooks:202

# FILEHANDLE

node:internal/async_hooks:202

repeating # FILEHANDLE a lot.

Seeing this “close timed out after 1000ms” message on Drone CI as well. Test run still succeeds, and all tests are very simple unit tests, nothing fancy, nothing that would keep the event loop alive so I assume any open handles must be inside vitest itself.

Unfortunately it does not reproduce locally, even with the CI variable set.

We’re experiencing the same issue. Setting threads: false in vitest.config.ts solved the issue, as @fengmk2 indicated.

All of our tests are simple unit tests (99% synchronous). There are no open DB/Playwright/websocket processes that could be causing the hang.

Please check if 0.27.1 fixes the issue. I also added a hanging-process reporter that should collect all open processes and show them if Vitest cannot close them:

// vite.config.ts
export default {
  test: {
    reporters: ['default', 'hanging-process']
  }
}

You can also write your own reporter to track all running processes (it doesn’t track processes inside a worker):

// vite.config.ts
const reporter = {
  onInit() {
     startTracking()
  },
  onProcessTimeout() {
     displayProcesses()
  }
}
export default {
  test: {
    reporters: ['default', reporter]
  }
}

Vitest 1.0 (beta.4 in my case) seems to solve this issue with the introduction of fork pools (it uses child_process instead of worker_threads in tinypool). Nothing hangs after I switched to it.

Hi all. Just thought I’d throw my experience into this in case it shines any light on anything. I was having the same issue myself and did some detective work; posting in case it helps anybody else who hits this issue.

Repo/PR is here: https://github.com/matrix-org/matrix-rich-text-editor/pull/727. We’re running vitest 0.23.4.

I had the issue with the hanging when running yarn test. In this case, yarn test corresponds to vitest --run without anything particularly controversial in either the vite config or the test.setup file.

I could get the tests to hang locally and they would almost always hang in CI if we ran coverage on them.

What I did was:

  • run yarn test locally and see how many times I could do it before it would hang the tests
  • remove all the tests from a file, see if that fixed it, repeat the above
  • remove all the tests from another file, see if that fixed it… etc
  • repeat the above process
  • I noticed that while it was still hanging, the more test files I removed, the more successful runs I could get before a run would hang
  • from there I guessed (bit of a leap) that the issues were improving as the number of threads was decreasing
  • so I tried running it with the --no-threads option
  • it turned out we had a hell of a lot of cleanup that needed doing with that option enabled
  • so I added the cleanup in the test.setup file (see the sketch after this list)
  • …and now it seems I can run my tests both locally without hanging, and in CI even with coverage enabled. We haven’t been able to do this for months
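
For illustration, the kind of cleanup added to a test.setup file often looks something like this (a hypothetical sketch assuming @testing-library/react; the actual cleanup in the linked PR may differ):

// test.setup.ts — hypothetical sketch, not the actual cleanup from the linked PR
import { afterEach, vi } from 'vitest';
import { cleanup } from '@testing-library/react';

afterEach(() => {
  cleanup();            // unmount anything rendered during the test
  vi.restoreAllMocks(); // undo spies/mocks so nothing leaks between files
  vi.useRealTimers();   // ensure no fake timers are left installed
});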

@mpayson The caching insight is something I’m running into also. If I run with cache: false or --clearCache, it passes consistently.

But running it after that leads to inconsistent failures, and the hanging-process reporter shows threads that can’t be closed.

I also thought it was possibly undici, so I installed cross-fetch and set global.fetch = crossFetch, but the only thing that seems to work consistently is disabling the cache.

In #3077 it was noticed that Node’s native fetch (and direct usage of undici npm package) can make node:worker_threads stuck and cause these process hangs even when running without Vitest or any other npm dependencies. Some reported projects here seem to be using Node’s fetch too.

As a workaround you can replace fetch with node-fetch, cross-fetch, or a similar package during tests. These don’t run into the same problem that undici / Node’s fetch do.
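
A minimal sketch of that workaround in a test setup file, assuming cross-fetch is installed:

// vitest.setup.ts — sketch: replace Node's undici-based fetch during tests only
import crossFetch from 'cross-fetch';

globalThis.fetch = crossFetch as typeof fetch;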

Or even better, switch to pool: 'forks' and keep using native fetch.

There might be some sort of leak in tinypool. I fixed this for tRPC by enabling useAtomics (trpc/trpc#3817).
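
The option they mention is useAtomics. A sketch using the current (1.x) location of the option; older versions exposed it differently, so adjust to your Vitest version:

// vitest.config.ts — sketch; on Vitest 1.x the option sits under poolOptions.threads
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    poolOptions: {
      threads: {
        useAtomics: true, // synchronize workers via Atomics, the fix reported for tRPC
      },
    },
  },
});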

We just updated all of our repositories using vitest to version 0.28.3 and the issue still keeps appearing.

For context, we’re running tests via GitHub Actions on ubuntu-latest, Node 18.12.0.

We just updated vitest to 0.27.1 and added the hanging-process reporter; this is what it prints out when it hangs:

(screenshot of the hanging-process reporter output)

… it then repeats the # FILEHANDLE message a couple hundred times.

In our case, we had one component that was causing the hang. Since there aren’t very actionable messages about what causes the hang, we instead tracked down which specific test suite caused it with the following bash script:

#!/bin/bash

find ./src -type f \( -name "*.test.ts" -o -name "*.test.tsx" \) | while read current_file; do
  echo "Running tests for $current_file"
  yarn vitest run --mode development "$current_file"
done

It will still hang, but you’ll know which suite it is. Posting in case this is useful to anyone else; it’s pretty easy to debug from there (or work around with mocks).

Obviously, that will only find the first test suite to hang if there are multiple.

The tinypool option added in tinylibs/tinypool#50 might be of interest to vitest.

This is now included in the latest Vitest release, 0.29.5. Vitest will now print the names of the test files that caused the process to hang. This should help when trying to identify what exactly is causing the issue.

close timed out after 10000ms
Failed to terminate worker while running /path/to/test.ts

For example, in one of the reproduction repositories there were 613 test cases in 124 test files. Two test files caused the process to hang on about 70% of test runs. Once these two test files were excluded from the test run, the process was always able to exit properly.

As something in these test cases was preventing the worker_thread from terminating properly, I tried the new poolMatchGlobs option and moved these 2 test files to be run on child_process instead. Now the process does not hang at all. I was able to run these +600 test cases 10 times in a loop without any process hangs.

poolMatchGlobs: [
  ['**/path/some.test.ts', 'child_process'],
],
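
In full config context that would look roughly like this (a sketch; the glob is a placeholder):

// vitest.config.ts — sketch of the poolMatchGlobs snippet above in context
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    poolMatchGlobs: [
      ['**/path/some.test.ts', 'child_process'], // run only these files outside worker_threads
    ],
  },
});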

I’ve seen the same type of error using coverage-istanbul.

This is really slowing down our pipeline and our local workflows, is there any way we can help to troubleshoot or debug? Not even sure where to start.

It means you start some process and don’t end it. Check your websocket/DB/playwright connections.

To remove the timeout after tests, add teardownTimeout: null to your config.
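
In config form, that suggestion looks roughly like this (a sketch; teardownTimeout otherwise takes a number of milliseconds):

// vitest.config.ts — sketch of the teardownTimeout suggestion above
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    teardownTimeout: null, // value suggested in the comment above; normally a millisecond count
  },
});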

For me, this happens for any project that is not the first in the vitest.workspace.ts file. When I move the projects around in the defineConfig array parameter, the first one always works fine; all subsequent projects hang.

Still hangs for me on beta.6 even with threads: false.

vitest@1.0.0-beta.6 does not have a threads option. It was removed in #4172. Use the pool: 'forks' option instead.

Thank you @AriPerkkio, I can confirm I can reliably complete a full run with --pool forks --poolOptions.forks.isolate false.

What were you testing with these options? I am having issues using those options with Testing Library. If I set poolOptions.forks.isolate=true, the tests stop failing.

TestingLibraryElementError: Unable to find an element by: [data-testid="load-more-button"]

Ignored nodes: comments, script, style
<body
  style=""
/>
 ❯ Object.getElementError node_modules/@testing-library/dom/dist/config.js:37:19
 ❯ node_modules/@testing-library/dom/dist/query-helpers.js:76:38
 ❯ node_modules/@testing-library/dom/dist/query-helpers.js:52:17
 ❯ node_modules/@testing-library/dom/dist/query-helpers.js:95:19
 ❯ src/view/components/__tests__/Table.test.jsx:122:28
    120|     />);
    121|     // When user clicks on the load more button
    122|     fireEvent.click(screen.getByTestId('load-more-button'));
       |                            ^
    123|     // Then should execute function to load more
    124|     expect(onLoadMore).toHaveBeenCalledTimes(1);

@AriPerkkio what about the reproduction I’ve put together in the comment above? I seem to be getting this issue, but it’s not related to any of the cases you’ve outlined.

Also running into this issue. I have a GitHub Actions workflow that runs once daily, and it hangs and crashes about once a week. Here’s an example run.

It only runs a single test, which makes a simple API call using node’s fetch API.

I’m using GitHub Actions on ubuntu-latest with node 20 and vitest 1.0.1.

As mentioned in a previous comment, node’s fetch API is the likely culprit (nodejs/undici#2026).

I’m just commenting to add another example to the discussion, and to add myself to this thread for future updates.

I seem to have run into a similar error when mocking dependencies and calling expect(mockFn).toHaveBeenCalledWith(expect.anything()). Specifically, the toHaveBeenCalledWith call triggers the Maximum call stack size exceeded error.

you can see the test that’s failing here: https://github.com/lifeiscontent/fastify-realworld/blob/d837b90a0a749703379d9dd714d7f7df74769457/src/routes/api/articles.test.ts#L98

We ran into this issue too. For us the culprit was that we had a custom vite plugin which opened a chokidar file watcher with persistent set to true (https://github.com/paulmillr/chokidar#persistence). This plugin works fine when running vite in dev mode, where we want it to keep looking for changes, but when running tests it also keeps running until we time out.

I threw an if (server.config.mode !== 'development') return; into the plugin’s configureServer function and it fixed the issue.

If you’re using vite plugins which include file watchers, this is something to look out for.
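
A hypothetical sketch of such a plugin with the guard applied (the plugin name and watched paths are made up):

// vite plugin sketch — hypothetical; illustrates the configureServer guard described above
import chokidar from 'chokidar';
import type { Plugin } from 'vite';

export function exampleWatcherPlugin(): Plugin {
  return {
    name: 'example-file-watcher',
    configureServer(server) {
      // Without this guard, the persistent watcher keeps the test process alive after the run
      if (server.config.mode !== 'development') return;
      const watcher = chokidar.watch('content/**/*.md', { persistent: true });
      watcher.on('change', () => server.ws.send({ type: 'full-reload' }));
    },
  };
}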

I don’t see the “Hook timed out” string anywhere within Vite’s source code, so I’m not really sure where this is coming from.

Hook timed out happens when your afterEach/beforeEach/beforeAll/afterAll takes too long to execute. You can configure the time with the test.hookTimeout option. Vitest has nothing to do with it; it’s your code that is running.
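
If a hook legitimately needs more time, the limit can be raised in the config (a sketch; the default is 10 000 ms):

// vitest.config.ts — sketch: raise the per-hook timeout instead of letting hooks time out
import { defineConfig } from 'vitest/config';

export default defineConfig({
  test: {
    hookTimeout: 30_000, // default is 10_000 ms
  },
});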

In #3077 it was noticed that Node’s native fetch (and direct usage of undici npm package) can make node:worker_threads stuck and cause these process hangs even when running without Vitest or any other npm dependencies. Some reported projects here seem to be using Node’s fetch too.

Likely to be the case for me. I do use undici from npm and I think this usage correlates with these vitest hangs. I guess I will be using threads: false as a workaround until https://github.com/nodejs/undici/issues/2026 is fixed.

@beefchimi looks like your tests are hanging even when coverage is disabled: https://github.com/beefchimi/earwurm/actions/runs/4272366142/jobs/7437447530.

The fixes in @vitest/coverage-c8@0.29.0 only affect the cases where it was clearly seen that moving away from @vitest/coverage-c8 helped. If your tests are hanging even when @vitest/coverage-c8 is not used, the latest release won’t help.

Meaning that the error does not appear when you disable coverage completely?

Ah sorry, no, I’ve seen this error with coverage disabled too. I have no idea what the problem could be… the error message is not very clear.

I added --no-threads to my package.json scripts and it works for me, like:

"test:coverage": "vitest run --coverage --watch false --no-threads",

We updated 0.26.2 -> 0.27.2 to use the hanging-process reporter, but after that we got consistent hangs both in CI and locally 😞

Still seeing it with esbuild 0.16.10, but I guess this issue has multiple potential causes, so the above update may have fixed one, but not all of them.

For me, it was on Node 18 with no coverage enabled.

@JoshuaToenyes same here! No hanging process reported for me, and I get the same error.