terser-webpack-plugin: Since 2.3.0, webpack builds on CircleCI fail with "Error: Call retries were exceeded"

  • Operating System: ubuntu:18.04 (from CircleCI)
  • Node Version: v10.13.0
  • NPM Version: 6.10.0
  • webpack Version: 4.29.6
  • terser-webpack-plugin Version: >= 2.3.0

Expected Behavior

Building webpack JS bundles on CircleCI should work the same with version 2.2.3 as with versions >= 2.3.0.

Actual Behavior

Builds succeed with terser-webpack-plugin v2.2.3 but fail with terser-webpack-plugin >= 2.3.0 (tested with v2.3.0 and v2.3.1).

Running a webpack command such as webpack --mode production -p --progress --config webpack.config.js with terser-webpack-plugin v2.3.0 produces this error:

ERROR in javascripts/bundle/feature.js from Terser
Error: Call retries were exceeded
    at ChildProcessWorker.initialize (/root/project/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/ChildProcessWorker.js:193:21)
    at ChildProcessWorker.onExit (/root/project/node_modules/terser-webpack-plugin/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:182:13)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:240:12)

and this happens for several of the bundles that webpack tries to build.

Code

// Here is the config I am using for the Terser plugin:
  new TerserPlugin({
    cache: true,
    parallel: true,
    sourceMap: true, // Using source-map is intentional
  }),

How Do We Reproduce?

The project I am building has more than 30 JS bundles, and a few of them are rather heavy (the heaviest is 1.6MB, the average size is 600KB). Providing a reproducible example would be complicated, since I would have to simulate the high number of JS files and the overall size of the bundles (maybe the issue is triggered in the source-map generation step). But since the issue was introduced in a specific recent release (2.3.0), I hope it will be easier to pinpoint.

Please note that this issue is not reproducible on my local machine (a very recent MBP running macOS Mojave 10.14.6).

Thanks!

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 15 (11 by maintainers)

Most upvoted comments

This bug has been haunting us for some time in the https://github.com/automattic/wp-calypso project, so I did a deep dive trying to find the root cause.

And I figured out that the CircleCI container where the build runs is running out of memory, so the Linux kernel will select a “bad” process and kill it with SIGKILL (read here about how the most “bad” process is chosen).

The jest-worker library won’t report why the worker process has exited and will merely try to restart it. It gives up after a few attempts (default maxRetries is 3) and the result is the Call retries were exceeded error.
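
For illustration, here is roughly how a consumer drives jest-worker (a minimal sketch based on jest-worker's public options; './minify-worker' and its minify method are hypothetical names, not the plugin's actual code). If the child process is OOM-killed, jest-worker silently forks a replacement, and once maxRetries is exhausted the pending task rejects with the error webpack shows:

// retry-sketch.js — illustration only; './minify-worker' is a hypothetical module
// that exports an async minify(source) function.
const Worker = require('jest-worker').default;

const worker = new Worker(require.resolve('./minify-worker'), {
  numWorkers: 4,        // terser-webpack-plugin derives this from its `parallel` option
  maxRetries: 3,        // jest-worker default; every crash of the child burns one retry
  exposedMethods: ['minify'],
});

worker
  .minify('function add (a, b) { return a + b; }')
  .then((result) => console.log(result))
  // If the child keeps getting killed, this is what eventually surfaces:
  // "Error: Call retries were exceeded"
  .catch((err) => console.error(err))
  .then(() => worker.end());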

There are other errors you can get that are caused by the same condition. For example, I’ve also seen this:

internal/child_process.js:394
    throw errnoException(err, 'spawn');
    ^

Error: spawn ENOMEM
    at ChildProcess.spawn (internal/child_process.js:394:11)
    at spawn (child_process.js:540:9)
    at Object.fork (child_process.js:108:10)
    at ChildProcessWorker.initialize (/home/circleci/wp-calypso/node_modules/jest-worker/build/workers/ChildProcessWorker.js:137:44)
    at ChildProcessWorker.onExit (/home/circleci/wp-calypso/node_modules/jest-worker/build/workers/ChildProcessWorker.js:263:12)
    at ChildProcess.emit (events.js:223:5)
    at Process.ChildProcess._handle.onexit (internal/child_process.js:272:12) {
  errno: 'ENOMEM',
  code: 'ENOMEM',
  syscall: 'spawn'
}

Here the attempt to restart the process fails: the kernel outright refuses to spawn a new process, because there's not enough memory for it.

The jest-worker library could be improved to report failures better. I don’t think that the terser-webpack-plugin itself can do anything better.

We are having exactly the same issue and could not figure it out. Thanks for clarifying what is going on there…

Unfortunately, we have no way to get the real amount of available resources inside the container 😞 Your solution seems to be the only one right at the moment.
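
For background on why detection is hard: Node's os module reports the host machine's resources, not the container's cgroup limits, so any automatic worker count derived from it will overshoot on CI. A small diagnostic sketch to see the mismatch inside a container (this assumes cgroup v1 paths, which differ under cgroup v2 and may not exist everywhere; it is not part of the plugin):

// check-limits.js — diagnostic sketch only
const os = require('os');
const fs = require('fs');

console.log('os.cpus().length:', os.cpus().length); // host CPUs, e.g. 36 on CircleCI
console.log('os.totalmem():   ', os.totalmem());    // host RAM, not the container limit

// The actual container memory limit, if the cgroup v1 file is present:
try {
  const limit = fs.readFileSync('/sys/fs/cgroup/memory/memory.limit_in_bytes', 'utf8');
  console.log('cgroup memory limit:', limit.trim()); // e.g. 4294967296 (4 GB)
} catch (e) {
  console.log('cgroup v1 memory limit file not found');
}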

@jsnajdr thanks for the feedback, can you open an issue in the jest repo?

@hinok Yes, it would be great to document that; I will do it in the near future, but you can be a champion 😄

@evilebottnawi I upgraded my project to v2.3.3 but still had the same problem. Explicitly setting parallel: 2 resolved the issue.
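
Putting the comments together, the workaround is to stop relying on CPU auto-detection and cap the worker count in the webpack config. A minimal sketch of the reporter's config with that change (the value 2 is an example sized to a typical CircleCI container, not an official recommendation):

// webpack.config.js (excerpt)
const TerserPlugin = require('terser-webpack-plugin');

module.exports = {
  // ...
  optimization: {
    minimizer: [
      new TerserPlugin({
        cache: true,
        parallel: 2, // fixed worker count instead of `true`, so CI containers don't over-fork
        sourceMap: true,
      }),
    ],
  },
};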