webpack-dev-server: webpack-dev-server and JavaScript heap out of memory

  • Operating System: macOS
  • Node Version: v8.9.4
  • NPM Version: 5.6.0
  • webpack Version: 3.6.0
  • webpack-dev-server Version: 2.9.1
  • [x] This is a bug
  • This is a modification request

Code

  // package.json
{"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js"}

Expected Behavior

The dev server starts and runs normally.

Actual Behavior

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

For Bugs; How can we reproduce the behavior?

Could you tell me how to set Node's option (node --max_old_space_size=4096) for webpack-dev-server?

For Features; What is the motivation and/or use-case for the feature?

thanks

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 3
  • Comments: 42 (4 by maintainers)

Most upvoted comments

Hi, you should ask questions like this on Stack Overflow. To answer your question, you can run it like this: node --max-old-space-size=8192 node_modules/webpack-dev-server/bin/webpack-dev-server.js
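For reference, that command can be wired straight into the dev script from the original report (a sketch; the flags are copied from the snippets above):

// package.json — sketch combining the suggested Node flag with the OP's dev script
{
  "scripts": {
    "dev": "node --max-old-space-size=8192 node_modules/webpack-dev-server/bin/webpack-dev-server.js --inline --progress --config build/webpack.dev.conf.js"
  }
}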

@B3zo0 I don't think increasing max-old-space-size is a good solution, even though I don't have a better one.

This is still happening all the time for me

Why exactly is this closed??

While the OP's question was answered, I second @norfish. Isn't there an underlying memory leak? It seems that increasing the memory as suggested only makes the issue less likely to happen rather than eliminating it. Would that be fair to say?

I solved this problem by putting node --max-old-space-size=4096 "%~dp0\..\webpack-dev-server\bin\webpack-dev-server.js" %* in node_modules/.bin/webpack-dev-server.cmd
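Concretely, the edited Windows shim would look something like this (a minimal sketch; the stock npm wrapper has extra boilerplate around this line, and the file is overwritten whenever the package is reinstalled):

@REM node_modules/.bin/webpack-dev-server.cmd — hand-edited to raise the heap limit
@node --max-old-space-size=4096 "%~dp0\..\webpack-dev-server\bin\webpack-dev-server.js" %*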

Seeing this as well. It always compiles at least once without running out of memory, but crashes on the second or third recompile after a file changes. I tried a lot of things to fix it but the only thing that worked was setting:

optimization: {
  splitChunks: {
    chunks: 'all',
  },
},

I’m at a loss as to why this works, but I suspect it may have something to do with creating more small common chunks that do not change between recompiles?
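For context, that block goes at the top level of the webpack config (a sketch, assuming a standard module.exports-style config):

// webpack.config.js — sketch showing where the splitChunks workaround sits
module.exports = {
  // ...entry, output, module rules, etc.
  optimization: {
    splitChunks: {
      chunks: 'all',
    },
  },
};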

Agreed with above. I recently upgraded from webpack 3 to 4 and started running into this issue fairly often, whereas before I never encountered this at all.

Previously, we were on webpack 3.12.0 and webpack-dev-server 2.11.3, and now we’re on webpack 4.22.0 and webpack-dev-server 3.1.10. Happy to provide more debugging info if needed.

  • Operating System: Ubuntu 18.04
  • Node Version: 9.11.2
  • NPM Version: 5.6.0

I’m also getting this issue recently after my project started to increase in size.

    "webpack": "^4.19.0",
    "webpack-cli": "^3.1.2",
    "webpack-dev-server": "^3.1.9"

[2056:0000027C4A2D4C20]  6824619 ms: Mark-sweep 1395.6 (1484.9) -> 1395.6 (1484.9) MB, 773.9 / 0.1 ms  allocation failure GC in old space requested
[2056:0000027C4A2D4C20]  6825512 ms: Mark-sweep 1395.6 (1484.9) -> 1395.5 (1446.9) MB, 892.6 / 0.0 ms  last resort GC in old space requested
[2056:0000027C4A2D4C20]  6826306 ms: Mark-sweep 1395.5 (1446.9) -> 1395.5 (1446.4) MB, 793.6 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0000028DFEFA5EE1 <JSObject>
    1: fromString(aka fromString) [buffer.js:~298] [pc=00000351776D37DE](this=000000F13BB82311 <undefined>,string=0000029FCC524FE1 <Very long string[3161153]>,encoding=0000028DFEFB5CE1 <String[4]: utf8>)
    3: from [buffer.js:177] [bytecode=000003AE66D34B71 offset=11](this=000000C6A96361B1 <JSFunction Buffer (sfi = 000002576A582849)>,value=0000029FCC524FE1 <Very long string[3161153]>,encoding...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node_module_register
 2: v8::internal::FatalProcessOutOfMemory
 3: v8::internal::FatalProcessOutOfMemory
 4: v8::internal::Factory::NewRawTwoByteString
 5: v8::internal::Smi::SmiPrint
 6: v8::internal::AllocationSpaceName
 7: v8::String::WriteUtf8
 8: v8::internal::PagedSpace::SetUp
 9: node::Buffer::New
10: node::Buffer::New
11: 000003517604DBC6

I'll second this: I have a project where, even with 4 GB of memory allocated, it dies at least twice a day with this error.

This ran fine for weeks at a time without restarting the dev server on webpack 3. Our code didn't change between working and not working. There's a memory issue in webpack-dev-server and/or webpack 4.

Edit: To help with debugging, here's some version information:

Working Stack:

  • webpack: v3.10.0
  • webpack-dev-server: v2.11.1

Broken Stack:

  • webpack: v4.20.2
  • webpack-dev-server: v3.1.9

I was wrong about the caching plugin helping out. It doesn’t.

I had to give up on webpack-dev-server because it crashed on the first code change every single time. So I switched to just using webpack --watch with the caching plugin, and things are super fast with no memory leaks.

So I think you guys are looking in the wrong place when you say this leak is in webpack's watch code. It has been running for hours non-stop without any leaks.

So, unfortunately, I’m not sure this is a webpack-dev-server issue. This happens with regular webpack in watch mode, or even using webpack-nano and webpack-plugin-server.

This is seeming more and more like a core webpack issue.

I am struggling with this issue. Nothing helps. Can someone help me out on this?

<--- Last few GCs --->

[17208:0000020B4EB70F20] 1184996 ms: Scavenge 3365.3 (4162.0) -> 3364.3 (4162.5) MB, 10.8 / 0.0 ms (average mu = 0.164, current mu = 0.189) allocation failure
[17208:0000020B4EB70F20] 1185019 ms: Scavenge 3366.8 (4163.0) -> 3366.0 (4163.5) MB, 10.5 / 0.0 ms (average mu = 0.164, current mu = 0.189) allocation failure
[17208:0000020B4EB70F20] 1185036 ms: Scavenge 3367.7 (4163.5) -> 3366.9 (4164.0) MB, 9.7 / 0.0 ms (average mu = 0.164, current mu = 0.189) allocation failure

<--- JS stacktrace --->

==== JS stack trace =========================================

0: ExitFrame [pc: 0000016F06950481]

Security context: 0x023dcff9d9d1 <JSObject>
    1: set [0000023DCFF90E29](this=0x022843ae3df1 <Map map = 000000A62F483EC1>,0x00fc22082d79 <DependenciesBlock map = 000001714723CBD1>,4096)
    2: processDependenciesBlocksForChunkGroups [000003C1DC7AD841] [D:\TCS\Projects\AzureCXP\Git\AzureCXP-Eng-UX\src\node_modules\webpack\lib\Compilation.js:~1514] [pc=0000016F07884402](this=0x024f2d6ba8f9 <Tapable m…

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 00007FF7B12BD7AA v8::internal::GCIdleTimeHandler::GCIdleTimeHandler+4618
 2: 00007FF7B126B736 uv_loop_fork+86646
 3: 00007FF7B126C1FD uv_loop_fork+89405
 4: 00007FF7B169454E v8::internal::FatalProcessOutOfMemory+798
 5: 00007FF7B1694487 v8::internal::FatalProcessOutOfMemory+599
 6: 00007FF7B1747F64 v8::internal::Heap::RootIsImmortalImmovable+14068
 7: 00007FF7B173DD72 v8::internal::Heap::CollectGarbage+7234
 8: 00007FF7B173C588 v8::internal::Heap::CollectGarbage+1112
 9: 00007FF7B1745EB7 v8::internal::Heap::RootIsImmortalImmovable+5703
10: 00007FF7B1745F36 v8::internal::Heap::RootIsImmortalImmovable+5830
11: 00007FF7B187DC6D v8::internal::Factory::AllocateRawArray+61
12: 00007FF7B187E602 v8::internal::Factory::NewFixedArrayWithFiller+66
13: 00007FF7B18C52DE v8::internal::wasm::AsmType::Void+86510
14: 00007FF7B18C599D v8::internal::wasm::AsmType::Void+88237
15: 00007FF7B194F6B4 v8::internal::StoreBuffer::StoreBufferOverflow+123924
16: 0000016F06950481

error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

@sativ01 as I mentioned in the part that you quoted, I am using webpack --watch with the caching plugin instead of WDS.

All I did was take the release version of my webpack config and change mode: "production", to

mode: 'development',
devtool: 'source-map',
watch: true,
watchOptions: {
    aggregateTimeout: 1500,
    ignored: ['node_modules']
},

This is the watch config. Then I added the caching plugin, which lives in the common file for my webpack config. I added this to the plugins array:

new HardSourceWebpackPlugin()

That's it. From there it worked great for me: no memory leaks, and it detects changes and rebuilds quickly. The one thing I would like to do better in my setup is to have the notifier plugin work properly every time watch detects a change and builds. Right now it only notifies me after the first build.
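Putting the pieces from this comment together, a minimal sketch of the whole config might look like this (the watch options and plugin are from the comment above; everything else is assumed):

// webpack.config.js — sketch, assuming hard-source-webpack-plugin is installed
const HardSourceWebpackPlugin = require('hard-source-webpack-plugin');

module.exports = {
  mode: 'development',
  devtool: 'source-map',
  // plain webpack --watch instead of webpack-dev-server
  watch: true,
  watchOptions: {
    aggregateTimeout: 1500,
    ignored: ['node_modules'],
  },
  plugins: [
    new HardSourceWebpackPlugin(),
  ],
};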

Gotcha, can confirm it persists after updating as well.

    "webpack": "^4.26.1",
    "webpack-cli": "^3.1.2",
    "webpack-dev-server": "^3.1.10"

I've had luck reducing memory usage quite a bit by replacing any use of [contenthash] with [chunkhash]. In my case it was only used by mini-css-extract-plugin, coming from create-react-app's defaults. This is in addition to { splitChunks: { chunks: 'all' } }.

I.e.: 'static/css/[name].[contenthash:8].css' -> 'static/css/[name].[chunkhash:8].css'
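In config terms, that swap looks something like this (a sketch; the filename pattern comes from create-react-app's defaults as mentioned above):

// webpack config — sketch replacing [contenthash] with [chunkhash]
const MiniCssExtractPlugin = require('mini-css-extract-plugin');

module.exports = {
  // ...
  plugins: [
    new MiniCssExtractPlugin({
      filename: 'static/css/[name].[chunkhash:8].css', // was [contenthash:8]
    }),
  ],
};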

@alexander-akait I still have no reproducible example, but I think I can already tell [in my case at least, and I assume things are similar for many others] that the issue is not a memory leak but a “cache leak”.

The memory stays stable and is super clean, but the cache goes berserk. We have a Next.js project that persists its cache on disk, and the .pack files are close to 200 MB. In there are Emotion strings with a line length of > 22,000 (22k) characters. And those files keep growing.

I am not sure, but this may be related:

https://github.com/vercel/next.js/issues/30330

and @sokra already had a fix there

I can try. I am getting this error while working on a child-compiler thing, which is why I think it's a hot candidate. I'm pretty swamped right now, but I will try not to forget to create the example.

Same issue. I don't know why it was even closed in the first place.

Facing the same issue

{ splitChunks: { chunks: "all" } } and chunkhash have been successful for me in increasing the time before this becomes a problem, but it still does eventually. With the dev server running, each change makes my rebuild about a second longer than the previous one, before crashing at around 50 seconds.

Looking through the in-memory files at localhost:8080/webpack-dev-server, I can see that it’s accumulated bundle after bundle, even with CleanWebpackPlugin (this is for a site that’s supposed to have just one bundle):

[screenshot: in-memory file listing showing bundle after accumulated bundle]

I’ve had some success just not using any pseudorandom hash names, and instead using something deterministic that will definitely be overwritten when the bundle is rebuilt, like bundle.[name].js. I’ll probably slap a NODE_ENV check in there to swap that out for a content hash for production builds.
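That NODE_ENV switch might look something like this (a sketch; the deterministic bundle.[name].js name is from the comment above, the rest is assumed):

// webpack config — sketch: deterministic filename in dev so rebuilds overwrite
// each other, content hash only for production builds
const isProd = process.env.NODE_ENV === 'production';

module.exports = {
  // ...
  output: {
    filename: isProd ? '[name].[contenthash:8].js' : 'bundle.[name].js',
  },
};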

Please use the latest terser-webpack-plugin version.