webpack: Webpack 4 has a memory leak in development mode

Do you want to request a feature or report a bug? bug

What is the current behavior? Memory usage keeps increasing when running webpack 4 in development mode and is never released, so there is a memory leak. Eventually node throws the error JavaScript heap out of memory. Looking at the heap snapshot, there were many repeated String objects which were compiled by webpack. If the current behavior is a bug, please provide the steps to reproduce. Run webpack in development mode with , update the business code and wait for the rebuild, then update the code again… you will see the memory increasing.

What is the expected behavior? Memory should be released in time. If this is a feature request, what is the motivation or use case for changing the behavior?

Please mention other relevant information such as the browser version, Node.js version, webpack version, and Operating System. webpack@4.2.0

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 167
  • Comments: 156 (35 by maintainers)

Most upvoted comments

Got the same issue when developing with hot reload.

Ok this leak is a tricky one. Here is what happens:

From the devtools (when using the memory profiler) you can see leaked Compilation objects. They are held by this chain:

image

Some module.exports has a webpack property holding the Compilation object. The webpack property is this one:

image

https://github.com/postcss/postcss-loader/blob/928d5c41d5e2bdec130b3c0899760889466ae7bd/lib/index.js#L93

It’s added to the postcss context.

The context should be temporary, but it's actually Object.assigned onto the object returned by cosmiconfig:

image

https://github.com/michael-ciniawsky/postcss-load-config/blob/348fb5b62fcd01f93cfc768859a82458e093d035/index.js#L59

cosmiconfig returns the module.exports for the .postcssrc.js file. But why is this Module still held?

cosmiconfig uses require-from-string, which does the following to load the file:

image

https://github.com/floatdrop/require-from-string/blob/b81e995c6ff82fbf71d9ee7a9990b10794fecb98/index.js#L24-L29

This doesn't add the module to the require.cache, but it still adds the module as a child of the parent module. That's the leak.

Some things could have been done better higher in the chain, but they are not the root cause.

  • webpack could remove the Compilation reference from the loader context once the loader finishes compiling. This would decrease the amount of leaked memory a lot.
  • postcss-load-config could assign the context to a new object, like assign({}, config, ctx), instead of modifying the object (see the sketch below). This would decrease the amount of leaked memory.
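
A minimal sketch of that non-mutating variant, where config is the cached object returned by cosmiconfig and ctx is the per-build loader context (both names follow the linked source):

// Instead of mutating the cached config object (which is the long-lived
// module.exports of .postcssrc.js), merge everything into a fresh object:
const merged = Object.assign({}, config, ctx);
// The cached `config` never retains `ctx` (and thus the Compilation),
// so each build's context can be garbage-collected after the build.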

How to fix it?

It's actually already fixed in require-from-string version 2 via this commit:

image

https://github.com/floatdrop/require-from-string/commit/ca2b81f56cc6d5480a25ca8b2be5887de2dfb53c
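
The essence of that fix (a simplified sketch of the linked commit, not a verbatim copy) is to detach the temporary module from its parent's children array once it has been compiled:

const Module = require('module');
const path = require('path');

function requireFromString(code, filename) {
  const parent = module.parent;
  const m = new Module(filename, parent);
  m.filename = filename;
  m.paths = Module._nodeModulePaths(path.dirname(filename));
  m._compile(code, filename);

  const exports = m.exports;
  // The Module constructor pushed `m` onto parent.children; remove it
  // again so the module (and everything its exports hold) can be GC'd.
  if (parent && parent.children) {
    parent.children.splice(parent.children.indexOf(m), 1);
  }
  return exports;
}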

So cosmiconfig can update its dependency, which it already did in version 3 via this commit:

https://github.com/davidtheclark/cosmiconfig/commit/3846b1186376c1133e00ffb576434d05d8eb6a01

They actually state that this fixes a memory leak in require-from-string.

So postcss-load-config has to update its cosmiconfig dependency. cc @michael-ciniawsky

Note that cosmiconfig is already at version 5, but this seems to be unusable because of this bug: https://github.com/davidtheclark/cosmiconfig/issues/148. It would break watching the config file.

They are proposing to use https://github.com/sindresorhus/clear-module, but this package has the same memory leak bug that require-from-string had. cc @sindresorhus

So my proposal is to update cosmiconfig only to ^3.1.0.


Here is a little guide how to find memory leaks in webpack.

  • Node.js version 10 is needed.
  • Attach the debugger/devtools. You could use process._debugProcess to attach to a running node process (see the sketch below).
  • Change the file a couple of times so that everything is in the cache.
  • Take a Heap snapshot. (1)
  • Change the file two times, ending back at the previous state.
  • Take a Heap snapshot. (2)
  • Change the file two times, ending back at the previous state.
  • Take a Heap snapshot. (3)
  • In the devtools select snapshot 3.
  • Choose Objects allocated between Snapshot 1 and Snapshot 2
  • Filter for Compilation
  • Open Compilation group.
  • Select one of the Compilation objects
  • Take a look at the Retainer list. It’s automatically expanded in the correct way.

image
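
For the attach step, a small sketch (assuming 12345 is the pid of your webpack watch process, e.g. taken from ps):

// Run from a second terminal, e.g. `node -e "process._debugProcess(12345)"`.
// This activates the inspector in the target process, which then shows up
// under chrome://inspect, where you can take heap snapshots from devtools.
process._debugProcess(12345);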

I just upgraded from 3.10.0 to 4.5.0 and I’m seeing this sporadically in development:

i 「wdm」: Compiling...
webpack building...

<--- Last few GCs --->

[9284:000000000028FA40]  1100280 ms: Mark-sweep 1385.1 (1411.9) -> 1385.1 (1411.9) MB, 292.3 / 0.1 ms  allocation failure GC in old space requested
[9284:000000000028FA40]  1100624 ms: Mark-sweep 1385.1 (1411.9) -> 1385.1 (1407.9) MB, 343.1 / 0.1 ms  last resort GC in old space requested
[9284:000000000028FA40]  1100867 ms: Mark-sweep 1385.1 (1407.9) -> 1385.1 (1407.9) MB, 243.6 / 0.1 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 000003025D1257C1 <JSObject>
    0: builtin exit frame: lastIndexOf(this=000003B48B1F6031 <Very long string[606700]>,00000315AAE8BF89 <String[1]\: \n>)

    1: /* anonymous */(aka /* anonymous */) [C:\www\node\poject\node_modules\webpack-sources\node_modules\source-list-map\lib\SourceListMap.js:~100] [pc=0000029834AF4F94](this=0000006331D822D1 <undefined>,sln=00000157ABFAF271 <SourceNode map = 0000026155AF2B41>)
 ...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node_module_register
 2: v8::internal::FatalProcessOutOfMemory
 3: v8::internal::FatalProcessOutOfMemory
 4: v8::internal::Factory::NewRawTwoByteString
 5: v8::internal::Smi::SmiPrint
 6: v8::internal::StackGuard::HandleInterrupts
 7: v8::internal::SlicedString::SlicedStringGet
 8: v8_inspector::protocol::Debugger::API::SearchMatch::fromJSONString
 9: v8_inspector::protocol::Debugger::API::SearchMatch::fromJSONString
10: 0000029833B86B21

The only change I made to my config file was adding mode: 'development'.

Excellent solution. Keep it up.

If you try to edit the same file over and over again, does the memory footprint keep increasing as webpack keeps rebuilding?

If I keep editing a single file and do a save after edit, webpack increases memory by ~30MB. The project is https://github.com/Microsoft/language-server-protocol-inspector if you are interested.

I’m linking to a dev version of vue-cli that’s using webpack 4.15.1:

image

Here is a gif. As you see, all I’m doing is

  • Delete a line
  • Save
  • Watch webpack memory usage increase by 10-30MB
  • Bring that line back
  • Watch webpack memory usage increase by another 10-30MB

mem

Hmm, so these reports aren't enough for you? 90% of them point to the incorrectly implemented dev-server caching… And yes, that is apparently the main problem, which still hasn't been fixed, but issue closed, np.

@CarterLi Can you create minimum reproducible test repo?

@afwn90cj93201nixr2e1re I strongly recommend using English for communication. The dev server's memory exhaustion problem during long sessions is known and is open in the dev server repository; there is no point in a duplicate, and it cannot be fixed on the webpack side. Reports are not enough? Half of them are just spam without any information; I can't fix something somewhere without knowing what the problem is, sorry, I'm not a genie or a magician. The other half are either no longer relevant and outdated, or in third-party plugins, which we also can't fix. If you want a quick solution, create a well-described issue with an example; if you want it even faster, help with the solution. This is open-source software.

any progress on this one?

@evilebottnawi as soon as we opt into code splitting, we may have a memory leak.

I finally have a small project that launches webpack watch and a “scrambler” (something that edits your entry points for you automatically) side by side => https://github.com/Sinewyk/webpack_leak_6929

By editing the node --max_old_space_size=X parameter in the package.json from 50 to 75 we get an OOM in ~15 to ~60 seconds.

Install it, then yarn test, and you may use something like GENERATE_HEAP_DUMP=true HEAPDUMP_INTERVAL=5000 yarn test to see the code-splitting memory leak in action.

Finally I understand why I saw tons of SyncBailHook and similar objects in the previous dumps: apparently that's how all the various chunk optimizations are registered. And it leaks the Compilation objects between runs.

@afwn90cj93201nixr2e1re There is nothing difficult about opening a new issue with a reproducible example of the error/hang/leak etc. Please use only English in the future.

I found two leaks, both fixed in the PR referenced above.

There is still one known leak, but I can't do much about it. The in-memory filesystem piles up files when they contain a hash, e.g. 656cd54965df5bcf669a.hot-update.json

I returned to a git branch after a two-week vacation, installed the latest versions of Webpack and css-loader, and promptly hit this error. I was able to prevent it by ensuring that I had this in my config:

optimization: {
    splitChunks: {
        chunks: 'all'
    }
}

I had previously been omitting the entire optimization property when building for karma.

postcss-loader v2.1.6 🎉

rm -rf [package-lock.json] node_modules && npm cache clean -f && npm i

@salemhilal That's not ideal, of course. It is not unreasonable to expect webpack-dev-middleware to prune no-longer-needed files when their hash changes.

I think we can fix this behavior for the next major release

Something to note here: if you are using watch mode or the dev-server, webpack opts to use an in-memory file system. This means that any output whose name changes between builds [hashed file names] will persist in the memory file system forever, until watch mode is killed and started again.

We need a minimal reproducible repo; without one it makes no sense to report that you have a problem. Thanks!

I’ll take a look…

I'm closing the issue because there are many different problems here and it is not possible to track them all. Many have been fixed; if you encounter a problem, please create a new issue with a reproducible test repo. Thanks.

I did some more investigation on my leak involving dojo-webpack-plugin (see the updated test case) and I found the root cause of the leak.

Long story short: it is probably not webpack’s fault. The gist of what dojo-webpack-plugin is doing is the following.

compiler.hooks.compilation.tap("test", compilation => {
  // a new context object is created for every compilation…
  const context = Object.create(this, {
    compilation: { value: compilation },
  });
  let fn = (function() {
    console.log("Making compilation");
  }).bind(context);
  // …and a new `make` tap bound to that context is registered on the
  // long-lived compiler; taps are never removed, so every Compilation
  // stays reachable forever.
  compiler.hooks.make.tap("test", fn);
});

As you can see, it is registering a Compiler hook that internally holds a reference to each Compilation object. This leaks all compilations via the tapable hook. I also tried the equivalent webpack 3 code and it leaks the same.

I will be reporting this to the plugin author, since I guess this is not something a plugin should do, is it?
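
For what it's worth, a leak-free variant of that pattern (a sketch, not the plugin's actual fix) registers the compiler-level tap once and only remembers the latest compilation:

// Register the compiler-level hooks exactly once, outside the
// compilation hook, and keep only a reference to the *current*
// compilation so previous ones can be garbage-collected.
let currentCompilation = null;

compiler.hooks.compilation.tap("test", compilation => {
  currentCompilation = compilation; // overwrite, don't add new taps
});

compiler.hooks.make.tap("test", () => {
  // use currentCompilation here; nothing retains the previous ones
  console.log("Making compilation");
});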

After I commented out devtool: 'inline-source-map', it worked without the memory exception, though obviously also without source maps and normal debugging.

Then I tried to add TerserPlugin to the optimization (or plugins) section, and it fails again, so evidently there is a problem with the source-map library: node_modules\source-map\lib\util.js (Terser) and node_modules\source-map\lib\source-node.js (devtool: 'inline-source-map').

So I stick with the inline-cheap-source-map option; at least it makes hot reload possible.

One workaround is to increase node’s memory allocation. You can do this by setting the node flag --max_old_space_size when calling node, or setting the default node flags environment variable:

export NODE_OPTIONS="--max_old_space_size=4096"

Project git: https://github.com/zD98/webpack-memory-test, then npm start. Because the repo is very small, multiple quick code-changing operations increase the memory, and it is not released.

I confirm that devtool: 'source-map' causes a huge memory leak. But it is not the only problem: devtool: 'none' helps, but memory still leaks slowly.

I’ve tried what @michael-ciniawsky suggested but still getting the error.

Seems like it’s happening at the same place as ElvisKang’s case

Anybody still having this?

<--- Last few GCs --->

[69494:0x103000000]  3012592 ms: Mark-sweep 1392.2 (1446.2) -> 1392.1 (1446.2) MB, 338.9 / 0.0 ms  allocation failure GC in old space requested
[69494:0x103000000]  3012896 ms: Mark-sweep 1392.1 (1446.2) -> 1392.1 (1415.2) MB, 303.1 / 0.0 ms  last resort GC in old space requested
[69494:0x103000000]  3013227 ms: Mark-sweep 1392.1 (1415.2) -> 1392.1 (1415.2) MB, 330.9 / 0.0 ms  last resort GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x1552f98a55e9 <JSObject>
    1: /* anonymous */(aka /* anonymous */) [<HOME_DIR>/<PROJECT_ROOT>/node_modules/webpack/lib/Stats.js:~544] [pc=0xbcb6ffd5750](this=0x1552a82822d1 <undefined>,reason=0x155294bf5431 <ModuleReason map = 0x155242b8eb19>)
    2: arguments adaptor frame: 3->1
    3: map(this=0x155266219dd1 <JSArray[1688]>)
    4: fnModule(aka fnModule) [<HOME_DIR>/<PROJECT_ROOT>/node_modu...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: node::Abort() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 2: node::FatalTryCatch::~FatalTryCatch() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 3: v8::internal::V8::FatalProcessOutOfMemory(char const*, bool) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 4: v8::internal::Factory::NewCodeRaw(int, bool) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 5: v8::internal::Factory::NewCode(v8::internal::CodeDesc const&, unsigned int, v8::internal::Handle<v8::internal::Object>, bool, int) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 6: v8::internal::CodeGenerator::MakeCodeEpilogue(v8::internal::TurboAssembler*, v8::internal::EhFrameWriter*, v8::internal::CompilationInfo*, v8::internal::Handle<v8::internal::Object>) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 7: v8::internal::compiler::CodeGenerator::FinalizeCode() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 8: v8::internal::compiler::PipelineImpl::FinalizeCode() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
 9: v8::internal::compiler::PipelineCompilationJob::FinalizeJobImpl() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
10: v8::internal::Compiler::FinalizeCompilationJob(v8::internal::CompilationJob*) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
11: v8::internal::OptimizingCompileDispatcher::InstallOptimizedFunctions() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
12: v8::internal::StackGuard::HandleInterrupts() [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
13: v8::internal::Runtime_StackGuard(int, v8::internal::Object**, v8::internal::Isolate*) [<HOME_DIR>/.nvm/versions/node/v9.11.2/bin/node]
14: 0xbcb6da842fd
error Command failed with signal "SIGABRT".

Take a look at https://github.com/Sinewyk/webpack_leak_6929 for a practical example of how to quickly generate heapdumps and OOM errors (you need to reduce your use case to what truly breaks it). Copy/paste your config and check that it breaks, then remove everything one by one until it doesn't break anymore; isolate the problem.

Then take a look at https://github.com/webpack/webpack/issues/6929#issuecomment-403441611 for a technical guide on how to dissect a heap dump and help find the leak.

Once you have the heapdumps you can probably also just check out the documentation of the chrome dev tools for further help.

Edit: we know (and accept) there's a small leak when using HMR, so this thread is about "unreasonable" (buggy) leaks, not "known" leaks (like the example after 13 hours of coding: just restart when you go to pee or during lunch break and you're good to go).

webpack 4.26.0 node 11.0.0

After dozens of hours of continuous work it crashes with this error:

Child html-webpack-plugin for "index.html":
     1 asset
    Entrypoint undefined = index.html
       4 modules
 95% emitting HtmlWebpackPlugin                                              
<--- Last few GCs --->

[65188:0x103800000] 38818979 ms: Mark-sweep 1385.2 (1411.5) -> 1385.2 (1412.0) MB, 536.8 / 0.0 ms  (average mu = 0.124, current mu = 0.010) allocation failure GC in old space requested
[65188:0x103800000] 38819308 ms: Mark-sweep 1385.2 (1412.0) -> 1385.2 (1412.0) MB, 328.8 / 0.0 ms  (average mu = 0.080, current mu = 0.000) allocation failure GC in old space requested


<--- JS stacktrace --->

==== JS stack trace =========================================

    0: ExitFrame [pc: 0xfefa48cfb7d]
    1: ConstructFrame [pc: 0xfefa4889e66]
    2: StubFrame [pc: 0xfefa5724e60]
Security context: 0x32074a41d969 <JSObject>
    3: new Script(aka Script) [0x320794a827d1] [vm.js:80] [bytecode=0x3207ed2ba891 offset=375](this=0x3207a5982691 <the_hole>,0x3207b5482309 <Very long string[565045]>,0x3207b54ac9a9 <Object map = 0x3207613d46e1>)
    4: ConstructFrame [pc: 0xfefa4889d53]
    5: StubFrame [pc: 0xfe...

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0x10003a9d9 node::Abort() [/usr/local/bin/node]
 2: 0x10003abe4 node::FatalTryCatch::~FatalTryCatch() [/usr/local/bin/node]
 3: 0x10019ed17 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 4: 0x10019ecb4 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/local/bin/node]
 5: 0x1005a5882 v8::internal::Heap::FatalProcessOutOfMemory(char const*) [/usr/local/bin/node]
 6: 0x1005a4838 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [/usr/local/bin/node]
 7: 0x1005a2443 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/local/bin/node]
 8: 0x1005a2925 v8::internal::Heap::CollectAllAvailableGarbage(v8::internal::GarbageCollectionReason) [/usr/local/bin/node]
 9: 0x1005aed91 v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationSpace, v8::internal::AllocationAlignment) [/usr/local/bin/node]
10: 0x1005814ea v8::internal::Factory::AllocateRawOneByteInternalizedString(int, unsigned int) [/usr/local/bin/node]
11: 0x10058187a v8::internal::Factory::NewOneByteInternalizedString(v8::internal::Vector<unsigned char const>, unsigned int) [/usr/local/bin/node]
12: 0x1006fa8e4 v8::internal::StringTable::AddKeyNoResize(v8::internal::Isolate*, v8::internal::StringTableKey*) [/usr/local/bin/node]
13: 0x1001f2b2b v8::internal::AstValueFactory::Internalize(v8::internal::Isolate*) [/usr/local/bin/node]
14: 0x1002c1fff v8::internal::(anonymous namespace)::FinalizeTopLevel(v8::internal::ParseInfo*, v8::internal::Isolate*, v8::internal::UnoptimizedCompilationJob*, std::__1::forward_list<std::__1::unique_ptr<v8::internal::UnoptimizedCompilationJob, std::__1::default_delete<v8::internal::UnoptimizedCompilationJob> >, std::__1::allocator<std::__1::unique_ptr<v8::internal::UnoptimizedCompilationJob, std::__1::default_delete<v8::internal::UnoptimizedCompilationJob> > > >*) [/usr/local/bin/node]
15: 0x1002bfd4f v8::internal::(anonymous namespace)::CompileToplevel(v8::internal::ParseInfo*, v8::internal::Isolate*) [/usr/local/bin/node]
16: 0x1002c0ed5 v8::internal::Compiler::GetSharedFunctionInfoForScript(v8::internal::Isolate*, v8::internal::Handle<v8::internal::String>, v8::internal::Compiler::ScriptDetails const&, v8::ScriptOriginOptions, v8::Extension*, v8::internal::ScriptData*, v8::ScriptCompiler::CompileOptions, v8::ScriptCompiler::NoCacheReason, v8::internal::NativesFlag) [/usr/local/bin/node]
17: 0x1001a8416 v8::ScriptCompiler::CompileUnboundInternal(v8::Isolate*, v8::ScriptCompiler::Source*, v8::ScriptCompiler::CompileOptions, v8::ScriptCompiler::NoCacheReason) [/usr/local/bin/node]
18: 0x10005ee47 node::contextify::ContextifyScript::New(v8::FunctionCallbackInfo<v8::Value> const&) [/usr/local/bin/node]
19: 0x100226d47 v8::internal::FunctionCallbackArguments::Call(v8::internal::CallHandlerInfo*) [/usr/local/bin/node]
20: 0x100225ff8 v8::internal::MaybeHandle<v8::internal::Object> v8::internal::(anonymous namespace)::HandleApiCallHelper<true>(v8::internal::Isolate*, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::HeapObject>, v8::internal::Handle<v8::internal::FunctionTemplateInfo>, v8::internal::Handle<v8::internal::Object>, v8::internal::BuiltinArguments) [/usr/local/bin/node]
21: 0x1002259c0 v8::internal::Builtin_Impl_HandleApiCall(v8::internal::BuiltinArguments, v8::internal::Isolate*) [/usr/local/bin/node]
22: 0xfefa48cfb7d 
23: 0xfefa4889e66 
Abort trap: 6

We made this patch-package https://gist.github.com/sibelius/baf12454c371e9d6c728376c39d9f1e0

This will make just one bundle in development mode, making it much faster and without consuming a lot of memory.

we also use WPS https://github.com/shellscape/webpack-plugin-serve

I kept having core dumps because node was running out of memory during development, so I dug a bit. With my configuration, the leaks were coming from two sources:

  1. When using favicons-webpack-plugin, for some reason, multiple instances of Compiler (and all the caches they reference) are kept in memory and never destroyed as files change. I'll just remove this plugin during development (or maybe I'll find a replacement).

  2. When using [hash] substitutions in bundle filenames or chunk filenames, Webpack (without any plugin) leaks memory because it stores each bundle in memory, indexed by its file name. I created a minimal repository to reproduce, please have a look. Removing [hash] substitutions during development fixed the issue for me (see the sketch below).

I hope it can help someone. Cheers.
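
A minimal sketch of that workaround, assuming a function-style config (webpack 4 lets you export a function that receives env and argv from the CLI):

// webpack.config.js: use hashed filenames only for production builds, so
// the in-memory file system doesn't pile up differently-named bundles on
// every rebuild during watch mode.
module.exports = (env, argv) => ({
  mode: argv.mode,
  output: {
    filename: argv.mode === 'production'
      ? '[name].[chunkhash].js'
      : '[name].js',
  },
});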

the problem is not with webpack-dev-server

we used to use another plugin to do the same as webpack-dev-server in this repo https://github.com/sibelius/webpack-debug, and the crash still happened when doing HMR

can we create a custom HMR strategy and avoid the current one used by webpack?

It happens quite often if you include a large library such as Microsoft's monaco-editor.

I think the problem is not exactly the assets, but the Compilation objects. AFAICT the idea of Compilation is that they should be needed only during a build/rebuild and disposed of shortly after.

In my case, multiple Compilation objects were being kept around, each one with its assets and everything else. The culprit was a custom plugin that was leaking compilations by registering hooks on compiler that internally kept a reference to the compilation.

The following is the essence of the problematic pattern.

compiler.hooks.compilation.tap("test", compilation => {
  // a new tap is added to the long-lived compiler on *every* compilation…
  compiler.hooks.make.tap("test", () => {
    // …and this closure uses (and therefore retains) the compilation
  });
});

Both hook functions are registered on the compiler object, but the inner one also has a closed-over reference to compilation. In this way, the short-lived compilation is retained in memory by the compiler object, which has the same lifetime as the Webpack watch-mode execution.

I do not really understand why the above pattern has emerged or become problematic only with version 4.

As for fixing that, webpack could provide some kind of guard for cases where hooks are added to the compiler while handling a hook that provides access to the compilation, and warn the plugin developer in some way.
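
Such a guard could look roughly like this (a hypothetical sketch using tapable's interception API; nothing like this ships in webpack):

// Warn when a plugin taps a compiler-level hook while a compilation is in
// flight, since such taps tend to close over the Compilation and leak it.
let inCompilation = false;
compiler.hooks.compilation.tap("LeakGuard", () => { inCompilation = true; });
compiler.hooks.done.tap("LeakGuard", () => { inCompilation = false; });

for (const [name, hook] of Object.entries(compiler.hooks)) {
  hook.intercept({
    register(tapInfo) {
      if (inCompilation) {
        console.warn(
          `"${tapInfo.name}" tapped compiler hook "${name}" during a ` +
          "compilation; this often retains the Compilation forever."
        );
      }
      return tapInfo; // tapable interceptors must return the tap info
    },
  });
}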

@zD98 Please create minimum reproducible test repo

@evictor Potentially, but as I understand it, it can be non-trivial to tell when things like autogenerated bundles containing split chunks are no longer needed, since these modules may only be named by hash. Or, what if your app is loaded in the browser, a file is edited which causes an async bundle to get renamed, but then in the browser, your code requests the old async chunk? It would error out unexpectedly because the requested bundle was pruned, which could seem more like a misconfiguration than expected behavior.

It could be a whole lot more effective to allow users to just opt out of using memory-fs, since disk space is much cheaper to consume than memory. The change is only a couple of lines.
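
For illustration, a sketch of what opting out could look like when using webpack-dev-middleware directly (the writeToDisk option exists in newer webpack-dev-middleware releases; treat its availability in your version as an assumption):

const express = require('express');
const webpack = require('webpack');
const middleware = require('webpack-dev-middleware');
const config = require('./webpack.config.js');

const app = express();
const compiler = webpack(config);

// Emit assets to the real file system instead of memory-fs, trading
// cheap disk space for a bounded memory footprint.
app.use(middleware(compiler, {
  publicPath: config.output.publicPath,
  writeToDisk: true,
}));

app.listen(3000);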

Hey there guys, I'm not sure, but it seems that I found some kind of solution for somebody. I was faced with the same problem and decided to switch off the optimization field inside the development config. That was the solution for me.

// ...other 1st-level fields
optimization: {
    minimizer: [
        new UglifyJsPluginInstance({
            parallel: true,
            exclude: ['node_modules'],
        }),
    ],
    runtimeChunk: 'single',
    splitChunks: {
        chunks: 'all',
        maxInitialRequests: Infinity,
        minSize: 0,
        maxSize: 50000,
        minChunks: 2,
        cacheGroups: {
            vendor: {
                test: /\/node_modules\//,
                name: packagesNamesHandler,
            },
        },
        automaticNameDelimiter: '-',
    },
    noEmitOnErrors: true,
    providedExports: false,
},
// ...other 1st-level fields

So, I guess some of this code might lead to memory leaks. Note that there is a field which contains an instance of uglifyjs-webpack-plugin, which was noticed earlier as one of the causes of the leaks. I hope it will be useful for somebody.

I made a very small repo that has this memory leak when using react-hot-loader

https://github.com/sibelius/webpack-debug

I'm finding that a combination of hot reload with devtool: 'source-map' causes a memory leak that takes down webpack after a handful of code changes.

I got this almost every day. I didn't use vue.

Same when I use development mode.

mode = "none" still blows up…I’m not sure mode is the issue.

Btw, most of them (90%) are related to cache issues… 5% to old versions, 4% to old plugins, 1% to other stuff.

We were experiencing similar heap out of memory issues after just 5-15 HMR reloads since our upgrade to Webpack 4.

We fixed it by first making sure we upgraded all loaders used by webpack, and switched out one plugin which seemed to have a memory leak when using it with webpack 4: we switched from https://github.com/jantimon/favicons-webpack-plugin to https://github.com/brunocodutra/webapp-webpack-plugin

Have you removed [chunkhash] or [hash] in the filename of the output option? Like below:

output: {
    path: PATH.build,
    filename: '[name].js'
},

Thanks @Sinewyk. The issue for me occurs after just 3-4 rebuilds; I wish it could last 13 hours! Also, I am not using HMR, so it wouldn't seem related to that. The problem with reducing the build until it isolates the problem is that there is only so much I can remove before the build does not work at all: I start disabling things one by one, the problem continues, until so much is removed that the build no longer works at all. I did try installing and using heapdumps, as in your minimal repro app, but it took so long to even run a single dev build that I couldn't get to the stage where the watch would crash…

@jgcmarins I'm glad to be helpful.

@Neporotovskiy great! What do you mean by packagesNamesHandler? Thanks!

This function takes the context of each module matched by the test field and transforms it to create the name of a separate chunk for that module (see the sketch below). I saw this approach here https://hackernoon.com/the-100-correct-way-to-split-your-chunks-with-webpack-f8a9df5b7758. I'm not sure it is a 100% correct way to do bundle splitting, but it was useful for my home project.
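
A sketch of what such a packagesNamesHandler might look like (hypothetical, modeled on the linked article; the commenter's real implementation may differ):

// Derive a chunk name from the npm package a module belongs to, so each
// package in node_modules lands in its own named vendor chunk.
function packagesNamesHandler(module) {
  // module.context is e.g. ".../node_modules/lodash/fp"
  const match = module.context.match(/[\\/]node_modules[\\/](.*?)([\\/]|$)/);
  const packageName = match[1];
  // "@" from scoped packages is not valid in a chunk name request
  return `npm.${packageName.replace('@', '')}`;
}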

I agree with @PlayMa256, the memory-fs memory usage is growing much less than the process memory.

still having this memory leak issue after upgrading from v3.6.0 to v4.23.1

worker-loader causes a node memory leak:

<--- Last few GCs --->

[2054:0x39e56e0]  54189 ms: Mark-sweep 1373.9 (1460.8) -> 1373.9 (1476.8) MB, 797.7 / 0.0 ms  allocation failure GC in old space requested
[2054:0x39e56e0]  55136 ms: Mark-sweep 1373.9 (1476.8) -> 1373.9 (1445.8) MB, 946.1 / 0.0 ms  last resort GC in old space requested
[2054:0x39e56e0]  55963 ms: Mark-sweep 1373.9 (1445.8) -> 1373.9 (1445.8) MB, 827.1 / 0.0 ms  last resort GC in old space requested

<--- JS stacktrace --->

This issue seems to be present while developing with hot reload. If I look at http://localhost:3000/webpack-dev-server, it keeps adding hot-reload files to memory and does not clean up the old ones. After several hot reloads the result is "out of memory".

I’ve been getting this error a lot as well, happens mostly after quick css adjustments + saves.

From a vue-cli v3 project which uses webpack 4:

image

I also met this problem. I'm not sure whether it relates to vue-loader, because it works normally without vue-loader in one of my projects.

The same problem exists with "webpack": "^3.6.0".

I was talking about the leak in this comment https://github.com/webpack/webpack/issues/6929#issuecomment-383591954. I'd suggest people use the Chrome tooling to trace down the leaks themselves; it's doable with some patience.

Ok, here is a minimal test case for the leak that I am experiencing.

I extracted the example from a much larger project that I am working on. The problem here seems to be caused by a third-party plugin, DojoWebpackPlugin, but in some snapshots I recall seeing other plugins causing the issue.

It appears that webpack is keeping Compilation instances around, probably because of how tapable hooks are used. I don’t know the webpack source code well enough to proceed further in the investigation. See my example README for details.

Guys, please read https://github.com/webpack/webpack/issues/6929#issuecomment-386020396 again. We can't solve this problem because in pure usage webpack has no problems, i.e. the memory leak is in some loader/plugin. Please create a minimal reproducible test repo if you want the problem solved quickly. Thanks!

After I commented out devtool: 'inline-source-map', it worked without the memory exception, though obviously also without source maps and normal debugging.

Then I tried to add TerserPlugin to the optimization (or plugins) section, and it fails again, so evidently there is a problem with the source-map library: node_modules\source-map\lib\util.js (Terser) and node_modules\source-map\lib\source-node.js (devtool: 'inline-source-map').

So I stick with the inline-cheap-source-map option; at least it makes hot reload possible.

👆 Since doing this, we have had no more memory leaks, developing a whole day on a huge project with a steadily running webpack dev server. Just to mention it again…

PS: And all that also without the split-chunks plugin.

I think the issue relates only to the chunk-splitting interface/plugin, so we should free old chunks which are no longer needed after recompilation. To reproduce, just save multiple different changes quickly, for example 5-6 changes in one second. Memory gets freed by GC, but not entirely: a normal Vue app with a lot of components takes around ~300MB; if we save changes it rises to 500MB (and is cleared by GC back to 300MB); if we make too many changes it first gets stuck at 500MB, then rises to 600MB-1GB, and returns to 500MB (not 300MB). So I think this is purely a chunk-splitting plugin issue.

Everyone concerned by this issue: are you using sass-loader?
NB: Does your memory magically stop growing if you stop using sass-loader?
NB: Does your memory magically stop growing if you stop using sass-loader ?