hermes: Memory leak when using fetch requests in react native
Bug Description
https://github.com/facebook/react-native/issues/39100
The issue above was opened on the react-native repo, but after investigating I can confirm that it is definitely related to Hermes. On React Native 0.69 there is no memory leak when Hermes is turned off.
I’ve rechecked it on the latest version and the behaviour is the same: a memory leak with Hermes turned on, and proper RAM cleanup with Hermes turned off. The demo provided in the issue above is for the newer React Native version.
- I have run `gradle clean` and confirmed this bug does not occur with JSC
- Hermes version: 0.11
- React Native version (if any): 0.69, 0.72.5
- OS version (if any):
- Platform (most likely one of arm64-v8a, armeabi-v7a, x86, x86_64):
Steps To Reproduce
To check example with memory leak:
- Open perf monitor
- Press start button
- Check ram usage
To check example without memory leak:
- Add `:hermes_enabled => false` in the Podfile
- Do steps 1-3 from above
code example: https://github.com/clemensmol/rn-fetch-memoryleak
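For orientation, the shape of the repro is roughly the following sketch: fetch a payload on a timer and discard the result. The endpoint URL and function names here are placeholders, not the actual code; see the repository linked above for the real component.

```ts
// Sketch of the repro shape: fetch a payload on a timer and discard the result.
// ENDPOINT and the helper names are placeholders introduced for illustration.
const ENDPOINT = 'https://example.com/large-payload';

let timer: ReturnType<typeof setInterval> | undefined;

export function startFetchLoop(intervalMs: number = 500): void {
  timer = setInterval(async () => {
    try {
      const response = await fetch(ENDPOINT);
      await response.text(); // read the body, then drop it
    } catch (e) {
      console.warn('fetch failed', e);
    }
  }, intervalMs);
}

export function stopFetchLoop(): void {
  if (timer !== undefined) {
    clearInterval(timer);
    timer = undefined;
  }
}
```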
The Expected Behavior
No memory leak when doing fetch requests
About this issue
- State: closed
- Created 9 months ago
- Comments: 15 (8 by maintainers)
Commits related to this issue
- Mitigate WeakMap cycle leak Summary: Currently, every `set` operation on a `WeakMap` results in a write to `self->valueStorage_`, even if the storage was not resized. This means that a write barrier ... — committed to facebook/hermes by neildhar 8 months ago
- Cherry pick e7b2abefabb6a9671e1d30d7af08cd1f32c9a670 Mitigate WeakMap cycle leak Summary: Currently, every `set` operation on a `WeakMap` results in a write to `self->valueStorage_`, even if the sto... — committed to vmoroz/hermes-windows by vmoroz 8 months ago
- Fix Hermes GC memory leaks (#169) * Cherry pick e7b2abefabb6a9671e1d30d7af08cd1f32c9a670 Mitigate WeakMap cycle leak Summary: Currently, every `set` operation on a `WeakMap` results in a write... — committed to microsoft/hermes-windows by vmoroz 8 months ago
- Mitigate WeakMap cycle leak Summary: Original Author: neildhar@meta.com Original Git: e7b2abefabb6a9671e1d30d7af08cd1f32c9a670 Original Reviewed By: tmikov Original Revision: D51000231 Currently, ev... — committed to facebook/hermes by avp 5 months ago
Hey @Vadko, thanks for preparing this repro. I have been trying to reproduce the issue, and do see a steep, sustained increase in memory consumption without the call to `globalThis.gc()` on both Android and iOS. (I modified your repro to invoke fetch every 50 ms instead of 500 ms to make the issue more severe.)
I then modified your example to periodically invoke the GC (every 100 ms). With that change, the Android memory increase seemed to be resolved. On iOS, I still observed a very gradual leak. Looking at it under Xcode’s allocation profiler, the primary culprit is the `CFString (immutable)` class of allocations, although I also see the `NSURLInternal` size increasing over time. I don’t know what the source of these allocations is. However, that gradual leak also appears under JSC, which means it isn’t a Hermes-specific issue.
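For reference, the periodic-GC modification described above can be sketched as follows. It assumes the runtime exposes `globalThis.gc`, which is not guaranteed in every build, so the call is guarded.

```ts
// Force a JS garbage collection on a timer (every 100 ms, as in the modified repro).
const gcTimer = setInterval(() => {
  const maybeGc = (globalThis as unknown as { gc?: () => void }).gc;
  if (typeof maybeGc === 'function') {
    maybeGc();
  }
}, 100);

// Call clearInterval(gcTimer) once the experiment is over.
```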
So to summarise some inferences: the steep memory growth goes away once the repro makes explicit calls to `globalThis.gc()`.

Hi!
We’re facing a similar issue in one of our apps with low memory usage constraints, and can indeed reproduce with the example that @vadko provided 👍
I’ve changed the example to make more API calls (full code here)
Essentially, I:
I’m still having a hard time understanding why Hermes isn’t garbage collecting as expected. Garbage collection does seem to be happening, as shown in the details below, but it is not fully collecting everything.
The fact that running `gc` manually fixes the RAM increase makes it seem to me that there’s no memory leak, and perhaps no issue on the React Native `fetch` side?
Plus, as shown in the details below, in debug there seems to be a discrepancy between the JS size reported by the Hermes debugger (which increases as expected) and the JS size reported by `HermesInternal.getInstrumentedStats()` (which doesn’t).
Would love some help investigating further though 🙏 It also seems linked to https://github.com/facebook/hermes/issues/982
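To make that comparison repeatable, a small logging helper along these lines can be used. This is only a sketch: `logHermesStats` is a name introduced here, `HermesInternal` is an internal, unstable API, and the exact fields it returns depend on the Hermes version, so every access is guarded.

```ts
// Dump HermesInternal.getInstrumentedStats() so its numbers can be compared
// against the heap sizes shown by the Hermes debugger.
export function logHermesStats(label: string): void {
  const hermes = (globalThis as unknown as {
    HermesInternal?: { getInstrumentedStats?: () => Record<string, unknown> };
  }).HermesInternal;

  const stats = hermes?.getInstrumentedStats?.();
  if (stats != null) {
    console.log(`[${label}]`, JSON.stringify(stats, null, 2));
  } else {
    console.log(`[${label}] getInstrumentedStats is not available`);
  }
}
```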
Experiment details
Device
The measurements shown below were taken on a Pixel 8 running Android 14; however, we’ve also tested with Hades incremental on an Android 9 device.
Here’s the result of `HermesInternal.getRuntimeProperties()` for the Pixel:

1. First, let’s start the app
Metrics are gathered from `adb shell dumpsys meminfo` and `HermesInternal.getInstrumentedStats()`. This is the full result of `HermesInternal.getInstrumentedStats()`:

2. Then let’s make a few hundred API calls and stop after a while
RAM rises progressively. Note that it keeps going up until the app crashes, or until a memory pressure event is received and Hades garbage collection finally runs.
The number of GCs triggered also went up on the JS side, which is surprising! Garbage collection does seem to happen, which explains why the JS allocated bytes stay low. But is the stat correct? As shown at the end of the report, there is a discrepancy with the value reported in the Hermes debugger, for instance.
On the Java side, the garbage collector is also trying to free memory but can’t (and increases the heap size instead), probably because the JS side holds on to the instances.
3. Wait a few minutes
In case garbage collection kicks in on its own, but nope, no changes. The number of GCs on the JS side is the same.
4. Manually trigger JS garbage collection
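A manual trigger for this step might look like the following sketch; the handler name is made up here, and it again assumes the runtime exposes `globalThis.gc`.

```ts
// Handler that can be wired to a dev-only button to force a JS GC on demand.
export function onForceGcPress(): void {
  const maybeGc = (globalThis as unknown as { gc?: () => void }).gc;
  if (typeof maybeGc === 'function') {
    maybeGc();
    console.log('Requested a full JS garbage collection');
  } else {
    console.warn('globalThis.gc is not exposed by this runtime');
  }
}
```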
Running in debug mode
Running the same experiment in debug and checking the Hermes debugger in Flipper, we can see the heap snapshot increase in size, while `HermesInternal.getInstrumentedStats()` doesn’t seem to report it.
The biggest difference between my two heap snapshots is the ~78k strings allocated, corresponding to the request payload:
RAM used: 217 MB
RAM used: 490 MB
Got it. We will try to reproduce it internally. Thanks for creating the repro!
@Almouro Thank you for sharing these detailed findings. They exhibit very unusual memory behaviour, and were exactly the evidence I needed for a full investigation of what is going on.
In particular, the heap snapshots showing that Hermes is allocating and retaining very large strings that are only freed by explicit calls to the GC suggested that something was affecting the GC’s ability to collect the large strings.
I’ve spent some time investigating this, and I believe the root cause of the behaviour you’re observing is a very subtle interaction between `WeakMap`s and the GC. I have a relatively simple mitigation that addresses the collection of those strings and improves memory consumption dramatically in your repro.
That said, this isn’t a general solution for the problem in this issue. The underlying issue with untracked native resources remains, and the repro in the initial report in this issue still stands.
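For intuition, here is a minimal sketch of the kind of `WeakMap` cycle the commit messages describe: an entry whose value references its own key, so key and value are only reachable through each other once outside references are dropped. The actual structure inside React Native’s fetch machinery is more involved; this is only illustrative.

```ts
// Illustrative only: a WeakMap entry whose value references its own key.
const registry = new WeakMap<object, { payload: string; owner: object }>();

function track(key: object, payload: string): void {
  registry.set(key, { payload, owner: key }); // value -> key edge closes the cycle
}

let handle: object | null = { id: 1 };
track(handle, 'x'.repeat(250_000)); // payload on the order of the downloaded blob

// Once the last outside reference is dropped, the entry should be collectable;
// per the discussion above, the large strings were only reclaimed after an
// explicit gc() call.
handle = null;
```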
Objects in Hermes are around 48 bytes, and the 250KB was based on the size of the blob being downloaded. This is just a rough estimate, for the purpose of discussion.
It certainly can, but only if the application needs a lot of working memory. In this case, the GC was overestimating the amount of working memory the application needed because it couldn’t collect the large strings.
Closing this since the WeakMap leak has been fixed, and we have added an API for tracking external memory in aae2c4260781178d7b2ca169811b3bfca9f924d2. That allows you to inform the GC that a given object retains some native memory.