sentry-javascript: [node][7.102.0] Suspected Memory Leak
Is there an existing issue for this?
- I have checked for existing issues https://github.com/getsentry/sentry-javascript/issues
- I have reviewed the documentation https://docs.sentry.io/
- I am using the latest SDK release https://github.com/getsentry/sentry-javascript/releases
How do you use Sentry?
Sentry Saas (sentry.io)
Which SDK are you using?
@sentry/node
SDK Version
7.102.0
Framework Version
express.js
Link to Sentry event
No response
SDK Setup
Sentry.init({
  dsn: process.env.SENTRY_DSN,
  release: 'n/a',
  environment: process.env.NODE_ENV,
  integrations: [
    new Sentry.Integrations.Http({ tracing: true }),
    new Sentry.Integrations.Express({
      app,
      methods: ['post', 'get', 'delete'],
    }),
  ],
  tracesSampleRate: 0,
});
Steps to Reproduce
- add app.use(Sentry.Handlers.requestHandler());
- run a request
- check the heap profile (see the sketch below)
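A minimal sketch of the reproduction, assuming a plain Express app; the /ping route and port are placeholders, and the init mirrors the SDK Setup section above:

const express = require('express');
const Sentry = require('@sentry/node');

const app = express();

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  integrations: [
    new Sentry.Integrations.Http({ tracing: true }),
    new Sentry.Integrations.Express({ app }),
  ],
  tracesSampleRate: 0,
});

// the middleware under suspicion; removing this line removes the leak
app.use(Sentry.Handlers.requestHandler());

app.get('/ping', (req, res) => res.send('pong'));

app.listen(3000, () => {
  // fire a few dozen requests, then take a heap profile, e.g.:
  //   for i in $(seq 1 50); do curl -s localhost:3000/ping > /dev/null; done
  console.log('listening on :3000');
});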
Expected Result
n/a
Actual Result
I would like to point out that the problem is not one-to-one: not every request gets stuck in memory. After dozens of requests, sometimes only a single object is retained, sometimes more; I found no particular pattern.
You may notice that multiple hubs are created, that the data (in locals) is not removed from the response object, and that the response object itself is not released either.
If I remove app.use(Sentry.Handlers.requestHandler());, there are no memory issues.
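A small sketch of one way to observe these retained objects: capture heap snapshots around a batch of requests and diff them in the Chrome DevTools Memory tab (the helper, labels, and --expose-gc usage are assumptions, not from the report):

// run with: node --expose-gc app.js
const v8 = require('v8');

// hypothetical helper: call once before and once after a batch of requests,
// then diff the two snapshots in Chrome DevTools (Memory tab)
function snapshot(label) {
  if (global.gc) global.gc(); // force GC so only truly retained objects remain
  const file = v8.writeHeapSnapshot(`${label}.heapsnapshot`);
  console.log(`heap snapshot written to ${file}`);
}

snapshot('before');
// ... fire a few dozen requests against the server ...
snapshot('after'); // look for Hub / ServerResponse instances that survived GC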
About this issue
- Original URL
- State: open
- Created 4 months ago
- Reactions: 4
- Comments: 30 (18 by maintainers)
@lforst It’s okay. Just work a little bit harder, and one day, you will become one of them.
In any case, I doubt that anyone will give you access to production code or a snapshot from production since this would be a big blow to the project’s security.
@lforst I’m sure any senior engineer with experience can easily reproduce the issue.
Perhaps it is, @Lms24, but what caught my attention was this comment:
Which is true for us.
I have a feeling that it’s this PR, because from that one onwards we started storing scopes on spans. I haven’t investigated yet, but we may have a bunch of hard references keeping a tree of stuff around. We should fix this somehow.
We don’t use tracingHandler. I’ve tried to distill our use of Sentry in our main app file (CoffeeScript); see below:
Take what I write here with a grain of salt, but to me it kind of makes sense that the incorrect order causes issues.
The requestHandler creates a new async context/execution context to avoid leaking data between requests. The tracingHandler, on the other hand, creates a root span/transaction and sets it on the current scope. The unfortunate interaction with the incorrect order is:
- tracingHandler starts a span/transaction and puts it onto the current hub’s scope
- requestHandler creates a new async context, which in turn creates a new hub with a new scope
- the requestHandler async context is exited
- tracingHandler ends the span/transaction
My guess is that this works fine as long as the server load is low, but if multiple simultaneous requests are going on, the scope that’s active while the tracingHandler is doing its thing might lose transactions that should actually be isolated via the async context. For some reason (probably because we’re keeping references to them somewhere), the “lost” transactions aren’t garbage collected.
To other folks affected by the leak in this thread: does switching the order (i.e. requestHandler before tracingHandler, as in the sketch below) fix the issue for you?
Also worth noting that with v8 of the SDK these handlers won’t be required anymore, as performance instrumentation is handled by OpenTelemetry under the hood.
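For reference, a minimal sketch of the order the Sentry v7 Express guide prescribes (routes and app setup elided):

const express = require('express');
const Sentry = require('@sentry/node');

const app = express();
// Sentry.init(...) goes here, before any handlers are attached

// requestHandler must come first: it creates the per-request async
// context (and thus a fresh hub/scope) that isolates requests
app.use(Sentry.Handlers.requestHandler());
// tracingHandler then starts the transaction inside that context
app.use(Sentry.Handlers.tracingHandler());

// ...all routes/controllers...

// errorHandler goes after the controllers, before any other error middleware
app.use(Sentry.Handlers.errorHandler());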
@lforst servus
Sorry for the late reply, I finally had some time to look more into this. Similar to you, I was unable to reproduce the memory leak using @xr0master’s repo; or at least, memory never really spiked as much as I anticipated, no matter how much load I put on the server. I ended up taking our app and stripping things down until I finally found the culprit.
I dropped a reproduction here (thanks @xr0master, I just repurposed yours) with some instructions on how to monitor/reproduce it.
https://github.com/Patrick-Ullrich/sentry-mem-leak
I believe the culprit is setting tracingHandler before requestHandler, which the docs do point out not to do: https://docs.sentry.io/platforms/node/guides/express/
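A minimal sketch of the reversed wiring that triggers the leak (the surrounding app setup is assumed; only the order matters):

// WRONG: tracingHandler runs first, so the transaction is created on the
// hub that is active *before* requestHandler sets up the per-request
// async context
app.use(Sentry.Handlers.tracingHandler());
app.use(Sentry.Handlers.requestHandler());

// FIX: swap the two lines, as in the docs-prescribed order shown earlier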
Since we have tons of different middleware, it took a while to notice that the handlers were reversed in our main app; it had never seemed to cause an issue before.
@lforst let me know if this helps and/or if I can help in any other way.
I also went through some versions to see when it was introduced. It seems to start in 7.100.0; 7.99 and earlier work just fine.
@xr0master
doing my best!
It has happened before and helped tremendously. It also lets us rule out configuration issues. Having a working minimal reproduction and fixing this is in everybody’s interest.