sentry-javascript: [node][7.102.0] Suspected Memory Leak

Is there an existing issue for this?

How do you use Sentry?

Sentry Saas (sentry.io)

Which SDK are you using?

@sentry/node

SDK Version

7.102.0

Framework Version

express.js

Link to Sentry event

No response

SDK Setup

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  release: 'n/a',
  environment: process.env.NODE_ENV,
  integrations: [
    new Sentry.Integrations.Http({ tracing: true }),
    new Sentry.Integrations.Express({
      app,
      methods: ['post', 'get', 'delete'],
    }),
  ],

  tracesSampleRate: 0,
});

Steps to Reproduce

  1. Add app.use(Sentry.Handlers.requestHandler()); as middleware (see the sketch below)
  2. Run requests against the server
  3. Check the memory profile
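
A minimal sketch of the setup those steps describe (the route, port, and load-testing tool are assumptions, not from the original report):

const express = require('express');
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0,
});

const app = express();

// The middleware under suspicion:
app.use(Sentry.Handlers.requestHandler());

app.get('/', (req, res) => res.send('ok'));

app.listen(3000, () => console.log('listening on :3000'));

Then send a batch of requests (e.g. with autocannon or a shell loop) and compare heap snapshots taken before and after.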

Expected Result

n/a

Actual Result

I would like to point out that the leak is not one-to-one; that is, not every request gets stuck in memory. Over dozens of requests, sometimes only a single object is retained, and sometimes more. I didn’t find any particular pattern.

[Screenshots from 2024-02-22: heap profiles showing the retained objects]

You may notice that multiple hubs are created, and the data (in locals) is not removed from the response object (and the response object is not deleted either).

If I remove app.use(Sentry.Handlers.requestHandler());, there are no memory issues.
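
For anyone trying to confirm this locally, one way to capture comparable heap snapshots from the running process is to trigger them on a signal (the signal choice and file handling here are assumptions, just a sketch):

// Write a heap snapshot whenever the process receives SIGUSR2;
// diff the resulting files in Chrome DevTools.
const v8 = require('v8');

process.on('SIGUSR2', () => {
  const file = v8.writeHeapSnapshot(); // default filename is generated from the current time
  console.log(`heap snapshot written to ${file}`);
});

Send kill -USR2 <pid> before and after the request batch, then compare the retained Hub / ServerResponse counts between the two snapshots.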

About this issue

  • Original URL
  • State: open
  • Created 4 months ago
  • Reactions: 4
  • Comments: 30 (18 by maintainers)

Most upvoted comments

@lforst It’s okay. Just work a little bit harder, and one day, you will become one of them.

In any case, I doubt that anyone will give you access to production code or a snapshot from production since this would be a big blow to the project’s security.

@lforst I’m sure any senior engineer with experience can easily reproduce the issue.

Perhaps it is, @Lms24, but what caught my attention was this comment:

I also went through some versions to see when it was introduced. It seems to start in 7.100.0. 7.99 and before works just fine.

Which is true for us.

I have a feeling that it’s this PR because with that one onwards we started storing scopes on spans. I haven’t investigated yet but we may have a bunch of hard references keeping a tree of stuff around. We should fix this somehow.
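
Purely as an illustration of what such a retention chain could look like (hypothetical structures, not the SDK’s actual internals):

// Hypothetical sketch: a long-lived collection of spans, where each span
// holds a hard reference to its scope, which in turn references the request.
const finishedSpans = []; // imagine this living for the process lifetime

function startSpan(scope) {
  const span = { scope, start: Date.now() }; // span -> scope -> req/res
  finishedSpans.push(span); // if this is never pruned, the whole request
  return span;              // tree stays reachable and is never GC'd
}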

We don’t use tracingHandler. I’ve tried to distill our use of Sentry in our main app file (CoffeeScript); see below:

_ = require "lodash"
# `logger` and `config` are app-specific modules, omitted from this distilled excerpt

beforeSend = (event, hint) ->
  if _.get(hint, "originalException.skipSentry")
    logger.debug "Not sending error to sentry: #{event.exception.values[0].value}"
    return null
  isNonErrorException = _.get(event, "exception.values[0].value", "").startsWith('Non-Error exception captured') or _.get(hint.originalException, "message", "").startsWith('Non-Error exception captured')
  if isNonErrorException
    return null

  return event

# See here: https://github.com/getsentry/sentry-javascript/issues/1661
filterIntegrations = (defaultIntegrations) ->
  return defaultIntegrations.filter((x) -> not x.name.match(/onuncaught|onunhandled/i))

Sentry = require "@sentry/node"
Sentry.init({sendTimeout: 5, beforeSend: beforeSend, integrations: filterIntegrations})

process.on("unhandledRejection", (reason, p) ->
  logger.error("Unhandled Rejection at: Promise #{p}, reason: #{reason}")
  if reason.stack? and (config.nodeEnv is "development")
    logger.error reason.stack
  Sentry.captureException(reason)
)

process.on("uncaughtException", (err, origin) ->
  logger.error "Uncaught Exception, origin: #{origin}, reason: #{err.message}"
  #if reason.stack?
  #  logger.error reason.stack
  Sentry.captureException(err)
)

# Setting up the Express app
app = express()

app.use(Sentry.Handlers.requestHandler())

app.use(express.json({limit: "25mb", type: ["application/json", "application/vnd.api+json"]}))

require("./routes")(app)

app.use Sentry.Handlers.errorHandler({shouldHandleError: (err) ->
  if err.isJoi
    return false
  if err.status and err.status > 499
    return true
  return false
})

Take what I write here with a grain of salt, but to me it kinda makes sense that the incorrect order causes issues.

The requestHandler creates a new async context/execution context to avoid leaking data between requests. The tracingHandler on the other hand will create a root span/transaction and set it on the current scope. Now the unfortunate interaction here is that with the incorrect order:

  1. Request comes in
  2. tracingHandler starts a span/transaction and puts it onto the current hub’s scope
  3. requestHandler creates a new async context which in turn creates a new hub with a new scope
  4. The request happens within the async context and finishes
  5. requestHandler async context is exited
  6. tracingHandler ends the span/transaction.

My guess is that this works fine as long as the server load is low, but when multiple simultaneous requests are in flight, the scope that’s active while the tracingHandler is doing its thing might lose transactions which should actually be isolated via async context. Now for some reason (probably because we’re keeping references to them somewhere) the “lost” transactions aren’t garbage collected. A sketch of the mechanism follows below.
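
To make the ordering issue concrete, here is a minimal sketch using Node’s AsyncLocalStorage (which underlies the SDK’s async context strategy); the scope objects are stand-ins, not SDK internals:

const { AsyncLocalStorage } = require('async_hooks');

const als = new AsyncLocalStorage();
const globalScope = { transactions: [] }; // shared fallback, like the global hub's scope

const currentScope = () => als.getStore() ?? globalScope;

// Wrong order: "tracingHandler" work happens before the isolation boundary,
// so the transaction lands on the shared scope...
currentScope().transactions.push({ name: 'GET /route' });

// ...and only then does "requestHandler" enter the per-request context.
als.run({ transactions: [] }, () => {
  console.log(currentScope().transactions.length); // 0 — the transaction is stranded outside
});

console.log(globalScope.transactions.length); // 1, and it grows with every request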

To other folks affected by the leak in this thread: does switching the order (i.e. requestHandler before tracingHandler) fix the issue for you?

Also worth noting that with v8 of the SDK these handlers won’t be required anymore, as performance instrumentation is handled by OpenTelemetry under the hood.
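
For reference, the v8-style setup should reduce to roughly this (a sketch based on the v8 Express guide; check the migration docs for the exact API):

// instrument.js — must run before anything else is imported
const Sentry = require('@sentry/node');

Sentry.init({
  dsn: process.env.SENTRY_DSN,
  tracesSampleRate: 0,
});

// app.js — no requestHandler/tracingHandler middleware needed;
// only the error handler is wired up explicitly, after all routes:
// Sentry.setupExpressErrorHandler(app);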

@lforst servus

Sorry for the late reply, I finally had some time to look more into this. Similar to you, I was unable to reproduce the memory leak using @xr0master’s repo; at least, memory never spiked as much as I anticipated, no matter how much load I put on the server. I ended up taking our app and stripping things down until I finally found the culprit.

I dropped a reproduction here (thanks @xr0master, I just repurposed yours) with some instructions on how to monitor and reproduce it.

https://github.com/Patrick-Ullrich/sentry-mem-leak

I believe the culprit is registering tracingHandler before requestHandler, which the docs do point out not to do.

https://docs.sentry.io/platforms/node/guides/express/:

// From the docs: "The request handler must be the first middleware on the app."

// Creates a memory leak:
app.use(Sentry.Handlers.tracingHandler());
app.use(Sentry.Handlers.requestHandler());

// Everything is fine:
app.use(Sentry.Handlers.requestHandler());
app.use(Sentry.Handlers.tracingHandler());

Since we have tons of different middleware, it took a while to notice that they were reversed in our main app; it had never seemed to cause an issue before.

@lforst let me know if this helps and/or if I can help in any other way.

I also went through some versions to see when it was introduced. It seems to start in 7.100.0. 7.99 and before works just fine.

@xr0master

work a little bit harder, and one day, you will become one of them

doing my best!

In any case, I doubt that anyone will give you access to production code or a snapshot from production since this would be a big blow to the project’s security.

It has happened before and helped tremendously. It also lets us rule out configuration issues. Having a working minimal reproduction and fixing this is in everybody’s interest.