sentry-javascript: Memory leaks after updating from 7.37.1 to 7.37.2

Is there an existing issue for this?

How do you use Sentry?

Sentry Saas (sentry.io)

Which SDK are you using? If you use the CDN bundles, please specify the exact bundle (e.g. bundle.tracing.min.js) in your SDK setup.

@sentry/nextjs

SDK Version

7.37.2

Framework Version

NextJS 13.1.5

Link to Sentry event

No response

SDK Setup

import * as Sentry from "@sentry/nextjs";
import * as Integrations from "@sentry/integrations"; // CaptureConsole, ExtraErrorData

Sentry.init({
  dsn: DSN,
  environment: ENVIRONMENT,
  release: RELEASE,
  // Always sample children of sampled parents; otherwise fall back to the fixed rate.
  tracesSampler: (samplingContext) => !!samplingContext.parentSampled || TRACES_SAMPLE_RATE,
  integrations: [
    // new Sentry.Replay({sessionSampleRate: 0, errorSampleRate: 1, maskAllInputs: true, useCompression: false}), // tried this
    // new Sentry.Replay({sessionSampleRate: 0, errorSampleRate: 1, maskAllInputs: true}), // then tried this
    // then tried without Replay
    new Sentry.BrowserTracing({tracingOrigins: ["redacted"]}),
    new Integrations.CaptureConsole({levels: ["error"]}),
    new Integrations.ExtraErrorData({depth: 10}),
  ],
});

Steps to Reproduce

  1. Update the Sentry packages from 7.37.1 to 7.37.2 and disable replay compression
  2. Observe unexpected changes in server memory utilization and server crashes
  3. Re-enable replay compression
  4. Observe that the memory changes are still present after re-enabling replay compression (i.e. establish that compression enabled/disabled is not the cause)
  5. Remove Replay completely
  6. Observe that the memory changes are still present without Replay (i.e. establish that Replay is not the cause)

Expected Result

No changes to memory utilization, no server crashes

Actual Result

After updating the packages, memory utilization changed dramatically, with huge variations. On several occasions this brought down the instances (503s reported by users). This was consistent across several clients, e.g.:

(screenshots: memory utilization graphs from several clients)

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Comments: 21 (13 by maintainers)

Most upvoted comments

flush is simply waiting for outbound requests to Sentry to be completed. I’d be surprised if this is the culprit. Hosting provider shouldn’t matter either.
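
For reference, a minimal sketch (handler and route are illustrative, not from this issue) of how flush is typically awaited in a Next.js API route; flush(timeout) resolves once queued events have been sent or the timeout elapses, and does not hold on to events afterwards:

import * as Sentry from "@sentry/nextjs";

export default async function handler(req, res) {
  try {
    // ... route logic ...
    res.status(200).json({ ok: true });
  } catch (err) {
    Sentry.captureException(err);
    // Wait up to 2s for the event queue to drain before the process/lambda is frozen.
    await Sentry.flush(2000);
    res.status(500).json({ ok: false });
  }
}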

Due to the spiky nature of the graph, I would put my bet on some large, unexpected computation. Usually, the only heavy computation we do is normalizing/serializing events before sending them to Sentry. It could very well be that req contains a very large object in this case.
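
If that suspicion is right, here is a minimal sketch of two standard SDK options that bound this serialization work (the specific fields to trim are assumptions, not from this issue): normalizeDepth caps how deep nested objects are walked (the SDK default is 3, while ExtraErrorData({depth: 10}) above asks for deeper walks), and beforeSend can drop bulky request payloads before they are normalized:

Sentry.init({
  dsn: DSN,
  // Cap how deeply nested objects (e.g. a large req) are walked during normalization.
  normalizeDepth: 3,
  beforeSend(event) {
    // Strip the request body before the event is serialized and sent.
    if (event.request) {
      delete event.request.data;
    }
    return event;
  },
});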

We’re investigating this.