sentry-javascript: Sentry HTTP 429 errors causing AWS Lambda InvokeError (status 502 response)
- Review the documentation: https://docs.sentry.io/
- Search for existing issues: https://github.com/getsentry/sentry-javascript/issues
- Use the latest release: https://github.com/getsentry/sentry-javascript/releases
- Provide a link to the affected event from your Sentry account
Package + Version
- @sentry/browser
- @sentry/node 6.17.6
- raven-js
- raven-node (raven for node)
- other:
  - @sentry/serverless 6.17.6
  - @sentry/tracing 6.17.6

Version:
6.17.6
Description
Sometimes, due to heavy Sentry API usage, the Sentry SDK in our Node.js AWS Lambda functions receives HTTP 429 error responses.
The problem is that this error, raised by the Sentry SDK, sometimes results in a seemingly unhandleable InvokeError (and an AWS Lambda HTTP 502 response), which is problematic for us because our actual business logic is working just fine.
2022-02-15T18:50:47.224Z 67dc62db-a670-4e84-ad4a-679145ecd1e1 ERROR Invoke Error
{
"errorType": "SentryError",
"errorMessage": "HTTP Error (429)",
"name": "SentryError",
"stack": [
"SentryError: HTTP Error (429)",
" at new SentryError (/var/task/node_modules/@sentry/utils/dist/error.js:9:28)",
" at ClientRequest.<anonymous> (/var/task/node_modules/@sentry/node/dist/transports/base/index.js:212:44)",
" at Object.onceWrapper (events.js:520:26)",
" at ClientRequest.emit (events.js:400:28)",
" at ClientRequest.emit (domain.js:475:12)",
" at HTTPParser.parserOnIncomingClient (_http_client.js:647:27)",
" at HTTPParser.parserOnHeadersComplete (_http_common.js:127:17)",
" at TLSSocket.socketOnData (_http_client.js:515:22)",
" at TLSSocket.emit (events.js:400:28)",
" at TLSSocket.emit (domain.js:475:12)"
]
}
Our expectation is that internal Sentry errors should not cause an outage in our own APIs.
We haven’t found any way to handle or catch this error, because it appears to fail outside of our Sentry calls such as Sentry.init(), Sentry.wrapHandler(), etc.
We are fairly certain the 429 responses are due to exceeding our Transactions quota.
When we decreased the tracesSampleRate config in Sentry.init() from 0.2 to 0, the error stopped.
We were seeing this issue occur at roughly the same rate as the value we had set for tracesSampleRate. To confirm, we also tried setting it to 1 and observed a 100% error rate.
So currently, our workaround is to disable the feature entirely by setting it to 0, and no more unhandled Sentry 429 HTTP errors have been thrown. Still, having to disable an entire feature so that an external dependency doesn’t break our app doesn’t seem like the correct solution.
For completeness, this is our current config:
Sentry.init({
  debug: SLS_STAGE === 'dev',
  dsn: sentryKey,
  tracesSampleRate: 0,
  environment: SLS_STAGE,
  release: `${SLS_SERVICE_NAME}:${SLS_APIG_DEPLOYMENT_ID}`,
})
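For reference, the handler wiring is roughly as follows (a simplified sketch of the setup described above; doBusinessLogic is a placeholder and the exact imports may differ):

const Sentry = require('@sentry/serverless').AWSLambda

// tracesSampleRate was 0.2 when the errors occurred; it is now 0 as a workaround.
Sentry.init({ dsn: sentryKey, tracesSampleRate: 0.2, environment: SLS_STAGE })

// The business logic runs inside the SDK wrapper; the SentryError (HTTP 429)
// is thrown by the wrapper itself when it flushes events/transactions, so a
// try/catch inside the handler body never sees it.
exports.handler = Sentry.wrapHandler(async (event) => {
  try {
    return await doBusinessLogic(event) // placeholder for our actual logic
  } catch (err) {
    // our own errors are handled here; the SDK's 429 SentryError is not
    return { statusCode: 500, body: 'handled' }
  }
})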
About this issue
- State: closed
- Created 2 years ago
- Reactions: 12
- Comments: 15 (7 by maintainers)
The ignoreSentryErrors option was released with https://github.com/getsentry/sentry-javascript/releases/tag/6.18.0
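For anyone landing on this later, a minimal sketch of opting in, assuming the option is passed to wrapHandler as introduced in 6.18.0 (check the release notes for the exact shape):

const Sentry = require('@sentry/serverless').AWSLambda

// handler is the unwrapped business-logic function from your application.
exports.handler = Sentry.wrapHandler(handler, {
  // Swallow internal SentryErrors (e.g. "HTTP Error (429)") instead of letting
  // them bubble up and become a Lambda InvokeError / 502 response.
  ignoreSentryErrors: true,
})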
I would love to know if a meeting has taken place to decide whether ignoreSentryErrors should be on or off by default. I wonder how many people have hit this issue due to hitting Sentry transaction limits, only to quickly pay Sentry more money to upgrade their plan…

Sincerely, a disappointed customer
I have to agree: having an error-reporting package that can bring down your application is incredibly scary. I added a Lambda wrapper feeling confident it would sit there silently and not interrupt the actual logic.
The decision not to make ignoreSentryErrors the default behaviour seems arguable. Lots more real-world applications are going to have outages under volume until they stumble across this ticket. At the very least, a too-many-requests error specifically shouldn’t cause downtime.

We’ve just had a fairly catastrophic failure on our lambdas due to Sentry being integrated and us not having set the ignoreSentryErrors option to true. It feels really odd that it isn’t enabled by default. Presumably this is to maintain backwards compatibility, but having it off by default feels like a bug rather than a feature. It’s also not easy to catch in testing before going to production and hitting higher loads of traffic, as you’re less likely to hit a rate limit.
This is the logic that we are hitting.
https://github.com/getsentry/sentry-javascript/blob/1bf988322a6adb931905305747f4fafb742002b1/packages/node/src/transports/base/index.ts#L248-L252
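Paraphrasing the linked lines (a simplified sketch, not the exact source): any non-2xx response from the ingest endpoint rejects the send promise with a SentryError, and 429 is no exception.

// Simplified paraphrase of the linked transport logic; SentryError stands in
// for the class exported by @sentry/utils.
class SentryError extends Error {}

function handleTransportResponse(statusCode, resolve, reject) {
  if (statusCode >= 200 && statusCode < 300) {
    resolve({ status: 'success' })
  } else {
    // A 429 (rate limited) lands here as well, so the rejection bubbles up
    // through the serverless wrapper and becomes the Lambda InvokeError.
    reject(new SentryError('HTTP Error (' + statusCode + ')'))
  }
}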
Sentry doesn’t capture internal SentryErrors, though. Is AWS Lambda monitoring errors that bubble up to certain handlers and setting a response based on that?

Perhaps there is a way we can edit https://github.com/getsentry/sentry-javascript/blob/master/packages/serverless/src/awslambda.ts to address that. Maybe we could monkey-patch whatever AWS Lambda is listening for to filter out SentryError. We could also try not throwing errors for 429s, since they are pretty common.
What do you think?
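For illustration, one possible shape for that "ignore internal errors" behaviour, sketched around the SDK's flush call (hypothetical code, not the actual awslambda.ts implementation):

const Sentry = require('@sentry/node')
const { SentryError } = require('@sentry/utils')

// Hypothetical sketch: treat the SDK's own transport failures (e.g.
// "HTTP Error (429)") as non-fatal during flush, while still surfacing
// anything unexpected.
async function flushIgnoringSentryErrors(timeout) {
  try {
    await Sentry.flush(timeout)
  } catch (err) {
    if (err instanceof SentryError) {
      console.warn('Ignoring internal SentryError:', err.message)
      return
    }
    throw err // real application/runtime errors still propagate
  }
}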