snowflake-connector-nodejs: 503 Slow Down from AWS S3 should be retried by the client
- What version of NodeJS driver are you using?
version 1.6.20
- What operating system and processor architecture are you using?
AWS Lambda
- What version of NodeJS are you using?
AWS Lambda NodeJS
- What are the component versions in the environment (npm list)?
- Server version?
7.16.1
- What did you do?
A few times a day I get the following error:
"errorMessage":"Request to S3/Blob failed.","code":402002,"name":"LargeResultSetError","message":"Request to S3/Blob failed.","response":{"status":503,"statusCode":503,"statusMessage":"Slow Down"
It happens for different queries and is sporadic.
- What did you expect to see?
I expected the client to retry fetching the results after a Slow Down error with a configurable backoff strategy. Instead the client throws an error.
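As a sketch of the retry-with-backoff behavior being requested (this is not the driver's actual internals; names like `retryOn503`, `maxRetries`, and `baseDelayMs` are illustrative, not real driver options):

```javascript
// Hypothetical wrapper: retry a request when S3 answers 503 "Slow Down",
// using exponential backoff with jitter. `fn` is any async operation whose
// failure carries an err.response.status, as in the error shown above.
async function retryOn503(fn, { maxRetries = 5, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // Only retry throttling (503); rethrow anything else, and give up
      // once the retry budget is exhausted.
      const status = err && err.response && err.response.status;
      if (status !== 503 || attempt >= maxRetries) throw err;
      // Exponential backoff with jitter: ~100ms, ~200ms, ~400ms, ...
      const delay = baseDelayMs * 2 ** attempt * (0.5 + Math.random() / 2);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

A caller could wrap the result-chunk fetch in `retryOn503(() => fetchChunk(url))` so sporadic throttling is absorbed instead of surfacing as a `LargeResultSetError`.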
- Can you set logging to DEBUG and collect the logs?
I can’t set the logging to DEBUG
- What is your Snowflake account identifier, if any? (Optional)
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 19
The fix is merged and will be part of the next connector release. Since we just released one a couple of hours ago, this will be in July's release. Alternatively, it can be installed directly from GitHub, e.g. with
npm i https://github.com/snowflakedb/snowflake-connector-nodejs.git
I opened this new issue:
https://github.com/snowflakedb/snowflake-connector-nodejs/issues/514
Hi, and thank you for submitting this request. Could you please add the logging configuration to your code to enable trace ('debug') level logging?
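The exact snippet the commenter intended was not captured in this thread; assuming the standard `snowflake-sdk` API, enabling trace-level logging would look roughly like this:

```javascript
// Assumed configuration call based on the snowflake-sdk API; verify the
// accepted logLevel values against the driver version you are running.
const snowflake = require('snowflake-sdk');

// Enable the most verbose ('trace'/debug) logging before connecting.
snowflake.configure({ logLevel: 'trace' });
```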
Generally our customers don't seem to have such problems, so I'm curious how the 3,500 / 5,500 requests per second, per prefix S3 limit is being hit in AWS, or what else might be the cause of this situation. Please also make sure you sanitize the logs before sharing - or you can just open a case with Snowflake Support if you prefer that method for sharing information. Thank you in advance!