snowflake-connector-nodejs: 503 Slow Down from AWS S3 should be retried by the client

  1. What version of the NodeJS driver are you using?

version 1.6.20

  2. What operating system and processor architecture are you using?

AWS Lambda

  3. What version of NodeJS are you using?

AWS Lambda NodeJS

  4. What are the component versions in the environment (npm list)?

  5. Server version:

7.16.1

  6. What did you do?

A few times a day I get the following error:

"errorMessage":"Request to S3/Blob failed.","code":402002,"name":"LargeResultSetError","message":"Request to S3/Blob failed.","response":{"status":503,"statusCode":503,"statusMessage":"Slow Down"

It happens for different queries and is sporadic.

  7. What did you expect to see?

I expected the client to retry fetching the results after a Slow Down error with a configurable backoff strategy. Instead, the client throws an error.
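
For illustration, a minimal sketch of the kind of client-side retry being asked for, done in application code rather than inside the driver. The runQuery and runQueryWithRetry helpers are hypothetical; the 503 check matches the error shape shown above.

// Illustrative workaround only, not the connector's built-in behavior:
// retry a query when the driver surfaces an S3 "Slow Down" (HTTP 503),
// using exponential backoff with jitter. `connection` is assumed to be
// an already-established snowflake-sdk connection.

// Promisified wrapper around connection.execute (hypothetical helper).
function runQuery(connection, sqlText) {
  return new Promise((resolve, reject) => {
    connection.execute({
      sqlText,
      complete: (err, stmt, rows) => (err ? reject(err) : resolve(rows)),
    });
  });
}

// Matches the error shape from the report: a response with status 503.
function isSlowDown(err) {
  return Boolean(err && err.response && err.response.statusCode === 503);
}

async function runQueryWithRetry(connection, sqlText, maxRetries = 5, baseDelayMs = 500) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await runQuery(connection, sqlText);
    } catch (err) {
      if (!isSlowDown(err) || attempt >= maxRetries) throw err;
      // Exponential backoff with a little random jitter.
      const delayMs = baseDelayMs * 2 ** attempt + Math.random() * 100;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}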

  8. Can you set logging to DEBUG and collect the logs?

I can’t set the logging to DEBUG.

  9. What is your Snowflake account identifier, if any? (Optional)

About this issue

  • State: closed
  • Created a year ago
  • Comments: 19

Most upvoted comments

The fix is merged and will be part of the next connector release. Since we just released one a couple of hours ago, this will land in July’s release. Or it can be installed directly from GitHub, e.g. with npm i https://github.com/snowflakedb/snowflake-connector-nodejs.git

This is interesting @Lumengrid, but it likely has nothing to do with the problem handled in this particular issue (certain failures not being retried).

If you wish us to investigate your problem with that particular column or its data, please either open a Snowflake Support case or open a new issue here and share all the details (table definition, test/mock dataset, reproduction steps, and so on). This way we can keep the issues consistent, organized, and focused. Thank you in advance!

I opened this new issue:

https://github.com/snowflakedb/snowflake-connector-nodejs/issues/514

Hi, and thank you for submitting this request. Could you please add

snowflake.configure({ logLevel: 'trace' });

to your code to configure trace (‘debug’) level logging?

Generally our customers don’t seem to have such problems, so I’m curious to see how the 3,500 / 5,500 requests per second, per prefix S3 limit is being hit in AWS, or what else might be the cause of this situation. Please also make sure you sanitize the logs before sharing, or you can just open a case with Snowflake Support if you prefer that method for sharing information. Thank you in advance!
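
For context on that limit, a back-of-envelope sketch of how concurrent Lambda invocations could exceed the per-prefix GET ceiling. The concurrency and per-invocation fetch rates below are hypothetical; only the 5,500 GET/HEAD requests per second, per prefix figure comes from S3's documented limits.

// Hypothetical back-of-envelope estimate of aggregate GET rate
// against a single S3 prefix.
const S3_GET_LIMIT_PER_PREFIX = 5500; // documented GET/HEAD req/s per prefix

const concurrentLambdas = 300;        // assumed Lambda concurrency
const chunkFetchesPerSecondEach = 20; // assumed result-chunk GETs per invocation

const aggregateRate = concurrentLambdas * chunkFetchesPerSecondEach; // 6,000 req/s
if (aggregateRate > S3_GET_LIMIT_PER_PREFIX) {
  console.log(`~${aggregateRate} req/s against one prefix: expect 503 Slow Down`);
}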