aws-sdk-js: Since Version 2.575.0 - CognitoIdentity.getOpenIdTokenForDeveloperIdentity is too slow
Confirm by changing [ ] to [x] below to ensure that it's a bug:
- [x] I've gone through the Developer Guide and API reference
- [x] I've checked AWS Forums and StackOverflow for answers
- [x] I've searched for previous similar issues and didn't find any solution
Describe the bug
Is the issue in the browser/Node.js?
Node.js
If on Node.js, are you running this on AWS Lambda?
No
Details of the Node.js version
v10.7
SDK version number
v2.585.0
To Reproduce (observed behavior)
Today, I updated aws-sdk on my project from version 2.555.0 to the latest version 2.585.0. Then I noticed that the function `cognito.getOpenIdTokenForDeveloperIdentity` runs longer than expected:
- Before updating, it took about 200-300 ms to get the result
- After updating, it takes about 4-5 s to get the result

The following is the code snippet that I use; the application is running on an EC2 machine.
```javascript
const Aws = require('aws-sdk');

async function getToken(username) {
  const cognito = new Aws.CognitoIdentity({ region: process.env.AWS_REGION });
  const Logins = {
    'my-logins': username
  };
  const result = await cognito.getOpenIdTokenForDeveloperIdentity({
    IdentityPoolId: process.env.IDENTITY_POOL_ID,
    Logins,
    TokenDuration: 20,
  }).promise();
  return result.Token;
}
```
UPDATE 1
My buddy found out that the problem has occurred since version 2.575.0.
UPDATE 2 (05 Feb 2020)
- I executed the script on an Ubuntu 18.04 machine and it took only 100 ms to run
- I installed Docker and executed the script in a container built on the base image mhart/alpine-node:10.7. It took about 5 s to run
- The same problem occurs with the base image node:12.16.0-alpine3.11
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 1
- Comments: 33 (10 by maintainers)
This appears to be related to IMDSv2 being the default starting with 2.575.0. I can see that with the new `fetchMetadataToken` method added in that release.

I can reproduce this issue by executing

```
curl -XPUT 'http://169.254.169.254/latest/api/token' -H 'x-aws-ec2-metadata-token-ttl-seconds: 21600'
```

inside a Docker container running on EC2; it never responds. In the SDK, the request times out after 1 second and is retried 3 times, which is where the 4 seconds comes from. If I execute that command on the host, it responds immediately. The reason why IMDSv2 does not work from inside a Docker container is explained here: https://stackoverflow.com/a/62326320/13124514
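The 4-second figure can be checked with a little arithmetic. The following sketch assumes a 1-second timeout per attempt and 3 retries after the initial try, as described in the comment above (these numbers are taken from the comment, not verified against the SDK source):

```javascript
// Worst-case stall from the blocked IMDSv2 token request, assuming a 1 s
// timeout per attempt and 3 retries after the initial attempt (numbers
// from the comment above, not verified against the SDK source).
function worstCaseMetadataDelayMs(timeoutMs, retries) {
  // one initial attempt plus `retries` retries, each hitting the timeout
  return timeoutMs * (1 + retries);
}

console.log(worstCaseMetadataDelayMs(1000, 3)); // 4000, matching the observed ~4 s
```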
I've also noticed that setting `AWS.MetadataService.disableFetchToken = true` doesn't actually modify `self.disableFetchToken` inside the `loadCredentials` method, which is why setting it doesn't do anything. If it did set it correctly, that would resolve our issue by keeping us on IMDSv1.

So it seems there are two issues at play:
- `curl -XPUT 'http://169.254.169.254/latest/api/token'` from inside a Docker container never responds
- `disableFetchToken = true` has no effect

EDIT: This is probably unsupported, but it works:

```javascript
AWS.MetadataService.prototype.disableFetchToken = true;
```
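A minimal model of why the constructor assignment is a no-op while the prototype assignment works (the class and property names mirror the SDK's, but this is a plain-JavaScript sketch, not the SDK's actual code):

```javascript
// Sketch: instances read properties through the prototype chain,
// not from static properties set on the constructor function.
class MetadataService {
  loadCredentials() {
    // Mirrors the SDK checking `self.disableFetchToken` on the instance.
    return this.disableFetchToken ? 'IMDSv1 path' : 'IMDSv2 path';
  }
}

// Assigning to the constructor creates a static property that
// instances never see:
MetadataService.disableFetchToken = true;
console.log(new MetadataService().loadCredentials()); // "IMDSv2 path"

// Assigning to the prototype is inherited by every instance:
MetadataService.prototype.disableFetchToken = true;
console.log(new MetadataService().loadCredentials()); // "IMDSv1 path"
```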
I had the same issue happen in the last two weeks. I finally found that I can run this line before calling into AWS resources, and it resolves my issue. This is for sure a hack, but it gets past my issue for now.
Keep in mind, this only works in plain JavaScript; the TypeScript definition does not expose access to this property.
Hey @khacminh, I believe the SDK didn't make any major updates; this would be how EC2 is handling things. I reached out to the service team and will update you once I hear back.
I was having the same issue when upgrading from 2.574.0 to 2.575.0: the S3 upload response time went from 200 ms to 4 seconds. Setting `disableFetchToken` to `true` worked, and now the response time is back to 200 ms.

I ran into the same issue: calling s3.getObject() on a 33-byte file from within a Docker container on an EC2 instance takes about 5 seconds using node aws-sdk v2.826.0. Disabling the fetch token resolved the issue. I reverted back to aws-sdk v2.574.0 and getObject() was as fast as expected without having to disable the fetch token.
I followed the discussion/links from the StackOverflow answer by @mhassan1 (thanks!). In https://github.com/aws/aws-sdk-ruby/issues/2177 they also discuss the relation with running your own Kubernetes cluster, which we are using. I have not found a definitive solution yet (I don't like pinning the SDK).
@chartrand22 also talks about the IMDSv2 in https://github.com/aws/aws-sdk-js/issues/3024
Hi @ajredniwja,
I tried the workaround from @mhassan1 and it does solve the problem. Could you please take a look at @mhassan1's explanation? If it makes sense to you, please escalate this issue as a bug.
Hi @ajredniwja, can you share whether the investigation is still in progress, whether anything has been found, or whether you require additional info?