aws-iam-authenticator: Always getting error: You must be logged in to the server (Unauthorized)
I am currently playing around with AWS EKS, but I always get error: You must be logged in to the server (Unauthorized) when trying to run the kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolved my problem.
So, this is what I did:
- create a user to access aws-cli, named `crop-portal`
- create a role for EKS, named `crop-cluster`
- create an EKS cluster via the AWS console with the role `crop-cluster`, also named `crop-cluster` (cluster and role have the same name)
- run `aws configure` for user `crop-portal`
- run `aws eks update-kubeconfig --name crop-cluster` to update the kube config
- run `aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access`
- copy accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly (a rough sketch of this follows the list)
- run `aws sts get-caller-identity`, and now the result says it used the assumed role already:
{
"UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
"Account": "529972849116",
"Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
- run `kubectl cluster-info` and always get error: You must be logged in to the server (Unauthorized)
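A rough sketch of those last few steps (assuming jq is installed; the role ARN is written out from the verify output below, and these are not my exact shell lines):

```sh
aws eks update-kubeconfig --name crop-cluster

# assume the crop-cluster role and capture its temporary credentials
CREDS=$(aws sts assume-role \
  --role-arn arn:aws:iam::529972849116:role/crop-cluster \
  --role-session-name eks-access)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .Credentials.AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .Credentials.SecretAccessKey)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | jq -r .Credentials.SessionToken)

aws sts get-caller-identity   # now reports the assumed-role ARN
kubectl cluster-info          # error: You must be logged in to the server (Unauthorized)
```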
When I run `aws-iam-authenticator token -i crop-cluster`, it gives me a token, and
when I run `aws-iam-authenticator verify -t token -i crop-portal`, it also passes:
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
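As far as I understand, the ARN in that token also has to be mapped in the cluster's aws-auth ConfigMap (or be the identity that originally created the cluster, which is always allowed in). That mapping would be checked with something like this, run with the cluster creator's credentials:

```sh
# only a mapped identity or the cluster creator can reach the API server,
# so use the creator's credentials for this check
kubectl -n kube-system get configmap aws-auth -o yaml
```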
I don’t know what is wrong or what I’m missing. I have tried so hard to get it working, but I really don’t know what to do after this. Some people suggest creating the cluster with the awscli instead of the GUI; I tried both methods and neither of them works. Creating with the awscli or the GUI gives the same result for me.
Please, someone help 😦
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 5
- Comments: 16 (2 by maintainers)
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I’ve tried to solve this case: used the root ARN and a custom ARN, added the user to the role, created the cluster with root, created the cluster with a custom ID, created the cluster in the console with both IDs, used aws, used aws-iam-authenticator, used a custom profile in .aws/config, changed the token in .kube/config to a real token, etc., etc., etc.
`aws sts get-caller-identity` worked. `aws eks --region us-west-2 describe-cluster --name eks --query cluster` worked. `aws eks --region us-west-2 update-kubeconfig --name eks --role-arn arn:aws:iam::************:role/eksrole` worked.
All the information seems correct, but I was still not able to get access to the Kubernetes cluster.
A lot of people have been talking about this error message in communities here and there since 2018, so I will keep this issue open.
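For what it's worth, the --role-arn passed to update-kubeconfig ends up baked into the generated kubeconfig, so every kubectl call tries to assume that role, and the cluster then has to have that role mapped in aws-auth (unless that role is the one that created the cluster). Roughly what gets written (the exact exec block depends on the aws-cli version, and the account ID is masked here):

```yaml
# excerpt of ~/.kube/config after aws eks update-kubeconfig --role-arn ...
users:
- name: arn:aws:eks:us-west-2:************:cluster/eks
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1   # older CLIs write v1alpha1
      command: aws
      args:
        - --region
        - us-west-2
        - eks
        - get-token
        - --cluster-name
        - eks
        - --role-arn
        - arn:aws:iam::************:role/eksrole
```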
Had a similar issue recently. It ended up being something simple that I may not have noticed for a while had there not been another engineer on the team who had run into it before.
The rolearn I had added was a copy & paste of the ARN shown in the console, so it still contained the IAM path and looked something like `rolearn: arn:aws:iam::<account>:role/<path>/${role_name}`.
Turns out you are not supposed to include the ARN path here (not sure why). Removing the path, so that the entry became `rolearn: arn:aws:iam::<account>:role/${role_name}`, fixed the issue for me.
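Roughly how that looks in the aws-auth ConfigMap; the account ID, path, username and group below are placeholders, not my real values:

```yaml
# aws-auth ConfigMap, mapRoles entry
mapRoles: |
  # as copied from the console, with the IAM path -- authentication fails:
  #  - rolearn: arn:aws:iam::111122223333:role/engineering/eks-admin
  # with the path stripped -- this works:
  - rolearn: arn:aws:iam::111122223333:role/eks-admin
    username: eks-admin
    groups:
      - system:masters
```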
If there are any AWS developers who work on EKS: please take a fresh Linux machine, get kubectl, aws and aws-iam-authenticator, make a new AWS account, and try to create a new EKS cluster. And can you update the EKS documentation on the AWS website? Thanks.
Might not be the same cause but I just ran into this and in my case I created my cluster on a different computer as a different IAM user than the computer/user I was trying to access the cluster with.
The instructions from here helped me figure out that I needed to add the user that didn’t create the cluster to the configmap.
From 2nd computer:
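Roughly what I saw there (the cluster name and account ID are placeholders):

```sh
# check which IAM identity this machine is calling AWS as
aws sts get-caller-identity
#   "Arn": "arn:aws:iam::111122223333:user/second-user"   <-- not the cluster creator

aws eks update-kubeconfig --region us-west-2 --name my-cluster
kubectl get svc
# error: You must be logged in to the server (Unauthorized)
```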
From computer/user that created cluster:
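Again roughly, with placeholder names. The creator's credentials can already reach the API server, so from there you edit the aws-auth ConfigMap and map the second user in:

```sh
kubectl -n kube-system edit configmap aws-auth
# then add the other user under data.mapUsers, for example:
#   mapUsers: |
#     - userarn: arn:aws:iam::111122223333:user/second-user
#       username: second-user
#       groups:
#         - system:masters
```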
If you’re hitting this, my suspicion would be that you created the cluster as a different user/role than the one you are trying to access it with. Maybe instead of using the web console, try creating the cluster with the aws-cli or eksctl, for example:
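A sketch, with placeholder cluster name and region, assuming the AWS profile you will later use with kubectl is the active one:

```sh
# confirm which identity will own the cluster before creating it
aws sts get-caller-identity

# create the cluster from the CLI with that same identity
eksctl create cluster --name my-cluster --region us-west-2

# eksctl writes the kubeconfig for you, so this should work without Unauthorized
kubectl get svc
```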
Hope this helps someone.
Just run the below command with the proper region and cluster name; it worked for me: `aws eks update-kubeconfig --region us-west-2 --name my-cluster`