aws-iam-authenticator: error: You must be logged in to the server (Unauthorized) -- same IAM user created cluster

My AWS CLI credentials are set to the same IAM user that I used to create my EKS cluster. So why does kubectl cluster-info dump give me error: You must be logged in to the server (Unauthorized)?

kubectl config view is as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://64859043D67EB498AA6D274A99C73C58.yl4.us-east-2.eks.amazonaws.com
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
contexts:
- context:
    cluster: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
    user: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
current-context: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-2:629054125090:cluster/EKSDeepDive
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - EKSDeepDive
      command: aws-iam-authenticator
      env: null

aws sts get-caller-identity returns:

{
    "UserId": "AIDAJAKDBFFCB4EVPCQ6E",
    "Account": "629054125090",
    "Arn": "arn:aws:iam::629054125090:user/mrichman"
}

Most upvoted comments

Found the issue with help from AWS support - it appears aws-iam-authenticator wasn't picking up the credentials properly from the path.

Manually running

export AWS_ACCESS_KEY_ID=KEY
export AWS_SECRET_ACCESS_KEY=SECRET-KEY
aws-iam-authenticator token -i cluster-name

Then pulling out the token and running

aws-iam-authenticator verify -t k8s-aws-v1.really_long_token -i cluster-name

to make sure it's all working.

Oddly, aws-iam-authenticator did give me a token - I have no idea for what…
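
For what it's worth, the token can tell you: it is just the string k8s-aws-v1. followed by a base64url-encoded presigned STS GetCallerIdentity URL, so you can decode it to see which AWS identity it represents. A rough sketch, assuming jq and python3 are installed and using a placeholder cluster name:

# Pull the bearer token out of the ExecCredential JSON that
# aws-iam-authenticator prints, then decode the presigned URL; its
# X-Amz-Credential parameter shows which access key signed it.
TOKEN=$(aws-iam-authenticator token -i cluster-name | jq -r .status.token)
echo "${TOKEN#k8s-aws-v1.}" | python3 -c 'import sys, base64; s = sys.stdin.read().strip(); print(base64.urlsafe_b64decode(s + "=" * (-len(s) % 4)).decode())'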

You need to map IAM users or roles into the cluster using the aws-auth ConfigMap. This is done automatically for the user who creates the cluster.

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Here is a script example for adding a role:

https://eksworkshop.com/codepipeline/configmap/
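
For illustration, here is a minimal sketch of such a mapping (the user ARN, username, and group are placeholders). Note that applying this as-is would overwrite any existing mapRoles/mapUsers entries, so in practice merge it into the live ConfigMap with kubectl edit -n kube-system configmap/aws-auth:

# Hypothetical example: grant an additional IAM user cluster-admin
# access by mapping it into the system:masters group.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::629054125090:user/another-user
      username: another-user
      groups:
        - system:masters
EOF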

So we were hitting this issue with IAM users that didn't initially create the EKS cluster; they always got error: You must be logged in to the server (Unauthorized) when using kubectl (even though aws-iam-authenticator gave them a token).

We had to explicitly grant our IAM users access to the EKS cluster in our Terraform code.

In my case I created the cluster with a role and then neglected to use the --profile switch when running the update-kubeconfig command. When I modified the command to aws eks update-kubeconfig --name my-cluster --profile my-profile, a correct config was written and kubectl started authenticating correctly.

What this did was modify the exec env in my kubeconfig to:

env:
- name: AWS_PROFILE
  value: my-profile
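
A quick sanity check (the profile name is a placeholder) is to confirm that the profile the kubeconfig references resolves to the identity that created the cluster:

# Should print the same account and ARN that created the cluster.
aws sts get-caller-identity --profile my-profile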

Thank you, it's working…

There are instructions for fixing this issue in the EKS docs and the customer support blog as well.

@napalm684 use this guide: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

With that said, it is not working for me when I try to add an IAM user.

I ended up blowing away the cluster and creating a new one. I never had the issue again on any other cluster. I wish I had better information to share.

Got this today, and the cause & solution were different.

If you created the cluster as an IAM user, not while assuming a role (you can switch to roles in the AWS console from the IAM roles panel), and your kubeconfig was created using the --role-arn parameter, i.e.

aws eks --region eu-central-1 update-kubeconfig --name my-cluster-name --role-arn arn:aws:iam::111222333444:role/my-eks-cluster-role

and you get this error message, then just remove the --role-arn parameter.

My understanding is that you can bind users to a role so they can perform operations on that specific cluster, but for some reason (maybe a missing entry in Trusted entities) the user was not bound to the role at cluster creation time. This is not necessarily a bug, since I suppose I could add myself to my cluster role and this would work fine.

Adding users to roles is probably covered here: https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Aside from that, it's very misleading that when you log in as a user with the AdministratorAccess policy (basically Allow *), there is no automatic assumption of the cluster role.

TL;DR: remove the --role-arn parameter.
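
A minimal sketch of the fix, reusing the region and cluster name from the command above:

# Regenerate the kubeconfig with your own IAM identity, without
# assuming a role:
aws eks --region eu-central-1 update-kubeconfig --name my-cluster-name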

I had the same issue, and I solved it by setting the aws_access_key_id and aws_secret_access_key of the user who created the cluster (in my case, the root user), but I put them in a new profile in .aws/credentials.

For example, the new profile:

[oscarcode]
aws_access_key_id = XXXX
aws_secret_access_key = XXXX
region = us-east-2

So my kubeconfig has:

exec:
  apiVersion: client.authentication.k8s.io/v1alpha1
  args:
  - token
  - -i
  - cluster_name
  command: aws-iam-authenticator
  env:
  - name: AWS_PROFILE
    value: oscarcode

Just wanted to add that you can set your credentials profile in your ~/.kube/config file. If your kubectl config view shows env: null, this might be the issue.

user:
  exec:
    apiVersion: client.authentication.k8s.io/v1alpha1
    args:
    - token
    - -i
    - <some name>
    command: aws-iam-authenticator
    env:
    - name: AWS_PROFILE
      value: "<profile in your ~/.aws/credentials file>"

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

Personally I don't like environment variables, and this is another option if you have an AWS credentials file.

I resolved this issue by checking/updating the date/time on my client machine. The presigned token is time-sensitive, so significant clock skew can make the API server reject it as Unauthorized.
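
A quick way to check and correct this on a systemd-based Linux client (assumes timedatectl is available):

# Compare the local clock against a trusted source, then enable NTP
# synchronization and confirm it took effect.
date -u
sudo timedatectl set-ntp true
timedatectl status   # look for "System clock synchronized: yes"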