aws-iam-authenticator: error: You must be logged in to the server (Unauthorized)

Error:

~/bin » kubectl get svc
error: the server doesn't have a resource type "svc"
~/bin » kubectl get nodes
error: You must be logged in to the server (Unauthorized)
~/bin » kubectl get secrets
error: You must be logged in to the server (Unauthorized)

KUBECONFIG

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.us-east-1.eks.amazonaws.com
  name: eksdemo
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.k8s.domain.com
  name: k8s
contexts:
- context:
    cluster: eksdemo
    user: aws
  name: aws
- context:
    cluster: k8s
    user: k8s
  name: devcap
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: k8s
    user: k8s
  name: k8s
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eksdemo
      - -r
      - arn:aws:iam::REDACTED:role/eks_role
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: k8sdev
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: k8s-basic-auth
  user:
    password: REDACTED
    username: admin
- name: k8s-token
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED

STS Assume

AWS_PROFILE=k8sdev aws sts assume-role --role-arn arn:aws:iam::REDACTED:role/eks_role --role-session-name test
{
    "Credentials": {
        "AccessKeyId": "REDACTED",
        "SecretAccessKey": "REDACTED",
        "SessionToken": "REDACTED",
        "Expiration": "2018-06-21T21:15:58Z"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "REDACTED:test",
        "Arn": "arn:aws:sts::REDACTED:assumed-role/eks_role/test"
    }
}

exports

~/bin » echo $AWS_PROFILE
k8sdev
~/bin » echo $DEFAULT_ROLE
arn:aws:iam::REDACTED:role/eks_role

heptio-authenticator-aws token

~/bin » heptio-authenticator-aws token -i eksdemo
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"token":"k8s-aws-v1.REDACTED"}}

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 19
  • Comments: 35 (8 by maintainers)

Most upvoted comments

@bilby91 nice! Yeah I’m not sure of the precedence, but that sounds right. (For others: unset AWS_ACCESS_KEY_ID, unset AWS_SECRET_ACCESS_KEY and unset AWS_SESSION_TOKEN if you don’t want to use them).
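
A quick way to check for such lingering variables (a minimal sketch, assuming a POSIX-ish shell):

# Show any AWS-related variables currently exported in this shell
env | grep '^AWS_'

# Clear the static-credential variables so the exec plugin falls back to AWS_PROFILE
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN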

I’ll post this here for anyone else running into this; I realise the ticket is closed, but I hope it helps someone.

Even after reading this whole thread, it still took me a good hour to figure out why my cluster was returning error: the server doesn't have a resource type "svc" when calling kubectl get svc. Context is everything: if your setup is different, the following probably won’t work. This is for folks who use SAML federated logins via a script that does sts assume-role-with-saml under the hood (Azure AD, OneLogin, Okta). That results in temporary credentials, which are typically stored in the ~/.aws/credentials file. In many cases multiple accounts are used, so there may be a dev, prod, or other profile section in that file.

Steps to take:

  1. Log in as you normally would to generate temporary credentials in ~/.aws/credentials.
  2. Verify AWS_PROFILE=<profile> aws-iam-authenticator token -i <cluster_name> works. In my case, it returned a token without errors.
  3. Make sure to unset any lingering AWS_* variables. This bit me; I didn’t think it mattered and didn’t realise it was messing things up.
  4. Generate the kubeconfig with aws --profile=<profile> eks update-kubeconfig --name <cluster_name>; this creates a file at something like ~/.kube/config.
  5. Edit ~/.kube/config to add the AWS_PROFILE env variable; this should be the same profile you used to launch the cluster (see the sketch after this list).
  6. kubectl get svc should work. \o/
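
Putting those steps together, the whole sequence looks roughly like the sketch below (a sketch only; <profile> and <cluster_name> are placeholders, and the SAML login script is whatever your organisation provides):

# 1-2. Log in via your SAML script, then confirm the authenticator can mint a token
AWS_PROFILE=<profile> aws-iam-authenticator token -i <cluster_name>

# 3. Drop any static credentials that would otherwise override AWS_PROFILE
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN

# 4. Generate ~/.kube/config for the cluster under the same profile
aws --profile=<profile> eks update-kubeconfig --name <cluster_name>

# 5. In ~/.kube/config, the exec section for the cluster user should carry the profile:
#       env:
#       - name: AWS_PROFILE
#         value: <profile>

# 6. Verify
kubectl get svc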

Someone please help, will send funds: https://stackoverflow.com/questions/56227322/passing-eks-token-to-other-kubectl-cli-commands

I can get a token, using aws eks get-token --cluster-name eks1

but still can’t authenticate

Will upvote the shit out of your answer if you can help, thanks.

Okay, I think I have solved the issue. It seems I had some hidden AWS env vars that were overriding the AWS_PROFILE option. Would AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY take precedence over the AWS_PROFILE env var?

I seem to have solved it with these commands:

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN

Apparently, having these in the environment was blocking kubectl from using the AWS_PROFILE from the kubeconfig.
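
One way to see which identity is actually winning (a hedged check, not from the original comment; <profile> is whatever profile your kubeconfig references):

# Identity resolved from whatever is currently in the environment
aws sts get-caller-identity

# Identity resolved through the named profile the kubeconfig points at
AWS_PROFILE=<profile> aws sts get-caller-identity

If the two ARNs differ, the static credentials in the environment are taking precedence over the profile.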

I created the profile like this: aws configure --profile <profile-name>, which creates a new named profile instead of overwriting the default profile.

I set the profile like this: aws eks update-kubeconfig --region <region> --name <cluster-name> --profile <profile-name>

Check your kubeconfig to confirm the right profile is set: kubectl config view

user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - <region>
      - eks
      - get-token
      - --cluster-name
      - <cluster-name>
      command: aws
      env:
      - name: AWS_PROFILE
        value: <profile-name>
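
To double-check the result, you can run the same command kubectl’s exec plugin will run and then hit the API (a small verification sketch using the placeholders above):

# Should print an ExecCredential containing a token, using the profile from the kubeconfig
AWS_PROFILE=<profile-name> aws --region <region> eks get-token --cluster-name <cluster-name>

# If that identity is mapped in the cluster's aws-auth ConfigMap, this should now work
kubectl get svc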

Issue: on an openshift okd cluster I can’t run any command; oc status, oc get svc, … all fail. Error logs: every command returns error: You must be logged in to the server (Unauthorized).

Workaround: you need to log in to your cluster first, though I don’t know why that is required. Run oc login, then enter the account/password you used when creating the cluster; after that everything works.

@BlackBsd

That’s exactly what I have:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/ekscluster-workers1-NodeInstanceRole-1VY13IIH9VLKW
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/AWS_Root_Role
      username: role_root
      groups:
        - system:masters
    - rolearn: arn:aws:iam::XXXXXXXXXX:role/AWS_SysAdmin_Role
      username: role_sysadmin
      groups:
        - system:masters

Your kubeconfig is missing the arn to match though:

- name: black-dev-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - ekscluster
      - -r
      - arn:aws:iam::XXXXXXXXXX:role/AWS_SysAdmin_Role
      command: aws-iam-authenticator
      env:
      - name: AWS_PROFILE
        value: dev_nonbaa

And whichever role you assumed when creating the cluster is the one you’ll want to assume in aws-azure-login and inside your kubeconfig… once you add all the additional roles to the configMap you can use the others, but initially you have to use the one that created it.

I believe you have hit one of the more confusing aspects of the user experience, but let me try to explain. Whatever role/user you used to create the cluster is automatically granted system:masters. You should then use that user/assume that role to modify the authenticator configmap and add any additional mappings for admins, devs, etc. Right now you cannot modify that admin user, but this is on our roadmap.
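
In practice that means editing the aws-auth ConfigMap while still acting as the creator; a sketch (the mapping values themselves are whatever your setup needs, as in the ConfigMap shown earlier in this thread):

# Run this with the same AWS identity that created the cluster (it already has system:masters)
kubectl edit configmap aws-auth -n kube-system
# ...then add extra rolearn/username/groups entries under data.mapRoles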

What is the -r role used for in the KUBECONFIG?

This is the role that the authenticator will assume before crafting the token, i.e. it is the identity of the user. This role ARN should also be present in the configmap, where it should be mapped to a Kubernetes user and groups.

This blog post might help, specifically section 3 (Configure Role Permissions via Kubernetes RBAC). The authenticator configmap and RBAC setup are similar regardless of what type of AWS identity you have, i.e. user, role, federated user, …
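
As a rough illustration of that RBAC step (the group and binding names here are made up, not from this thread): once a role is mapped to a group in the configmap, a ClusterRoleBinding is what grants that group permissions inside the cluster.

# Bind a group mapped in aws-auth (e.g. "eks-devs") to the built-in "view" ClusterRole
kubectl create clusterrolebinding eks-devs-view \
  --clusterrole=view \
  --group=eks-devs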

I could create another role say eks_admin for users to assume with -r? (what perms does this eks_admin role need?)

You can have as many roles as you want for users to assume; they just all need to be present in the authenticator configmap.
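
So a hypothetical eks_admin role would simply get its own entry alongside the existing ones, for example (sketch only; the group is up to you):

# Fragment to add under data.mapRoles in the aws-auth ConfigMap
#     - rolearn: arn:aws:iam::XXXXXXXXXX:role/eks_admin
#       username: eks_admin
#       groups:
#         - system:masters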

Is arn:aws:iam::REDACTED:role/eks_role registered with your cluster? Or did you do the cluster creation with that role assumed?

If not, use the credentials of the user that created the cluster and remove that role from your KUBECONFIG; then you can register that role with it.

@dreampuf Can you edit your ~/.aws/config to add the role_arn and MFA serial number into a new profile, like this:

[profile read-only]
region=us-east-1

[profile admin]
source_profile = read-only
role_arn = arn:aws:iam::123456789012:role/admin-access
mfa_serial = arn:aws:iam::123456789012:mfa/dreampuf

Then if you specify the kubeconfig file like this (top part omitted):

  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "admin"

This will actually use a different code path through the SDK and you will be prompted for an MFA.
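
With that profile in place, the first kubectl call should trigger the MFA prompt, roughly like this (the prompt text is illustrative and varies by authenticator/SDK version):

# The exec plugin resolves credentials through the "admin" profile, which has mfa_serial set,
# so the SDK asks for a one-time code before the EKS token is issued
kubectl get nodes
# Assume Role MFA token code: ******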

@christopherhein yes, but as I mentioned, when all our users log in they get different roles from their federated logins… some will get Developer, Operator, Sysadmin, etc. So creating the cluster through the console while assuming one of these roles would make it so that a developer and a sysadmin couldn’t kubectl the same cluster. What is the -r role used for in the KUBECONFIG?

I could use eks_role for the cluster control plane. Could I create another role, say eks_admin, for users to assume with -r (and what perms does this eks_admin role need)? Could I set up a trust relationship on eks_admin to Developer, Operator, Sysadmin? Then everyone could control the cluster via kubectl?
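
If the trust-relationship idea is the route you take, the trust policy on eks_admin would need to list the federated roles that may assume it. A rough sketch (account id and role names are placeholders; eks_admin would still need to be added to the authenticator configmap as described above):

# Trust policy letting the federated Developer/Operator/Sysadmin roles assume eks_admin
cat > eks-admin-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::XXXXXXXXXX:role/Developer",
          "arn:aws:iam::XXXXXXXXXX:role/Operator",
          "arn:aws:iam::XXXXXXXXXX:role/Sysadmin"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name eks_admin \
  --assume-role-policy-document file://eks-admin-trust.json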