aws-iam-authenticator: error: You must be logged in to the server (Unauthorized)
Error:
~/bin » kubectl get svc
error: the server doesn't have a resource type "svc"
~/bin » kubectl get nodes
error: You must be logged in to the server (Unauthorized)
~/bin » kubectl get secrets
error: You must be logged in to the server (Unauthorized)
KUBECONFIG
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
- cluster:
    certificate-authority-data: REDACTED
    server: https://REDACTED.us-east-1.eks.amazonaws.com
  name: eksdemo
- cluster:
    certificate-authority-data: REDACTED
    server: https://api.k8s.domain.com
  name: k8s
contexts:
- context:
    cluster: eksdemo
    user: aws
  name: aws
- context:
    cluster: k8s
    user: k8s
  name: devcap
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
- context:
    cluster: k8s
    user: k8s
  name: k8s
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eksdemo
      - -r
      - arn:aws:iam::REDACTED:role/eks_role
      command: heptio-authenticator-aws
      env:
      - name: AWS_PROFILE
        value: k8sdev
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
- name: k8s
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    password: REDACTED
    username: admin
- name: k8s-basic-auth
  user:
    password: REDACTED
    username: admin
- name: k8s-token
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
STS Assume
AWS_PROFILE=k8sdev aws sts assume-role --role-arn arn:aws:iam::REDACTED:role/eks_role --role-session-name test
{
"Credentials": {
"AccessKeyId": "REDACTED",
"SecretAccessKey": "REDACTED",
"SessionToken": "REDACTED",
"Expiration": "2018-06-21T21:15:58Z"
},
"AssumedRoleUser": {
"AssumedRoleId": "REDACTED:test",
"Arn": "arn:aws:sts::REDACTED:assumed-role/eks_role/test"
}
}
exports
~/bin » echo $AWS_PROFILE
k8sdev
~/bin » echo $DEFAULT_ROLE
arn:aws:iam::REDACTED:role/eks_role
heptio-authenticator-aws token
~/bin » heptio-authenticator-aws token -i eksdemo
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"token":"k8s-aws-v1.REDACTED"}}
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 19
- Comments: 35 (8 by maintainers)
@bilby91 nice! Yeah, I'm not sure of the precedence, but that sounds right. (For others: unset AWS_ACCESS_KEY_ID, unset AWS_SECRET_ACCESS_KEY and unset AWS_SESSION_TOKEN if you don't want to use them.)
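A minimal sketch of that cleanup, using the k8sdev profile and eksdemo cluster from this thread as examples:

# Static keys take precedence over AWS_PROFILE in the SDK credential chain
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
export AWS_PROFILE=k8sdev
# Sanity check: the authenticator should now mint a token using this profile
aws-iam-authenticator token -i eksdemo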
I'll post this here for anyone else running into this; I realise the ticket is closed but hope to help someone.

Even after reading this whole thread, it still took me a good hour to figure out why my cluster was returning error: the server doesn't have a resource type "svc" when calling kubectl get svc. Context is everything; if your setup is different, the following probably won't work. This is for folks who use SAML federated logins via a script that does sts assume-role-with-saml under the hood (Azure AD, OneLogin, Okta). This results in temporary credentials, which are typically stored in the ~/.aws/credentials file. In many cases multiple accounts are used, so there may be a dev, prod, or other profile section in that file.

Steps to take:
1. Verify the temporary credentials are in ~/.aws/credentials.
2. Check that AWS_PROFILE=<profile> aws-iam-authenticator token -i <cluster_name> works. In my case, it returned a token without errors.
3. unset any lingering AWS_* variables. This bit me; I didn't think it mattered and didn't realise it was messing things up.
4. Run aws --profile=<profile> eks update-kubeconfig --name <cluster_name>; this creates a file at something like ~/.kube/config.
5. Edit ~/.kube/config to add the AWS_PROFILE env variable (see the sketch after this list); this should be the same profile you used to launch the cluster.
6. kubectl get svc should work. \o/
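A sketch of the step-5 edit, mirroring the users entry from the KUBECONFIG dump above; the profile and cluster names are placeholders:

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
      - token
      - -i
      - <cluster_name>
      env:
      # Same profile you used to launch the cluster
      - name: AWS_PROFILE
        value: <profile>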
Someone please help, will send funds: https://stackoverflow.com/questions/56227322/passing-eks-token-to-other-kubectl-cli-commands

I can get a token using aws eks get-token --cluster-name eks1 but still can't authenticate. Will upvote the shit out of your answer if you can help, thanks.
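For what it's worth, a sketch of passing that token straight to kubectl; this assumes the get-token output is the ExecCredential JSON shown earlier in this thread, and that the underlying identity is actually mapped in the cluster's aws-auth configmap (otherwise you get the same Unauthorized):

# Pull the bearer token out of the ExecCredential JSON and hand it to kubectl
TOKEN=$(aws eks get-token --cluster-name eks1 --query 'status.token' --output text)
kubectl --token "$TOKEN" get svc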
Released! https://aws.amazon.com/about-aws/whats-new/2018/09/amazon-eks-simplifies-cluster-setup-with-update-kubeconfig-cli-command/
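Usage looks like this (cluster, region, and profile names taken from this thread, purely illustrative):

# Writes/merges an EKS context, including the exec authenticator stanza, into ~/.kube/config
aws eks update-kubeconfig --name eksdemo --region us-east-1 --profile k8sdev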
Okay, I think I have solved the issue. It seems that I had some hidden env vars for AWS that were overriding the AWS_PROFILE option. Would AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY take precedence over the AWS_PROFILE env var? I seem to have solved it with these commands:
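(The original command block didn't survive; given the surrounding discussion, a likely reconstruction:)

# Remove the static credentials that take precedence over AWS_PROFILE
unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset AWS_SESSION_TOKEN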
Apparently, having these in the environment was blocking kubectl from using the AWS_PROFILE from the kubeconfig.

I created the profile like this: aws configure --profile <profile-name>, which creates a new profile instead of overwriting the default profile.

I set the profile like this: aws eks update-kubeconfig --region <region> --name <cluster-name> --profile <profile-name>

Check your kubeconfig that the right profile is set:
kubectl config view

Issue: OpenShift OKD cluster can't run any command: oc status, oc get svc, … Error logs:

oc status
error: You must be logged in to the server (Unauthorized)
(the same Unauthorized error repeats for every command)
Workaround: you need to log in to your cluster first, though I don't know why you have to log in first. Run $ oc login, then input the account/password with which you created the cluster. Then everything works well.
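A sketch of that workaround; the API server URL and user are placeholders:

# Log in to the OKD API server with the account used to create the cluster
oc login https://api.example.com:6443 --username=admin --password=<password>
oc status   # should now succeed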
@BlackBsd
That’s exactly what I have:
Your kubeconfig is missing the arn to match though:
And whichever role you assumed when creating the cluster is the one you’ll want to be assuming in aws-azure-login and inside your kubeconfig… once you add all additional roles to the configMap you can use the others but initially you have to use the one that created it.
I believe you have hit one of the more confusing aspects of the user experience, but let me try to explain. Whatever role/user you used to create the cluster is automatically granted system:masters. You should then use that user/assume that role to modify the authenticator configmap and add any additional mappings for admins, devs, etc. Right now you cannot modify that admin user, but this is on our roadmap.
This is the role that the authenticator will assume before crafting the token, i.e. it is the identity of the user. This role ARN should also be present in the configmap, where it should be mapped to a Kubernetes user and groups.
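A sketch of that mapping in the aws-auth configmap; the username and group here are illustrative (system:masters grants full admin):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    # Map the role the authenticator assumes (-r) to a Kubernetes user and groups
    - rolearn: arn:aws:iam::REDACTED:role/eks_role
      username: eks-admin
      groups:
        - system:masters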
This blog post might help, specifically section 3 (Configure Role Permissions via Kubernetes RBAC). The authenticator configmap and RBAC setup is similar regardless of what type of AWS identity you have, i.e. user, role, federated user, …
You can have as many roles as you want for users to assume, they just need to all be present in the authenticator configmap.
Is arn:aws:iam::REDACTED:role/eks_role registered with your cluster? Or did you do the cluster creation with that role assumed? If not, use the credentials of the user that created the cluster and remove that role from your KUBECONFIG; then you can register that role with it.

@dreampuf Can you edit your ~/.aws/config to add the role_arn and MFA serial number into a new profile, and then specify the profile in the kubeconfig file like this (top part omitted)? Sketches of both follow.
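First the ~/.aws/config profile; the profile name eks-mfa, the MFA serial, and the source profile are assumptions:

[profile eks-mfa]
# The SDK assumes this role itself, prompting for the MFA device below
role_arn = arn:aws:iam::REDACTED:role/eks_role
mfa_serial = arn:aws:iam::REDACTED:mfa/<your-user>
source_profile = k8sdev

And the matching kubeconfig user entry (top part omitted). Note there is no -r argument: the role assumption now happens inside the SDK via the profile:

users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
      - token
      - -i
      - eksdemo
      env:
      - name: AWS_PROFILE
        value: eks-mfa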
This will actually use a different code path through the SDK and you will be prompted for an MFA.
@christopherhein yes, but as I mentioned, when all our users log in they will get different roles in their federated logins. Some will get Developer, Operator, Sysadmin, etc. So creating through the console assuming one of these roles would make it so that a developer and a sysadmin couldn't kubectl the same cluster. What is the -r role used for in the KUBECONFIG?

I could use eks_role for the cluster control plane, and I could create another role, say eks_admin, for users to assume with -r (what perms does this eks_admin role need?). I could set up a trust relationship on eks_admin to Developer, Operator, Sysadmin. Then everyone could control via kubectl?
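If it helps, a sketch of that trust relationship on eks_admin: each federated role is listed as a principal allowed to call sts:AssumeRole (account ID redacted to match the thread; role names are the ones proposed above):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::REDACTED:role/Developer",
          "arn:aws:iam::REDACTED:role/Operator",
          "arn:aws:iam::REDACTED:role/Sysadmin"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

As for the perms question: eks_admin shouldn't need any IAM permissions of its own for kubectl access; the authenticator only checks that the assumed role's ARN is mapped in the aws-auth configmap.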