aws-iam-authenticator: error: the server doesn't have a resource type "svc"
Steps taken
- Created a role as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-prereqs
- Created a K8S cluster as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster
- Created an IAM user with programmatic access with the following permissions
- AmazonEKSClusterPolicy
- AmazonEKSWorkerNodePolicy
- AmazonEKSServicePolicy
- AmazonEKS_CNI_Policy
- AmazonEC2ReadOnlyAccess
Note: the original IAM user we had in .aws/credentials used AmazonEC2ReadOnlyAccess.
I added that in order not to break existing scripts.
- Upgraded AWS CLI
% aws --version
aws-cli/1.16.30 Python/2.7.14 Linux/4.14.72-68.55.amzn1.x86_64 botocore/1.12.20
- Installed kubectl as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl
% kubectl version --short --client
Client Version: v1.10.3
- Installed aws-iam-authenticator
- Generated kubeconfig
% aws sts get-caller-identity
XXXXXXXXX arn:aws:iam::XXXXXXXXX:user/bastion XXXXXXXXXXXXXXXXXXX
% aws eks update-kubeconfig --name test
Updated context arn:aws:eks:us-east-1:XXXXXXXXX:cluster/test in /home/ec2-user/.kube/config
% kubectl get svc
error: the server doesn't have a resource type "svc"
Debugging
- Fetch token
% aws-iam-authenticator token -i wizenoze-test
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "token": "k8s-aws-v1.XXX"
  }
}
- Verify token
% aws-iam-authenticator verify -i test -t k8s-aws-v1.XXX
&{ARN:arn:aws:iam::XXXXXXXXX:user/bastion CanonicalARN:arn:aws:iam::XXXXXXXXX:user/bastion AccountID:XXXXXXXXX UserID:XXXXXXXXXXXXXXXXXXX SessionName:}
Could you please help me find out what’s wrong?
Thanks, László
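One additional check, not in the original report but worth running in this situation, is to confirm that kubectl’s exec credential plugin resolves to the same IAM identity as the AWS CLI. The jsonpath below simply prints the exec settings of the active context’s user:

% aws sts get-caller-identity --query Arn --output text
% kubectl config view --minify -o jsonpath='{.users[0].user.exec}'

If the two disagree (for example, the CLI uses a --profile while the authenticator falls back to the default credentials), the token kubectl sends belongs to an identity the cluster does not recognize, and that tends to surface as the unhelpful "the server doesn't have a resource type \"svc\"" error rather than an explicit authentication failure.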
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Reactions: 26
- Comments: 47 (3 by maintainers)
I was experiencing this until I opened a new tab in Terminal
Possible solution if you created the cluster in the UI
If you created the cluster in the UI, it’s possible the AWS root user created the cluster. According to the docs, "When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl."
You’ll need to first log in to the AWS CLI as the root user, or as the user who created the cluster, in order to update the permissions of the IAM user you want to have access to the cluster. Get an access key for the root user (or for whichever user you were logged in as when you created the cluster) and put that info in `.aws/credentials` under the default user. You can do this with `aws configure`.
Now `kubectl get svc` works, since you’re logged in as the root user that initially created the cluster.
Apply the aws-auth ConfigMap to the cluster: follow step 2 from these docs, using the `NodeInstanceRole` value you got as the `Output` from Step 3: Launch and Configure Amazon EKS Worker Nodes.
To add a non-root IAM user or role to an Amazon EKS cluster, follow step 3 from these docs: edit `configmap/aws-auth` and add the other users that need `kubectl` access in the `mapUsers` section (see the sketch after this comment).
Run `aws configure` again and add the access key info for your non-root user. Now you can access your cluster from the AWS CLI and with kubectl!
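As a concrete sketch of that edit (assuming the standard aws-auth layout from the EKS docs; the file name, the `username` values, and the `system:masters` group choice are illustrative, and the `bastion` ARN is simply reused from the original report):

% cat aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the worker node stack Output>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::XXXXXXXXX:user/bastion
      username: bastion
      groups:
        - system:masters
% kubectl apply -f aws-auth-cm.yaml

The apply (or `kubectl edit -n kube-system configmap/aws-auth`) has to be run with the credentials of whoever created the cluster, since that is the only identity Kubernetes trusts at this point.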
I was having the same issue. The doc is a bit misleading; make sure your AWS CLI is configured with the key and secret of the user that created the EKS cluster.
Like many others here, we use STS and assumed roles with profiles to access our AWS account(s). We encountered the same problem(s) that others here have. Thanks to @GeoffMillerAZ, I was able to get this working by doing the following:
- Running `aws eks create-cluster` and specifying the correct profile using `--profile blah`
- Running `aws eks update-kubeconfig --name clustername --profile blah` to create a shell `~/.kube/config`
- Editing `~/.kube/config` and adding: … (see the sketch after this comment)

After this, I was able to run `kubectl get svc`.
This is mostly documented here: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html.
I would have to agree with others that the documentation (especially the “Getting Started” documentation) is not clear. Using STS with assumed roles is a very common pattern for many customers, and this should be addressed more eloquently.
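A minimal sketch of the kind of `~/.kube/config` addition this refers to, assuming the aws-iam-authenticator-style exec block that `aws eks update-kubeconfig` generated at the time and the hypothetical profile name `blah`: set `AWS_PROFILE` in the `env` of the user’s exec section so the authenticator resolves the same credentials as the CLI.

users:
- name: arn:aws:eks:us-east-1:XXXXXXXXX:cluster/test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "test"
      env:                       # the added part: point the authenticator at the right profile
        - name: AWS_PROFILE
          value: blah

Exporting `AWS_PROFILE=blah` in the shell before running kubectl has the same effect.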
My problem was related to not having my IAM user keys set as the default in `~/.aws/credentials`. I was using the `--profile` flag to run AWS CLI commands as I normally do, but that doesn’t seem to work in this case.
update: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html
Adding this fixed my issue: …
So maybe on cluster creation you can just use a manually created IAM user and then import it into Terraform after you use this user to set up your user permissions; then you can just delete this IAM user to keep your cloud tidy and secure (a sketch of that workflow follows this comment). I’m trying that next.
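A sketch of that bootstrap-user workflow, assuming a hypothetical IAM user named `cluster-creator` and a matching `aws_iam_user` resource in the Terraform config (none of these names come from the thread):

# Create the cluster while authenticated as cluster-creator, then bring the user under Terraform management:
% terraform import aws_iam_user.cluster_creator cluster-creator

# Or, once aws-auth maps the long-term users/roles, retire the bootstrap user
# (a commenter below was advised to keep it as a break-glass account instead; deletion requires removing its keys first):
% aws iam list-access-keys --user-name cluster-creator
% aws iam delete-access-key --user-name cluster-creator --access-key-id AKIAXXXXXXXXXXXXXXXX
% aws iam delete-user --user-name cluster-creator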
We are also seeing this exact problem.
This is the output we get:
We additionally assumed the role we created for our EKS cluster using
We too can seemingly successfully use the aws-iam-authenticator
…And opening a new tab did not work 😦
Cheers, Sam & @teddyking
I’m here at re:Invent trying to get an audience with anyone on the EKS team who can help.
From AWS Docs: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more information, see Managing Users or IAM Roles for your Cluster. Also, the AWS IAM Authenticator for Kubernetes uses the AWS SDK for Go to authenticate against your Amazon EKS cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
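If it is unclear which identity actually created the cluster (console, SSO, or CLI), one way to check that is not mentioned in the thread is to look up the CreateCluster call in CloudTrail (event history only covers the last 90 days):

% aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventName,AttributeValue=CreateCluster \
    --query 'Events[].{Time:EventTime,User:Username}' \
    --output table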
@jaydipdave Who are you at the CLI when getting your token (i.e. `aws sts get-caller-identity`)? Is this the same user that created the cluster? I’m at re:Invent right now, hoping to find some EKS folks to talk to about the user experience here.

I’m having the exact same issue. Kubectl logs say:
I created my cluster via the AWS console using one user, and I have another user (programmatic only) for the command line. The programmatic user has full EKS permissions on that cluster.
This outputs my programmatic user correctly. However:
I spoke to a gentleman at the AWS EKS booth at KubeCon. I explained this problem and how I typically use temporary credentials from STS due to MFA and role assumption. He said they are getting a lot of complaints about this problem and that they are trying to come up with a good solution. I shared that what I had done was to make an IAM user in that specific account for the purpose of cluster creation, and that I intended to delete the user after I set up permissions the way I needed in RBAC. He said that is a good solution, except not to delete the account but to keep it as a break-glass account in case problems arise.
He also mentioned they may be working on a solution to expose master logs to you so you can see and troubleshoot without as much guessing.
Getting the root user credentials is not possible for everyone. In my case, I am using a centrally managed role provided by the organization; I have admin access but don’t have root credentials.
I had the same issue. After looking around a bit, I’ve found that running this command:

% aws eks update-kubeconfig --name xxxxxxxx --role-arn arn:aws:iam::xxxxxxxxx:role/EKS-Manager

and then `kubectl get svc` gives this response:

could not get token: AccessDenied: Access denied
	status code: 403, request id: c86f28cf-f16c-11e8-a02d-bf81fbdd8d60
Unable to connect to the server: getting credentials: exec: exit status 1

Then I’ve found that you can connect to the cluster with kubectl as the same user that created it from the console. So after creating the kube cluster with the same user that is in my AWS creds, I run:

% aws eks update-kubeconfig --name xxxxxxxx
% kubectl get svc

narup, yes, you might be on to something. I created the cluster from the GUI using SSO credentials, which would have used temporary access/secret keys. All the kubectl work was done using my permanent access/secret key pair. I will have to go back and validate this again using only the CLI with my access/secret.
I followed https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html to add the ARNs of my colleagues who wanted to manage the cluster:

% kubectl edit -n kube-system configmap/aws-auth

Might you guys be using Terraform at all? I know it’s still just an API call for both. I’m seeing this when using Terraform that assumes a role, and regardless of whether I assume that role or not, I can’t access the control plane.
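For the assumed-role cases above (the --role-arn kubeconfig and Terraform assuming a role), the role itself also needs a `mapRoles` entry in aws-auth. A sketch, reusing the `EKS-Manager` role ARN from an earlier comment with an illustrative username and group, added via the `kubectl edit` command above while authenticated as the cluster creator:

data:
  mapRoles: |
    - rolearn: arn:aws:iam::xxxxxxxxx:role/EKS-Manager   # plain IAM role ARN, not the sts assumed-role ARN
      username: eks-manager                              # illustrative
      groups:
        - system:masters                                 # or a more restricted group bound via RBAC

Note that `rolearn` must be the `arn:aws:iam::…:role/…` form; the `arn:aws:sts::…:assumed-role/…` ARN that `aws sts get-caller-identity` returns for an assumed role will not match.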