aws-iam-authenticator: error: the server doesn't have a resource type "svc"

Steps taken

  1. Created a role as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-prereqs
  2. Created a K8s cluster as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster
  3. Created an IAM user with programmatic access and the following permissions:
  • AmazonEKSClusterPolicy
  • AmazonEKSWorkerNodePolicy
  • AmazonEKSServicePolicy
  • AmazonEKS_CNI_Policy
  • AmazonEC2ReadOnlyAccess

Note: the original IAM user we had in .aws/credentials used AmazonEC2ReadOnlyAccess; I included that permission here so as not to break existing scripts.

  4. Upgraded the AWS CLI
% aws --version
aws-cli/1.16.30 Python/2.7.14 Linux/4.14.72-68.55.amzn1.x86_64 botocore/1.12.20
  5. Installed kubectl as per https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl
% kubectl version --short --client
Client Version: v1.10.3
  6. Installed aws-iam-authenticator

  7. Generated kubeconfig

% aws sts get-caller-identity
XXXXXXXXX	arn:aws:iam::XXXXXXXXX:user/bastion	XXXXXXXXXXXXXXXXXXX

% aws eks update-kubeconfig --name test
Updated context arn:aws:eks:us-east-1:XXXXXXXXX:cluster/test in /home/ec2-user/.kube/config

% kubectl get svc
error: the server doesn't have a resource type "svc"

Debugging

  1. Fetch token
% aws-iam-authenticator token -i wizenoze-test 
{
   "kind":"ExecCredential",
   "apiVersion":"client.authentication.k8s.io/v1alpha1",
   "spec":{

   },
   "status":{
      "token":"k8s-aws-v1.XXX"
   }
}
  2. Verify token
% aws-iam-authenticator verify -i test -t k8s-aws-v1.XXX

&{ARN:arn:aws:iam::XXXXXXXXX:user/bastion CanonicalARN:arn:aws:iam::XXXXXXXXX:user/bastion AccountID:XXXXXXXXX UserID:XXXXXXXXXXXXXXXXXXX SessionName:}

Could you please help me figure out what’s wrong?

Thanks, László

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 26
  • Comments: 47 (3 by maintainers)

Most upvoted comments

I was experiencing this until I opened a new tab in Terminal.

Possible solution if you created the cluster in the UI

If you created the cluster in the UI, it’s possible the AWS root user created the cluster. According to the docs, "When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl."

You’ll need to first log in to the AWS CLI as the root user, or as the user who created the cluster, in order to update the permissions of the IAM user you want to have access to the cluster.

  1. You’ll need to get an access key for the root user (or the user you were logged in as when you created the cluster) and put it in .aws/credentials under the default profile. You can do this with the aws configure command.

    Now kubectl get svc works, since you’re logged in as the root user that initially created the cluster.

  2. Apply the aws-auth ConfigMap to the cluster. Follow step 2 from these docs, using the NodeInstanceRole value you got as the Output from Step 3: Launch and Configure Amazon EKS Worker Nodes

  3. To add a non-root IAM user or role to an Amazon EKS cluster, follow step 3 from these docs: edit configmap/aws-auth and add the other users that need kubectl access to the mapUsers section (see the sketch after this list).

  4. Run aws configure again and add the access key info from your non-root user.

Now you can access your cluster from the AWS CLI and using kubectl!
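
For reference, here is a minimal sketch of what the aws-auth ConfigMap from steps 2 and 3 can look like; the account ID, user name, and role name below are placeholders, not values from this thread:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/NodeInstanceRole   # placeholder: the worker node stack output
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/alice              # placeholder: the non-root user to grant access
      username: alice
      groups:
        - system:masters

Apply it with kubectl apply -f aws-auth-cm.yaml while still authenticated as the cluster creator, then switch back to your non-root user.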

I was having the same issue. The doc is a bit misleading; make sure your AWS CLI is configured with the key and secret of the user that created the EKS cluster.
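
A quick way to check this, using only commands that already appear in this thread:

aws sts get-caller-identity   # must show the user that created the cluster
aws configure                 # if it doesn’t, enter that user’s key and secret here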

Like many others here, we use STS and assumed roles with profiles to access our AWS account(s). We encountered the same problem(s) that others here have. Thanks to @GeoffMillerAZ, I was able to get this working by doing the following:

  • Provision a new EKS cluster using aws eks create-cluster and specifying the correct profile using --profile blah
  • Run aws eks update-kubeconfig --name clustername --profile blah to create a shell ~/.kube/config
  • Modify ~/.kube/config and add the following under the user’s exec section:
      env:
      - name: AWS_PROFILE
        value: "blah"

After this, I was able to run kubectl get svc:

➜  kubectl get svc
Assume Role MFA token code: 123456
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   172.20.0.1   <none>        443/TCP   13m

This is mostly documented here: https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html.

I would have to agree with others that the documentation (especially the “Getting Started” guide) is not clear. Using STS with assumed roles is a very common pattern for many customers, and it deserves to be covered more explicitly.

My problem was that my IAM user keys were not set as the default in ~/.aws/credentials. I was using the --profile flag to run AWS CLI commands as I normally do, but that doesn’t seem to work in this case.
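
That matches how the exec plugin works: kubectl has no --profile flag, and aws-iam-authenticator reads the AWS SDK credential chain, which picks up the AWS_PROFILE environment variable instead. A minimal sketch, assuming a hypothetical profile named eks-admin that holds the cluster creator’s keys:

export AWS_PROFILE=eks-admin    # hypothetical profile name
aws sts get-caller-identity     # should now print the cluster creator’s ARN
kubectl get svc                 # the exec plugin inherits AWS_PROFILE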

update:

https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      env:
        - name: AWS_PROFILE
          value: "staging_ekscreator"

adding this:

      env:
        - name: AWS_PROFILE
          value: "staging_ekscreator"

fixed my issue. So maybe on cluster creation you can just use a manually created IAM user, and then import it into Terraform after you use this user to set up your permissions. Then you can just delete this IAM user to keep your cloud tidy and secure. I’m trying that next.
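
A hedged sketch of that bootstrap flow; the user name and Terraform resource address are assumptions, not anything confirmed in this thread:

# 1. Create the bootstrap user by hand and let it create the cluster.
aws iam create-user --user-name eks-bootstrap
# 2. After RBAC is set up, either delete it or (per the KubeCon advice
#    further down) keep it and bring it under Terraform management:
#    terraform import aws_iam_user.eks_bootstrap eks-bootstrap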

We are also seeing this exact problem.

This is the output we get:

kubectl get svc --v=10
I1016 16:12:26.421644   49924 loader.go:359] Config loaded from file /Users/pivotal/.kube/config
I1016 16:12:26.421874   49924 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.1 (darwin/amd64) kubernetes/b1b2997" 'https://XXXXX.sk1.eu-west-1.eks.amazonaws.com/api?timeout=32s'
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I1016 16:12:31.826447   49924 round_trippers.go:405] GET https://XXXXX.sk1.eu-west-1.eks.amazonaws.com/api?timeout=32s  in 5404 milliseconds
I1016 16:12:31.826470   49924 round_trippers.go:411] Response Headers:
I1016 16:12:31.826519   49924 cached_discovery.go:111] skipped caching discovery info due to Get https://XXXXX.sk1.eu-west-1.eks.amazonaws.com/api?timeout=32s: getting credentials: exec: exit status 1
I1016 16:12:31.826652   49924 round_trippers.go:386] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubectl/v1.11.1 (darwin/amd64) kubernetes/b1b2997" 'https://XXXXX.sk1.eu-west-1.eks.amazonaws.com/api?timeout=32s'
could not get token: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors

We additionally assumed the role we created for our EKS cluster using

aws eks update-kubeconfig --name eksgism --role-arn arn:aws:iam::XXXXX:role/eksgism-role
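
One hedged sanity check at this point is whether the default credential chain can assume that role at all (the role ARN is copied from the command above; the session name is arbitrary):

aws sts assume-role --role-arn arn:aws:iam::XXXXX:role/eksgism-role --role-session-name kubectl-check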

We too can seemingly use the aws-iam-authenticator successfully:

aws-iam-authenticator token -i eksgism -r arn:aws:iam::XXXXX:role/eksgism-role
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1alpha1","spec":{},"status":{"token":"k8s-aws-v1.XXXXX}

aws-iam-authenticator verify -i eksgism -t k8s-aws-v1.XXXXX
&{ARN:arn:aws:sts::XXXXX:assumed-role/eksgism-role/XXXXX CanonicalARN:arn:aws:iam::XXXXX:role/eksgism-role AccountID:XXXXX UserID:XXXXX SessionName:XXXXX}

…And opening a new tab did not work 😦

Cheers, Sam & @teddyking

I’m here at re:Invent trying to get an audience with anyone on the EKS team who can help.

From AWS Docs: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html

When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions). Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more information, see Managing Users or IAM Roles for your Cluster. Also, the AWS IAM Authenticator for Kubernetes uses the AWS SDK for Go to authenticate against your Amazon EKS cluster. If you use the console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK credential chain when you are running kubectl commands on your cluster.

@jaydipdave Who are you at the CLI when getting your token (i.e. aws sts get-caller-identity)? Is this the same user that created the cluster? I’m at re:Invent right now. Hoping to find some EKS folks to talk to about the user experience here.

I’m having the exact same issue. Kubectl logs say

I1123 13:09:45.789218   28300 request.go:942] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}

I created my cluster via the AWS console using one user, and I have another user (programmatic only) for the command line. The programmatic user has full EKS permissions on that cluster.

aws sts get-caller-identity

This outputs my programmatic user correctly. However:

~ aws-iam-authenticator token -i XXXX 
[copy token]

~ aws-iam-authenticator verify -i test -t TOKEN
could not verify token: sts getCallerIdentity failed: error from AWS (expected 200, got 403)

I spoke to a gentleman at the AWS EKS booth at KubeCon. I explained this problem and how I typically use temporary credentials from STS because of MFA and role assumption. He said they are getting a lot of complaints about this problem and are trying to come up with a good solution. I shared that what I had done was to create an IAM user in that specific account just for cluster creation, and that I intended to delete the user after I set up the RBAC permissions the way I needed. He said that is a good approach, except that I should not delete the user but keep it as a break-glass account in case problems arise.

He also mentioned they may be working on a solution to expose master logs to you so you can see and troubleshoot without as much guessing.

Getting the root user credentials is not possible for everyone. In my case, I am using a role centrally managed by the organization. I have admin access but don’t have root credentials.

I had the same issue. After looking around a bit, I found that running these commands:

aws eks update-kubeconfig --name xxxxxxxx --role-arn arn:aws:iam::xxxxxxxxx:role/EKS-Manager
kubectl get svc

returned:

could not get token: AccessDenied: Access denied
        status code: 403, request id: c86f28cf-f16c-11e8-a02d-bf81fbdd8d60
Unable to connect to the server: getting credentials: exec: exit status 1

Then I found that you can connect to the cluster with kubectl as the same user that created it from the console.

So after creating the cluster with the same user that is in my AWS creds, and running:

aws eks update-kubeconfig --name xxxxxxxx
kubectl get svc

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.100.0.1   <none>        443/TCP   21m

@narup, yes, you might be on to something. I created the cluster from the GUI using SSO credentials, which would have used temporary access/secret keys. All the kubectl work was done using my permanent access/secret key pair. I will have to go back and validate this again using only the CLI with my permanent keys.
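
For that validation, the cluster can be created entirely from the CLI so the permanent keys are the creating identity; a sketch with placeholder subnet, security-group, and role values:

aws eks create-cluster --name test \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc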

I followed https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html to add the ARNs of my colleagues who wanted to manage the cluster:

kubectl edit -n kube-system configmap/aws-auth
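
For colleagues who come in through an assumed role rather than as plain IAM users (the STS pattern several commenters above describe), the entry goes in the mapRoles section instead; a sketch with placeholder values:

data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/TeamAdminRole   # placeholder role ARN
      username: team-admin
      groups:
        - system:masters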

Might you guys be using Terraform at all? I know it’s still just an API call for both. I’m seeing this when using Terraform that assumes a role, and regardless of whether I assume that role or not, I can’t access the control plane.