aws-iam-authenticator: Roles with paths do not work when the path is included in their ARN in the aws-auth configmap

I have a role with an ARN that looks like this: arn:aws:iam::XXXXXXXXXXXX:role/gitlab-ci/gitlab-runner. My aws-auth configmap was as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/EKSWorkerNode
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/EKSServiceWorker
      username: kubernetes-admin
      groups:
        - system:masters
    - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/gitlab-ci/gitlab-runner
      username: gitlab-admin
      groups:
        - system:masters

I repeatedly got unauthorized errors from the cluster until I updated the rolearn to arn:aws:iam::XXXXXXXXXXXX:role/gitlab-runner. After that change, my access worked as expected.
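In other words, the mapping only started working once the entry looked like this (same role, path dropped from the ARN):

    - rolearn: arn:aws:iam::XXXXXXXXXXXX:role/gitlab-runner
      username: gitlab-admin
      groups:
        - system:masters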

If it makes a difference, I’m using assume-role on our gitlab-runner, and using aws eks update-kubeconfig --region=us-east-1 --name=my-cluster to get kubectl configured.

About this issue

  • State: open
  • Created 5 years ago
  • Reactions: 86
  • Comments: 46

Most upvoted comments

I’ve enjoyed my 6+ hours lost to this.

terraform workaround:

join("/", values(regex("(?P<prefix>arn:aws:iam::[0-9]+:role)/[^/]+/(?P<role>.*)", <role-arn>)))
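For example (illustrative account ID and role name; the local names are arbitrary):

locals {
  role_arn = "arn:aws:iam::123456789012:role/gitlab-ci/gitlab-runner"

  # values() returns the named captures sorted by key ("prefix", then "role"),
  # so this yields "arn:aws:iam::123456789012:role/gitlab-runner".
  # Note: regex() errors if the ARN has no path at all.
  mapped_role_arn = join("/", values(regex("(?P<prefix>arn:aws:iam::[0-9]+:role)/[^/]+/(?P<role>.*)", local.role_arn)))
}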

I’m not sure this is still needed with v0.5.1.

Any news on this? This is quite a weird behavior and hard to detect as an error.

I was able to reproduce this issue. I created two roles, K8s-Admin and K8s-Admin-WithPath, using the following commands:

  aws iam create-role \
  --role-name K8s-Admin \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::<account id>:root"},"Action":"sts:AssumeRole","Condition":{}}]}' \
  --output text \
  --query 'Role.Arn'

  aws iam create-role \
  --role-name K8s-Admin-WithPath \
  --path "/kubernetes/" \
  --description "Kubernetes administrator role (for AWS IAM Authenticator for Kubernetes)." \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":"arn:aws:iam::<account id>:root"},"Action":"sts:AssumeRole","Condition":{}}]}' \
  --output text \
  --query 'Role.Arn'

Mapped them to the cluster with:

eksctl create iamidentitymapping --cluster basic-demo --arn arn:aws:iam::<account id>:role/K8s-Admin --group system:masters --username iam-admin

eksctl create iamidentitymapping --cluster basic-demo --arn arn:aws:iam::<account id>:role/kubernetes/K8s-Admin-WithPath --group system:masters --username iam-admin-withpath

Then I attached the AWS ReadOnly policy to both roles. Next, I created two AWS CLI profiles, sandbox-k8s-admin and sandbox-k8s-admin-withpath, each specifying a role ARN to trigger an assume-role. After creating the profiles, I updated my local kubeconfig:

eksctl utils write-kubeconfig --cluster=basic-demo --profile=sandbox-k8s-admin --set-kubeconfig-context --region=us-east-2

kubectl get nodes
# returned list of nodes, expected

Then I switched over to the role with the path:

eksctl utils write-kubeconfig --cluster=basic-demo --profile=sandbox-k8s-admin-withpath --set-kubeconfig-context --region=us-east-2

kubectl get nodes
# error: You must be logged in to the server (Unauthorized)

Ahh… this explains our issue when testing with AWS SSO-created roles too. See the caveat referenced in the document below. This has been a problem for quite a while (at least 14 months).

https://aws.amazon.com/blogs/opensource/integrating-ldap-ad-users-kubernetes-rbac-aws-iam-authenticator-project/

Pertinent passage: “For the rolearn be sure to remove the /aws-reserved/sso.amazonaws.com/ from the rolearn url, otherwise the arn will not be able to authorize as a valid user.”

When we stumbled across this I assumed it was something about the SSO role but based on this issue it’s probably the path.
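So for an SSO permission-set role the mapping apparently only works with the path stripped, something like this (the account ID and role name here are made up):

    - rolearn: arn:aws:iam::111122223333:role/AWSReservedSSO_AdministratorAccess_0123456789abcdef
      username: sso-admin
      groups:
        - system:masters

rather than the full arn:aws:iam::111122223333:role/aws-reserved/sso.amazonaws.com/AWSReservedSSO_AdministratorAccess_0123456789abcdef form.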

/remove-lifecycle stale

A fix could be to require iam:GetRole permissions and look up the full role info by the “short” role name.

https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/get-role.html

I could create a sample PR if that helps.
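Roughly what I have in mind, using aws-sdk-go (resolveRoleARN is a hypothetical helper for illustration, not code from this repo):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/iam"
)

// resolveRoleARN looks up a role by its short name and returns the full ARN,
// including any path, so the mapping does not need to know the path up front.
// This requires iam:GetRole on the role.
func resolveRoleARN(svc *iam.IAM, roleName string) (string, error) {
    out, err := svc.GetRole(&iam.GetRoleInput{RoleName: aws.String(roleName)})
    if err != nil {
        return "", err
    }
    return aws.StringValue(out.Role.Arn), nil
}

func main() {
    sess := session.Must(session.NewSession())
    arn, err := resolveRoleARN(iam.New(sess), "gitlab-runner")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(arn) // e.g. arn:aws:iam::XXXXXXXXXXXX:role/gitlab-ci/gitlab-runner
}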

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

Between #333, #268, #153 and #98 - it would be good to get the duplicates closed and this tracked in one place.

This caught me too today, what a PIA indeed… I can confirm that an instance role with a path is not able to authenticate against the cluster - hopefully this gets fixed soon.

Jan 28 05:05:01 ip-10-31-8-66.us-west-1.compute.internal kubelet[3907]: E0128 05:05:01.251418    3907 kubelet_node_status.go:92] Unable to register node "ip-10-31-8-66.us-west-1.compute.internal" with API server: Unauthorized

Adding this in the hope it saves someone else a few hours of their life.

Any update? Is using paths in IAM a “bad practice” or not?

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

terraform workaround:

join("/", values(regex("(?P<prefix>arn:aws:iam::[0-9]+:role)/[^/]+/(?P<role>.*)", <role-arn>)))

This didn’t work for us on ARNs that contain nested “directories” in the path (e.g. arn:aws:iam::123456789012:role/with/nested/directories). Here’s what did work:

replace(<role-arn>, "//.*//", "/")
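For example, with a nested path (illustrative ARN; local names are arbitrary):

locals {
  role_arn = "arn:aws:iam::123456789012:role/with/nested/directories"

  # "//.*//" is the regex /.*/ (greedy), so everything between the first and
  # last slash in the resource part is collapsed, yielding
  # "arn:aws:iam::123456789012:role/directories".
  mapped_role_arn = replace(local.role_arn, "//.*//", "/")
}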

The issue seems to be here: https://github.com/kubernetes-sigs/aws-iam-authenticator/blob/85e50980d9d916ae95882176c18f14ae145f916f/pkg/arn/arn.go#L43

Not sure why, but the path is dropped from the ARN when doing the match.
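For what it’s worth, here is a rough sketch of the kind of normalization that would let both forms match, i.e. dropping the path from a role ARN before comparing (stripRolePath is a hypothetical helper for illustration, not the actual arn.go code):

package main

import (
    "fmt"
    "strings"

    awsarn "github.com/aws/aws-sdk-go/aws/arn"
)

// stripRolePath removes any path segments from an IAM role ARN, e.g.
// arn:aws:iam::123456789012:role/gitlab-ci/gitlab-runner becomes
// arn:aws:iam::123456789012:role/gitlab-runner.
func stripRolePath(roleARN string) (string, error) {
    parsed, err := awsarn.Parse(roleARN)
    if err != nil {
        return "", err
    }
    // Resource looks like "role/<path...>/<name>"; keep only the last segment.
    parts := strings.Split(parsed.Resource, "/")
    if len(parts) < 2 || parts[0] != "role" {
        return roleARN, nil // not a role ARN; leave it untouched
    }
    parsed.Resource = "role/" + parts[len(parts)-1]
    return parsed.String(), nil
}

func main() {
    out, _ := stripRolePath("arn:aws:iam::123456789012:role/gitlab-ci/gitlab-runner")
    fmt.Println(out) // arn:aws:iam::123456789012:role/gitlab-runner
}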

I’m not sure what you mean by that, @sftim.

The issue here is that the aws-auth configMap expects a roleArn, but you have to mangle the actual roleArn for it to work. When I submitted this, the caveat wasn’t documented (to my knowledge). Now this document seems to mention it:

https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html

Important

The role ARN cannot include a path. The format of the role ARN must be arn:aws:iam::<123456789012>:role/<role-name>. For more information, see aws-auth ConfigMap does not grant access to the cluster.

IMO, that means the roleArn field in the configMap isn’t the roleArn.

If the authentication works without the path, I would assume it’s easy for the logic that performs the authentication to handle the ARN with or without the path. That would save new users, who enter the actual roleArn into the configMap, from running into this odd behavior… without breaking functionality for everyone who has already entered a path-less roleArn in their config as a workaround.

We don’t use EKS, but have had this issue with 1.12 and 1.14.6 with aws-iam-authenticator. If you edit the configmap to remove the /gitlab-ci portion, and restart the pods, you will likely find that access works.

My co-worker and I suspect this is because of the way STS returns output for assumed-role session ARNs.

We have a role arn:aws:iam::000000000000:role/bosun/bosun_deploy that we use for cluster administration of our kops-created clusters.

If we assume the role and run aws sts get-caller-identity, we get the following:

{
    "UserId": "<redacted-AKID>:<redacted-userid>",
    "Account": "000000000000",
    "Arn": "arn:aws:sts::000000000000:assumed-role/bosun_deploy/<redacted-userid>"
}
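To illustrate: the assumed-role ARN only carries the role name and the session name, so anything that rebuilds the IAM role ARN from it necessarily produces a path-less ARN (assumedRoleToRoleARN is a hypothetical helper, just for illustration):

package main

import (
    "fmt"
    "strings"

    awsarn "github.com/aws/aws-sdk-go/aws/arn"
)

// assumedRoleToRoleARN rebuilds an IAM role ARN from an STS assumed-role ARN.
// The STS ARN has the form arn:aws:sts::<acct>:assumed-role/<role-name>/<session>
// and never includes the role's path, so the result cannot include one either.
func assumedRoleToRoleARN(stsARN string) (string, error) {
    parsed, err := awsarn.Parse(stsARN)
    if err != nil {
        return "", err
    }
    parts := strings.Split(parsed.Resource, "/") // ["assumed-role", "<role-name>", "<session>"]
    if len(parts) != 3 || parts[0] != "assumed-role" {
        return "", fmt.Errorf("not an assumed-role ARN: %s", stsARN)
    }
    return fmt.Sprintf("arn:aws:iam::%s:role/%s", parsed.AccountID, parts[1]), nil
}

func main() {
    out, _ := assumedRoleToRoleARN("arn:aws:sts::000000000000:assumed-role/bosun_deploy/some-session")
    fmt.Println(out) // arn:aws:iam::000000000000:role/bosun_deploy (the /bosun/ path is gone)
}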

I wish this were fixed; as of now, I’m not sure what to do other than creating a role with a shortened path and switching to it.

I suppose one can also just edit the role ARN that gets put into the configmap itself.