aws-cdk: [aws-eks] Can't log into fresh EKS cluster with SAML mastersRole

I used the CDK to create an EKS cluster and cannot log in, even though I set a role that I am able to assume as the mastersRole. Unlike https://github.com/aws/aws-cdk/issues/3752, I did set the mastersRole.

I followed the example here: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-eks-readme.html

Reproduction Steps

Initially I thought that setting the mastersRole would be enough:

// SAML-federated admin role, referenced by ARN
const clusterAdmin = iam.Role.fromRoleArn(this, 'AdminRole',
  'arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team');

const cluster = new eks.Cluster(this, 'KubeFlowCluster', {
  defaultCapacity: 3,
  defaultCapacityInstance: new ec2.InstanceType('t3.large'),
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

I thought that should also set up the aws-auth mapping in EKS, but I have since added the following as well, which also didn't help:

cluster.awsAuth.addMastersRole(clusterAdmin);

In fact this wasn't necessary (it just added a duplicate masters role entry), but I wanted to illustrate what I tried.
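
As a sanity check, the synthesized template can be inspected before deploying to confirm what actually lands in the aws-auth manifest (a rough sketch; assumes a single-stack app, so cdk synth needs no stack name):

# Synthesize the stack and count how many masters mappings end up in the manifest
cdk synth > template.yaml
grep -o 'system:masters' template.yaml | wc -l   # each mapping is one hit; extra hits mean duplicates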

Error Log

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get cluster
NAME                                                        REGION
KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3    eu-west-1

(base) ➜ kubeflow-eks git:(master) ✗ eksctl get iamidentitymapping --cluster KubeFlowCluster6318BD13-370645a8943946f49942987f1352f2c3
Error: getting auth ConfigMap: Unauthorized

Environment

  • CLI Version: 1.27.0 (build a98c0b3)
  • Framework Version: node v11.10.1
  • OS: OS X
  • Language: typescript

Other

This is the CloudFormation template section generated by the CDK for the aws-auth manifest:


"KubeFlowClusterAwsAuthmanifest4ABE9919": {
      "Type": "Custom::AWSCDK-EKS-KubernetesResource",
      "Properties": {
        "ServiceToken": {
          "Fn::GetAtt": [
            "awscdkawseksKubectlProviderNestedStackawscdkawseksKubectlProviderNestedStackResourceA7AEBA6B",
            "Outputs.KubeflowEksDevawscdkawseksKubectlProviderframeworkonEventA20B6922Arn"
          ]
        },
        "Manifest": {
          "Fn::Join": [
            "",
            [
              "[{\"apiVersion\":\"v1\",\"kind\":\"ConfigMap\",\"metadata\":{\"name\":\"aws-auth\",\"namespace\":\"kube-system\"},\"data\":{\"mapRoles\":\"[{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]},{\\\"rolearn\\\":\\\"",
              {
                "Fn::GetAtt": [
                  "KubeFlowClusterDefaultCapacityInstanceRoleE883FDD5",
                  "Arn"
                ]
              },
              "\\\",\\\"username\\\":\\\"system:node:{{EC2PrivateDNSName}}\\\",\\\"groups\\\":[\\\"system:bootstrappers\\\",\\\"system:nodes\\\"]},{\\\"rolearn\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"username\\\":\\\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\\\",\\\"groups\\\":[\\\"system:masters\\\"]}]\",\"mapUsers\":\"[]\",\"mapAccounts\":\"[]\"}}]"
            ]
          ]
        },

It may not be obvious from the template, but the resulting ConfigMap doesn't look correct: mapRoles appears to be a JSON array serialized inside a string rather than an actual array object.

apiVersion: v1
data:
  mapAccounts: '[]'
  mapRoles: '[{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]},{"rolearn":"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH","username":"system:node:{{EC2PrivateDNSName}}","groups":["system:bootstrappers","system:nodes"]},{"rolearn":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","username":"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team","groups":["system:masters"]}]'
  mapUsers: '[]'
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mapAccounts":"[]","mapRoles":"[{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/KubeflowEks-Dev-KubeFlowClusterDefaultCapacityInst-1SBZV2PTF6QIH\",\"username\":\"system:node:{{EC2PrivateDNSName}}\",\"groups\":[\"system:bootstrappers\",\"system:nodes\"]},{\"rolearn\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"username\":\"arn:aws:iam::674300753731:role/CimpressADFS/vistaprint/aws-vbumodelscoring-management-team\",\"groups\":[\"system:masters\"]}]","mapUsers":"[]"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-auth","namespace":"kube-system"}}
  creationTimestamp: "2020-03-08T14:19:08Z"
  name: aws-auth
  namespace: kube-system
  resourceVersion: "4538"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: c65c4c0b-6147-11ea-a6b1-02aa720c17c2

This is a 🐛 Bug Report.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 20 (6 by maintainers)

Most upvoted comments

Based on experimentation, I have found it works if I do two things (see the snippet below):

  • create a role rather than using the SAML role directly (presumably because the SAML role's trust policy only allows sts:AssumeRoleWithSAML, so it cannot be assumed through the plain sts:AssumeRole call that kubectl's credential flow uses)
  • set the aws-auth mapping before declaring the node group
// Dedicated admin role that can be assumed from within the account
const clusterAdmin = new iam.Role(this, `eks-cluster-admin-${id}`, {
  assumedBy: new iam.AccountRootPrincipal(),
});

const cluster = new eks.Cluster(this, "FeastCluster", {
  defaultCapacity: 0,
  mastersRole: clusterAdmin,
  vpc: vpc,
  vpcSubnets: [{ subnets: vpc.privateSubnets }],
});

// Map the admin role into aws-auth before declaring the node group
cluster.awsAuth.addMastersRole(clusterAdmin);

cluster.addNodegroup("NGDefault", {
  instanceType: new ec2.InstanceType("t3.large"),
  diskSize: 100,
  minSize: 3,
  maxSize: 6,
});
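
With this in place, access can be verified by generating a kubeconfig that assumes the new admin role (a hedged sketch; the cluster name, region, and role ARN are placeholders for whatever the stack actually created):

# Point kubectl at the cluster while assuming the admin role
aws eks update-kubeconfig \
  --region eu-west-1 \
  --name <cluster-name> \
  --role-arn arn:aws:iam::<account-id>:role/<eks-cluster-admin-role>

kubectl get nodes   # should now succeed instead of returning Unauthorized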

@dr3s I can email you the script