karpenter-provider-aws: Could not launch node, launching instances, with fleet error(s), UnauthorizedOperation: You are not authorized to perform this operation.

Is an existing page relevant? https://karpenter.sh/v0.6.5/getting-started/getting-started-with-terraform/

What karpenter features are relevant? aws_iam_instance_profile

How should the docs be improved?

resource "aws_iam_instance_profile" "karpenter" {
  name = "KarpenterNodeInstanceProfile-${var.cluster_name}"
  role = module.eks.worker_iam_role_name
}

From the above example, I have to assign the worker's IAM role name to the instance profile. The problem is that in the latest EKS module version the worker_iam_role_name output no longer exists.

I am not sure what value I should use here, because I don't know what permissions Karpenter needs for aws_iam_instance_profile.

Any suggestions?
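
For reference, the direction the thread takes below is to point the instance profile at a node group role exposed by v18+ of the EKS module. A minimal sketch, assuming an EKS-managed node group keyed "initial" (matching the example further down) and that the node group submodule exposes an iam_role_name output:

resource "aws_iam_instance_profile" "karpenter" {
  name = "KarpenterNodeInstanceProfile-${var.cluster_name}"
  # v18+ keeps the node IAM role with each node group rather than a
  # cluster-wide worker role, so reference the node group's role name here.
  role = module.eks.eks_managed_node_groups["initial"].iam_role_name
}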

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave “+1” or “me too” comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 12
  • Comments: 33 (15 by maintainers)

Most upvoted comments

@midestefanis if you update your IRSA role to the following, this should start working for your use case now:

module "karpenter_irsa" {
  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  version = "4.19.0" # <= available starting in 4.19.0

  role_name                          = "karpenter-controller-${local.cluster_name}"
  attach_karpenter_controller_policy = true

  karpenter_tag_key               = "karpenter.sh/discovery/${module.eks.cluster_id}" # <= this
  karpenter_controller_cluster_id = module.eks.cluster_id
  karpenter_controller_node_iam_role_arns = [
    module.eks.eks_managed_node_groups["initial"].iam_role_arn
  ]

  oidc_providers = {
    ex = {
      provider_arn               = module.eks.oidc_provider_arn
      namespace_service_accounts = ["karpenter:karpenter"]
    }
  }
}
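
For context, the IRSA role created above is what gets annotated on Karpenter's service account through the Helm release, roughly as in the getting-started guide; the sketch below assumes the v0.6.x chart layout (repository, chart values, and version are assumptions, not taken from this thread):

resource "helm_release" "karpenter" {
  namespace        = "karpenter"
  create_namespace = true

  name       = "karpenter"
  repository = "https://charts.karpenter.sh"
  chart      = "karpenter"
  version    = "v0.6.5"

  # IRSA: annotate the controller's service account with the role created above
  set {
    name  = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
    value = module.karpenter_irsa.iam_role_arn
  }

  set {
    name  = "clusterName"
    value = module.eks.cluster_id
  }

  set {
    name  = "clusterEndpoint"
    value = module.eks.cluster_endpoint
  }

  # Instance profile handed to the nodes Karpenter launches
  set {
    name  = "aws.defaultInstanceProfile"
    value = aws_iam_instance_profile.karpenter.name
  }
}

The UnauthorizedOperation error in the title is raised by this controller role when it calls CreateFleet/RunInstances, which is why adjusting the IRSA policy (rather than the node instance profile) is what resolves it.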

FYI, it seems that Karpenter is not applying default tags to AWS resources per the documentation outlined here. This is relevant because the condition in the Terraform Karpenter IRSA module could instead target one of our default tags.

I’m working on a fix now.

Hey @midestefanis, I think I see the issue here. Notice that in the Karpenter IRSA Terraform module there is a condition expecting a tag on the resources used by the RunInstances action.

The “tags” key in your Provisioner spec must be the same, otherwise the condition will not match. If you modify your provisioner to instead use the below, it should work as expected:

tags:
  karpenter.sh/discovery: ${var.cluster_name}

@bryantbiggs , Would it make sense to make the tags configurable in the Karpenter IRSA Terraform module instead of setting it as a static value?

#1332 was merged yesterday, which updates the Karpenter Terraform Getting Started guide so it uses v18+ of the Terraform EKS module. The Karpenter IRSA module has also been updated with some fixes. Thanks to @bryantbiggs for his hard work on this PR!

@cradules, any opposition to closing out this issue? Any other questions or problems you would like to bring up?

Sorry, I have not had time to work on this issue for the last few days. I hope to be less busy next week and to have the chance to dig some more, especially since I now have some starting points that emerged from @shane-snyder's input, and I want to thank him for that!

Similar issue on my end, using the eks module (latest version) and the iam-role-for-service-accounts-eks module. While troubleshooting, I found I can get it working by removing the below condition from the controller policy, if that's any help.

        "Condition": {
            "StringEquals": {
                "ec2:ResourceTag/karpenter.sh/discovery": "cluster-name"
            }
        },

I managed to work around this by adding the IAM policy manually using aws_iam_role_policy.

With terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks, I have attach_karpenter_controller_policy = false
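
For anyone taking the same route, here is a minimal sketch of that workaround. The action list is illustrative rather than the exact set Karpenter requires, and the iam_role_name output of the IRSA module is assumed:

resource "aws_iam_role_policy" "karpenter_controller" {
  name = "karpenter-controller"
  role = module.karpenter_irsa.iam_role_name

  # Broad, unconditioned permissions for the Karpenter controller; scope these
  # down (e.g. with resource ARNs or tag conditions) for anything beyond testing.
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ec2:CreateLaunchTemplate",
          "ec2:CreateFleet",
          "ec2:RunInstances",
          "ec2:CreateTags",
          "ec2:TerminateInstances",
          "ec2:DescribeLaunchTemplates",
          "ec2:DescribeInstances",
          "ec2:DescribeSecurityGroups",
          "ec2:DescribeSubnets",
          "ec2:DescribeInstanceTypes",
          "ec2:DescribeInstanceTypeOfferings",
          "ec2:DescribeAvailabilityZones",
          "ssm:GetParameter",
          "iam:PassRole"
        ]
        Resource = "*"
      }
    ]
  })
}

Note that this trades the module's scoped-down tag condition for a much broader policy, so treat it as a workaround rather than a fix.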

Update:

By manually attaching the AutoScalingFullAccess policy to role/default-eks-node-group-20220309132314999100000002 I am able to bring nodes into the cluster, but they remain in a NotReady state:

kubectl get nodes
NAME                                            STATUS     ROLES    AGE     VERSION
ip-192-168-143-76.us-east-2.compute.internal    NotReady   <none>   3m5s
ip-192-168-146-173.us-east-2.compute.internal   NotReady   <none>   9m13s
ip-192-168-56-216.us-east-2.compute.internal    Ready      <none>   5h5m    v1.21.5-eks-9017834

I did attach the vpc_cni_policy to karpenter_irsa:

module "karpenter_irsa" {
  source    = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
  role_name = "karpenter-controller-${var.eks-cluster-name}"

  attach_karpenter_controller_policy     = true
  attach_cluster_autoscaler_policy       = true
  attach_ebs_csi_policy                  = true
  attach_node_termination_handler_policy = true
  attach_load_balancer_controller_policy = true
  attach_vpc_cni_policy                  = true
  attach_external_dns_policy             = true
------