terraform-provider-kubernetes: Error `The configmap "aws-auth" does not exist` when deploying to AWS EKS
Terraform Version, Provider Version and Kubernetes Version
Terraform v1.1.9
on linux_amd64
+ provider registry.terraform.io/hashicorp/aws v3.75.1
+ provider registry.terraform.io/hashicorp/cloudinit v2.2.0
+ provider registry.terraform.io/hashicorp/external v2.2.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.11.0
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.1.3
+ provider registry.terraform.io/hashicorp/tls v3.4.0
eks module ~> 18.0
Affected Resource(s)
Terraform Configuration Files
This is my .tf file:
data "aws_eks_cluster" "default" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "default" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
exec {
api_version = "client.authentication.k8s.io/v1alpha1"
args = ["eks", "get-token", "--cluster-name", var.cluster_name, "--profile", var.customer-var.environment]
command = "aws"
}
# token = data.aws_eks_cluster_auth.default.token
}
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 18.0"
cluster_name = var.cluster_name
cluster_version = var.cluster_version
cluster_endpoint_private_access = true
cluster_endpoint_public_access = false
cluster_enabled_log_types = ["api", "audit", "authenticator", "controllerManager", "scheduler"]
cluster_addons = {
coredns = {
resolve_conflicts = "OVERWRITE"
}
kube-proxy = {}
vpc-cni = {
resolve_conflicts = "OVERWRITE"
}
}
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
cluster_encryption_config = [{
provider_key_arn = var.kms_key_id
resources = ["secrets"]
}]
# EKS Managed Node Group(s)
eks_managed_node_group_defaults = {
disk_size = 50
instance_types = ["c5.large"]
}
eks_managed_node_groups = {
"${var.ng1_name}" = {
min_size = var.ng1_min_size
max_size = var.ng1_max_size
desired_size = var.ng1_desired_size
instance_types = var.ng1_instance_types
capacity_type = "ON_DEMAND"
update_config = {
max_unavailable_percentage = 50
}
tags = var.tags
}
}
node_security_group_additional_rules = var.ng1_additional_sg_rules
# aws-auth configmap
manage_aws_auth_configmap = true
tags = var.tags
}
Debug Output
Panic Output
│ Error: The configmap "aws-auth" does not exist
│
│ with module.eks-cluster.module.eks.kubernetes_config_map_v1_data.aws_auth[0],
│ on .terraform/modules/eks-cluster.eks/main.tf line 431, in resource "kubernetes_config_map_v1_data" "aws_auth":
│ 431: resource "kubernetes_config_map_v1_data" "aws_auth" {
Steps to Reproduce
Expected Behavior
The EKS cluster and node groups are deployed, and the nodes register with the cluster.
Actual Behavior
The cluster deploys, but the node groups are neither created nor registered to the cluster.
Important Factoids
Deploying to AWS EKS
References
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
About this issue
- Original URL
- State: open
- Created 2 years ago
- Reactions: 30
- Comments: 18 (1 by maintainers)
For anyone landing here - this issue is not related to the Kubernetes provider.

In the terraform-aws-eks module we have provided both manage_aws_auth_configmap and create_aws_auth_configmap because of this (and for backwards-compatibility support). If you are creating a new cluster, you should be OK with setting both of these to true. HOWEVER - please understand that it is not foolproof and there is a race condition: if the EKS managed node group or Fargate profile creates the configmap before Terraform does, it will fail with the error that the configmap already exists. Conversely, if you only have manage_aws_auth_configmap and are relying on EKS managed node groups or Fargate profiles to create the configmap, you will most likely see the error about the configmap not existing yet.

In short:
- Creating a new cluster: manage_aws_auth_configmap = true and create_aws_auth_configmap = true
- Nothing else (no EKS managed node group or Fargate profile) will create the configmap: manage_aws_auth_configmap = true and create_aws_auth_configmap = true, because one will NOT be automatically created for you
- An EKS managed node group or Fargate profile will create the configmap: manage_aws_auth_configmap = true
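A minimal sketch of the flag combination the comment above recommends for a brand-new cluster, assuming the terraform-aws-eks module v18 inputs from the original configuration (everything else unchanged):

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... cluster, VPC and node group settings as in the original config ...

  # Terraform both creates and manages the aws-auth configmap.
  # Beware the race condition noted above: if an EKS managed node group
  # or Fargate profile creates the configmap first, the create step fails
  # with "configmap already exists".
  create_aws_auth_configmap = true
  manage_aws_auth_configmap = true
}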
Tried this route and still came up with the same error on creating a net new EKS cluster 😭

Dirty override that works for me: start with manage_aws_auth_configmap = false and create_aws_auth_configmap = false, then once the cluster exists flip manage_aws_auth_configmap to true and set what you want.
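A sketch of that two-step flow, assuming a hypothetical manage_aws_auth variable (not part of the original posts) that is flipped between the first and second apply:

variable "manage_aws_auth" {
  description = "Flip to true on the second apply, once the cluster and its aws-auth configmap exist"
  type        = bool
  default     = false
}

module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 18.0"

  # ... other settings unchanged ...

  # Hypothetical toggle: the first apply leaves aws-auth alone,
  # the second apply adopts and manages it.
  create_aws_auth_configmap = false
  manage_aws_auth_configmap = var.manage_aws_auth
}

Run terraform apply with the default first, then terraform apply -var="manage_aws_auth=true" after the cluster is up.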
Same problem: I set both manage_aws_auth_configmap and create_aws_auth_configmap to false during EKS creation, and after creation succeeded I set manage_aws_auth_configmap to true - still the same issue, and when I check the EKS cluster, the aws-auth configmap already exists.

Looking at the Kubernetes APIs: with force = true on the kubernetes_config_map_v1_data resource, shouldn't the provider first try a GET /api/v1/namespaces/{namespace}/configmaps/{name}, then if that returns a 404 do a POST /api/v1/namespaces/{namespace}/configmaps/{name}, and if the GET returns a 200 do a PUT /api/v1/namespaces/{namespace}/configmaps/{name}? Isn't that the whole point of force = true on kubernetes_config_map_v1_data?

Hey guys! I think there is a lot of work to do here. The terraform module has to do the following:
terraform output
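For reference, a hedged sketch (not the eks module's actual code) of the resource and flag discussed in the force = true comment above. In the hashicorp/kubernetes provider, force takes ownership of fields held by other field managers on an existing configmap; it does not create the configmap when it is missing, which is the error reported in this issue. The data content below is placeholder:

resource "kubernetes_config_map_v1_data" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    # Placeholder: mapRoles/mapUsers entries would normally go here.
    mapRoles = yamlencode([])
  }

  # Overrides other field managers (e.g. EKS) for the keys in data above,
  # but only on a configmap that already exists.
  force = true
}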