terraform-provider-eksctl: Changing the spec does not trigger an update.

I have the following main.tf:

resource "eksctl_cluster" "preprod" {
  eksctl_bin = "eksctl"
  eksctl_version = "0.29.2"
  name = "${local.resource_prefix}-eks-cluster"
  region = var.region
  vpc_id = var.vpc_id
  version = "1.14"

  spec = templatefile(local.config_file, merge(var.eks_config_placeholders, {
    __node_group_role_arn__ = aws_iam_role.node-group-role.arn
    __node_group_role_profile_arn__ = aws_iam_instance_profile.node-group-role-profile.arn
    __cluster_role_arn__ = aws_iam_role.cluster-role.arn
    __project_tags__ = indent(6, yamlencode(var.project_tags))
  }))

  depends_on = [
    aws_iam_instance_profile.cluster-role-profile,
    aws_iam_instance_profile.node-group-role-profile
  ]
}
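For context, templatefile simply interpolates ${placeholder} tokens in the file with the merged map values, and indent(6, ...) pads every line after the first so the YAML nests correctly under tags:. A minimal Python sketch of that substitution (the placeholder names and ARN are illustrative, not from a real cluster):

```python
def render(template: str, values: dict) -> str:
    """Mimic Terraform's templatefile(): replace ${key} tokens with values."""
    out = template
    for key, val in values.items():
        out = out.replace("${" + key + "}", val)
    return out

def indent(spaces: int, text: str) -> str:
    """Mimic Terraform's indent(): indent every line except the first."""
    lines = text.splitlines()
    if len(lines) < 2:
        return text
    pad = " " * spaces
    return lines[0] + "\n" + "\n".join(pad + line for line in lines[1:])

# Hypothetical values standing in for the IAM role outputs above.
template = "iam:\n  serviceRoleARN: ${__cluster_role_arn__}\n"
rendered = render(template, {
    "__cluster_role_arn__": "arn:aws:iam::123456789012:role/cluster-role",
})
print(rendered)
```

The key point is that the rendered result is one big string: Terraform only sees the spec attribute as text, so any change to the template or its inputs shows up as a string diff in the plan.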

and the template file is

vpc:
  subnets:
    private:
      ${__az0__}:
        id: ${__sb0__}
      ${__az1__}:
        id: ${__sb1__}
cloudWatch:
  clusterLogging:
    enableTypes:
      [ "audit", "authenticator", "controllerManager", "scheduler", "api" ]

iam:
  serviceRoleARN: ${__cluster_role_arn__}

nodeGroups:
  - name: ${__spot_big_ng_name__}
    minSize: 0
    maxSize: 10
    desiredCapacity: 1
    privateNetworking: true
    instancesDistribution:
      instanceTypes: [ "t2.xlarge" ]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotAllocationStrategy: "capacity-optimized"
    labels:
      lifecycle: Ec2Spot
      nodegroup-role: worker
      node-role.spot-worker: "true"
    tags:
      ${__project_tags__}
      k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
      k8s.io/cluster-autoscaler/node-template/label/intent: apps
    iam:
      instanceRoleARN: ${__node_group_role_arn__}
      instanceProfileARN: ${__node_group_role_profile_arn__}

  - name: ${__spot_small_ng_name__}
    minSize: 0
    maxSize: 10
    desiredCapacity: 1
    privateNetworking: true
    instancesDistribution:
      instanceTypes: [ "t2.xlarge" ]
      onDemandBaseCapacity: 0
      onDemandPercentageAboveBaseCapacity: 0
      spotAllocationStrategy: "capacity-optimized"
    labels:
      lifecycle: Ec2Spot
      nodegroup-role: worker
      node-role.spot-worker: "true"
    tags:
      ${__project_tags__}
      k8s.io/cluster-autoscaler/node-template/label/lifecycle: Ec2Spot
      k8s.io/cluster-autoscaler/node-template/label/intent: apps
    iam:
      instanceRoleARN: ${__node_group_role_arn__}
      instanceProfileARN: ${__node_group_role_profile_arn__}

managedNodeGroups:
  - name: ${__managed_addon_ng_name__}
    instanceType: t2.xlarge
    privateNetworking: true
    minSize: 2
    desiredCapacity: 2
    maxSize: 10
    volumeSize: 50
    labels:
      role: add-ons-platform
    tags:
      ${__project_tags__}
      nodegroup-role: worker
      lifecycle: OnDemand
    iam:
      instanceRoleARN: ${__node_group_role_arn__}

  - name: ${__managed_worker_ng_name__}
    instanceType: t2.xlarge
    privateNetworking: true
    minSize: 2
    desiredCapacity: 2
    maxSize: 10
    volumeSize: 50
    labels:
      role: worker
      node-role.worker: "true"
    tags:
      ${__project_tags__}
      nodegroup-role: worker
      lifecycle: OnDemand
      k8s.io/cluster-autoscaler/node-template/taint/onDemandInstance: "true:PreferNoSchedule"
    iam:
      instanceRoleARN: ${__node_group_role_arn__}

  - name: ${__managed_addon_monitoring_ng_name__}
    instanceType: t2.xlarge
    privateNetworking: true
    minSize: 2
    desiredCapacity: 2
    maxSize: 5
    volumeSize: 50
    labels:
      role: add-ons-monitoring
    tags:
      ${__project_tags__}
      nodegroup-role: worker
      lifecycle: OnDemand
    iam:
      instanceRoleARN: ${__node_group_role_arn__}

When I change minSize/desiredCapacity/maxSize and run terraform apply, nothing changes in the AWS state, even though terraform plan shows the following diff:

          -     minSize: 2
          -     desiredCapacity: 2
          +     minSize: 3
          +     desiredCapacity: 3
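So Terraform itself detects the change: the rendered spec string differs, and the plan surfaces exactly the changed lines, much like a plain line diff. The problem is on the apply side, where the provider has to translate that diff into an actual eksctl update. A small sketch of how such a textual diff exposes the changed fields, using Python's stdlib difflib with made-up before/after snippets:

```python
import difflib

# Hypothetical before/after fragments of the rendered spec string.
old_spec = "minSize: 2\ndesiredCapacity: 2\nmaxSize: 10\n"
new_spec = "minSize: 3\ndesiredCapacity: 3\nmaxSize: 10\n"

diff = list(difflib.unified_diff(
    old_spec.splitlines(), new_spec.splitlines(), lineterm=""))

# Keep only the added/removed lines, dropping the ---/+++ file headers.
changed = [line for line in diff
           if line.startswith(("+", "-"))
           and not line.startswith(("+++", "---"))]
print("\n".join(changed))
```

Detecting the change is the easy half; actually scaling the nodegroup requires the provider to run the corresponding eksctl command on apply, which is what this issue is about.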

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17 (10 by maintainers)

Most upvoted comments

Closing as fixed via #48. Thanks, everyone!