autoscaler: Cluster Autoscaler scaling from 0 on AWS EKS stops working after CA pod is restarted

Which component are you using?: cluster-autoscaler

What version of the component are you using?:

Component version: 1.18.3

What k8s version are you using (kubectl version)?: 1.18

kubectl version Output
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-14T14:49:35Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.9-eks-d1db3c", GitCommit:"d1db3c46e55f95d6a7d3e5578689371318f95ff9", GitTreeState:"clean", BuildDate:"2020-10-20T22:18:07Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}

What environment is this in?: AWS EKS

What did you expect to happen?: I have a cluster-autoscaler deployed on AWS EKS, scaling three non-managed nodegroups from 0, and it runs smoothly. However, after a period of time (somewhere between 14 and 20 days), the autoscaler seems to "lose" visibility of some of the nodegroups and starts failing to find a place to schedule the pods.

Please note that this behavior has been consistent for at least the past three months, across different versions of the autoscaler and different versions of Kubernetes - I have also tried this on Kubernetes 1.17 with cluster-autoscaler versions 1.17.4 and 1.17.3.

Also please note that no modifications have been made to the pod spec or the nodes that I have been using (the pods get deployed as part of a scheduled job).

To resolve this issue, I have to manually scale the nodegroup to a non-zero number of nodes that differs from the current "desired capacity"; the nodegroups then become visible to the autoscaler again and it resumes functioning properly.

Please see the following logs from when the autoscaler fails to fit a pod:

I1221 18:43:28.403351       1 filter_out_schedulable.go:60] Filtering out schedulables
I1221 18:43:28.403424       1 filter_out_schedulable.go:116] 0 pods marked as unschedulable can be scheduled.
I1221 18:43:28.403455       1 filter_out_schedulable.go:77] No schedulable pods
I1221 18:43:28.403469       1 scale_up.go:322] Pod prefect/prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 is unschedulable
I1221 18:43:28.403522       1 scale_up.go:360] Upcoming 0 nodes
I1221 18:43:28.403622       1 scale_up.go:284] Pod prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 can't be scheduled on eks-54bb0f80-0061-9aef-9229-269f75a7df33, predicate checking error: node(s) didn't match node selector; predicateName=NodeAffinity; reasons: node(s) didn't match node selector; debugInfo=
I1221 18:43:28.403644       1 scale_up.go:433] No pod can fit to eks-54bb0f80-0061-9aef-9229-269f75a7df33
I1221 18:43:28.403700       1 scale_up.go:284] Pod prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 can't be scheduled on eks-eabb0f7f-f1a7-83f0-3ec3-3f040fa0b843, predicate checking error: Insufficient cpu, Insufficient memory; predicateName=NodeResourcesFit; reasons: Insufficient cpu, Insufficient memory; debugInfo=
I1221 18:43:28.403732       1 scale_up.go:433] No pod can fit to eks-eabb0f7f-f1a7-83f0-3ec3-3f040fa0b843
I1221 18:43:28.403802       1 scale_up.go:284] Pod prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 can't be scheduled on eksctl-production-datascience-nodegroup-prefect-m5-8xlarge-NodeGroup-I7NS8B2JZGKP, predicate checking error: node(s) didn't match node selector; predicateName=NodeAffinity; reasons: node(s) didn't match node selector; debugInfo=
I1221 18:43:28.403822       1 scale_up.go:433] No pod can fit to eksctl-production-datascience-nodegroup-prefect-m5-8xlarge-NodeGroup-I7NS8B2JZGKP
I1221 18:43:28.403883       1 scale_up.go:284] Pod prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 can't be scheduled on eksctl-production-datascience-nodegroup-prefect-m5-xlarge-NodeGroup-YY863OMC4245, predicate checking error: node(s) didn't match node selector; predicateName=NodeAffinity; reasons: node(s) didn't match node selector; debugInfo=
I1221 18:43:28.403902       1 scale_up.go:433] No pod can fit to eksctl-production-datascience-nodegroup-prefect-m5-xlarge-NodeGroup-YY863OMC4245
I1221 18:43:28.403961       1 scale_up.go:284] Pod prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6 can't be scheduled on eksctl-production-datascience-nodegroup-prefect-r5-xlarge-NodeGroup-1IPGCCDQ4SVRT, predicate checking error: node(s) didn't match node selector; predicateName=NodeAffinity; reasons: node(s) didn't match node selector; debugInfo=
I1221 18:43:28.403982       1 scale_up.go:433] No pod can fit to eksctl-production-datascience-nodegroup-prefect-r5-xlarge-NodeGroup-1IPGCCDQ4SVRT
I1221 18:43:28.403999       1 scale_up.go:437] No expansion options
I1221 18:43:28.404049       1 static_autoscaler.go:436] Calculating unneeded nodes
I1221 18:43:28.404069       1 pre_filtering_processor.go:66] Skipping ip-192-168-24-13.us-west-2.compute.internal - node group min size reached
I1221 18:43:28.404085       1 pre_filtering_processor.go:66] Skipping ip-192-168-122-181.us-west-2.compute.internal - node group min size reached
I1221 18:43:28.404116       1 scale_down.go:421] Node ip-192-168-24-12.us-west-2.compute.internal - cpu utilization 0.653061
I1221 18:43:28.404129       1 scale_down.go:424] Node ip-192-168-24-12.us-west-2.compute.internal is not suitable for removal - cpu utilization too big (0.653061)
I1221 18:43:28.404142       1 scale_down.go:487] Scale-down calculation: ignoring 1 nodes unremovable in the last 5m0s
I1221 18:43:28.404135       1 event.go:278] Event(v1.ObjectReference{Kind:"Pod", Namespace:"prefect", Name:"prefect-dask-job-3b7e73fc-1750-44b6-8802-167512cf1681-295x6", UID:"adfc2034-0975-4435-9620-bdefc1ab1c5e", APIVersion:"v1", ResourceVersion:"11981668", FieldPath:""}): type: 'Normal' reason: 'NotTriggerScaleUp' pod didn't trigger scale-up (it wouldn't fit if a new node is added): 4 node(s) didn't match node selector, 1 Insufficient cpu, 1 Insufficient memory
I1221 18:43:28.404167       1 static_autoscaler.go:490] Scale down status: unneededOnly=false lastScaleUpTime=2020-12-18 21:08:17.497046574 +0000 UTC m=+21.122273595 lastScaleDownDeleteTime=2020-12-19 14:26:28.596956979 +0000 UTC m=+62312.222184059 lastScaleDownFailTime=2020-12-18 21:08:17.497046728 +0000 UTC m=+21.122273750 scaleDownForbidden=false isDeleteInProgress=false scaleDownInCooldown=false
I1221 18:43:28.404212       1 static_autoscaler.go:503] Starting scale down
I1221 18:43:28.404251       1 scale_down.go:867] No candidates for scale down
I1221 18:43:38.417418       1 static_autoscaler.go:226] Starting main loop
W1221 18:43:38.418114       1 aws_manager.go:315] Found multiple availability zones for ASG "eksctl-production-datascience-nodegroup-prefect-m5-8xlarge-NodeGroup-I7NS8B2JZGKP"; using us-west-2a
W1221 18:43:38.456626       1 aws_manager.go:315] Found multiple availability zones for ASG "eksctl-production-datascience-nodegroup-prefect-r5-xlarge-NodeGroup-1IPGCCDQ4SVRT"; using us-west-2a
I1221 18:43:38.513330       1 filter_out_schedulable.go:60] Filtering out schedulables
I1221 18:43:38.513352       1 filter_out_schedulable.go:116] 0 pods marked as unschedulable can be scheduled.
I1221 18:43:38.513361       1 filter_out_schedulable.go:77] No schedulable pods
I1221 18:43:38.513380       1 static_autoscaler.go:389] No unschedulable pods

Then, after my manual intervention to scale the nodegroup, it works fine; here are some sample logs:

I1221 18:53:15.740135       1 cluster.go:148] Fast evaluation: ip-192-168-124-127.us-west-2.compute.internal for removal
I1221 18:53:15.740152       1 cluster.go:168] Fast evaluation: node ip-192-168-124-127.us-west-2.compute.internal cannot be removed: pod annotated as not safe to evict present: prefect-dask-job-ee62a87c-f5ef-4ba7-b2e4-a745b5a37ba9-dgljg
I1221 18:53:15.740169       1 scale_down.go:591] 1 nodes found to be unremovable in simulation, will re-check them at 2020-12-21 18:58:15.688680268 +0000 UTC m=+251419.313907325
I1221 18:53:15.740231       1 static_autoscaler.go:479] ip-192-168-62-233.us-west-2.compute.internal is unneeded since 2020-12-21 18:48:32.64088555 +0000 UTC m=+250836.266112601 duration 4m43.047794724s
I1221 18:53:15.740261       1 static_autoscaler.go:490] Scale down status: unneededOnly=false lastScaleUpTime=2020-12-18 21:08:17.497046574 +0000 UTC m=+21.122273595 lastScaleDownDeleteTime=2020-12-19 14:26:28.596956979 +0000 UTC m=+62312.222184059 lastScaleDownFailTime=2020-12-18 21:08:17.497046728 +0000 UTC m=+21.122273750 scaleDownForbidden=false isDeleteInProgress=false scaleDownInCooldown=false
I1221 18:53:15.740280       1 static_autoscaler.go:503] Starting scale down
I1221 18:53:15.740348       1 scale_down.go:790] ip-192-168-62-233.us-west-2.compute.internal was unneeded for 4m43.047794724s
I1221 18:53:15.740378       1 scale_down.go:867] No candidates for scale down
I1221 18:53:15.740423       1 delete.go:193] Releasing taint {Key:DeletionCandidateOfClusterAutoscaler Value:1608576512 Effect:PreferNoSchedule TimeAdded:<nil>} on node ip-192-168-124-127.us-west-2.compute.internal
I1221 18:53:15.751685       1 delete.go:220] Successfully released DeletionCandidateTaint on node ip-192-168-124-127.us-west-2.compute.internal
I1221 18:53:25.764228       1 static_autoscaler.go:226] Starting main loop
I1221 18:53:25.894081       1 auto_scaling_groups.go:351] Regenerating instance to ASG map for ASGs: [eks-54bb0f80-0061-9aef-9229-269f75a7df33 eks-eabb0f7f-f1a7-83f0-3ec3-3f040fa0b843 eksctl-production-datascience-nodegroup-prefect-m5-8xlarge-NodeGroup-I7NS8B2JZGKP eksctl-production-datascience-nodegroup-prefect-m5-xlarge-NodeGroup-YY863OMC4245 eksctl-production-datascience-nodegroup-prefect-r5-xlarge-NodeGroup-1IPGCCDQ4SVRT]
I1221 18:53:26.006657       1 auto_scaling.go:199] 5 launch configurations already in cache
I1221 18:53:26.006951       1 aws_manager.go:269] Refreshed ASG list, next refresh after 2020-12-21 18:54:26.006946035 +0000 UTC m=+251189.632173088
W1221 18:53:26.007852       1 aws_manager.go:315] Found multiple availability zones for ASG "eksctl-production-datascience-nodegroup-prefect-r5-xlarge-NodeGroup-1IPGCCDQ4SVRT"; using us-west-2a
I1221 18:53:26.044308       1 filter_out_schedulable.go:60] Filtering out schedulables
I1221 18:53:26.044327       1 filter_out_schedulable.go:116] 0 pods marked as unschedulable can be scheduled.
I1221 18:53:26.044335       1 filter_out_schedulable.go:77] No schedulable pods
I1221 18:53:26.044355       1 static_autoscaler.go:389] No unschedulable pods
I1221 18:53:26.044368       1 static_autoscaler.go:436] Calculating unneeded nodes
I1221 18:53:26.044382       1 pre_filtering_processor.go:66] Skipping ip-192-168-122-181.us-west-2.compute.internal - node group min size reached
I1221 18:53:26.044399       1 pre_filtering_processor.go:66] Skipping ip-192-168-24-13.us-west-2.compute.internal - node group min size reached
I1221 1

I am not sure how to help reproduce this issue without waiting on the cluster-autoscaler to fail - I am wondering if someone else might have faced this.

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 39
  • Comments: 34 (1 by maintainers)

Most upvoted comments

So I think I figured out the issue in my case: it came down to the node affinity label I was matching on, i.e. I was using the eksctl-applied alpha.eksctl.io/nodegroup-name label (which, as far as I can tell, the autoscaler's node template for a group scaled to 0 doesn't know about):

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: alpha.eksctl.io/nodegroup-name
            operator: In
            values:
            - my-nodegroup

and then I changed to using a custom node label (advertised to the autoscaler via an ASG tag, see the config below), and now the autoscaler is robust to scaling from 0 after restarts:

  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: my-label
            operator: In
            values:
            - my-nodegroup

The updated cluster config file now looks like this:

- availabilityZones:
  - us-west-2a
  desiredCapacity: 1
  iam:
    withAddonPolicies:
      autoScaler: true
      ebs: true
  instanceType: m5.8xlarge
  maxSize: 100
  minSize: 0
  name: my-nodegroup
  volumeSize: 100
  labels:
    my-label: "my-nodegroup"
  tags:
    k8s.io/cluster-autoscaler/node-template/label/my-label: "my-nodegroup"
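
For completeness, the same node-template tag convention can also advertise taints for a group that is scaled to 0, so that the autoscaler's template node matches what a real node in the group would look like. This is only a sketch (my nodegroups are untainted, and the dedicated taint below is hypothetical), but a tainted group would presumably need something like:

  tags:
    # label tag, as above
    k8s.io/cluster-autoscaler/node-template/label/my-label: "my-nodegroup"
    # hypothetical taint tag; the value is formatted as "<taint-value>:<effect>"
    k8s.io/cluster-autoscaler/node-template/taint/dedicated: "prefect:NoSchedule"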

Hope this helps

I seem to hit a similar issue when using the EBS CSI driver, while stateless deployments can scale fine from 0 even after restarting CA, so I don't know what we are doing differently with regard to that part. Using CA 1.20.0.

Here is a working scale-up (before restarting CA):

I0712 17:14:02.459471       1 static_autoscaler.go:229] Starting main loop
I0712 17:14:02.462369       1 filter_out_schedulable.go:65] Filtering out schedulables
I0712 17:14:02.462453       1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0712 17:14:02.462564       1 scheduler_binder.go:795] PersistentVolume "pvc-e9b59588-ece6-4149-9a2c-37535de4215d", Node "ip-10-30-101-8.ec2.internal" mismatch for Pod "godoc/godoc-0": no matching NodeSelectorTerms
I0712 17:14:02.462620       1 filter_out_schedulable.go:170] 0 pods were kept as unschedulable based on caching
I0712 17:14:02.462647       1 filter_out_schedulable.go:171] 0 pods marked as unschedulable can be scheduled.
I0712 17:14:02.462677       1 filter_out_schedulable.go:82] No schedulable pods
I0712 17:14:02.462710       1 klogx.go:86] Pod godoc/godoc-0 is unschedulable
I0712 17:14:02.462773       1 scale_up.go:364] Upcoming 0 nodes
I0712 17:14:02.462855       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-32bd4d51-1bff-6422-d9b1-5285154769e5, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 17:14:02.462915       1 scale_up.go:437] No pod can fit to eks-32bd4d51-1bff-6422-d9b1-5285154769e5
I0712 17:14:02.463021       1 scheduler_binder.go:775] Could not get a CSINode object for the node "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-6579891967708768566": csinode.storage.k8s.io "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-6579891967708768566" not found
I0712 17:14:02.463076       1 scheduler_binder.go:801] All bound volumes for Pod "godoc/godoc-0" match with Node "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-6579891967708768566"
I0712 17:14:02.464413       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-66bd4d51-1c5e-3129-797f-c01bc6cbc269, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 17:14:02.464560       1 scale_up.go:437] No pod can fit to eks-66bd4d51-1c5e-3129-797f-c01bc6cbc269
I0712 17:14:02.465044       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-d4bd4d51-1bfc-37c3-ec88-e7af9b8f3ae2, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 17:14:02.465142       1 scale_up.go:437] No pod can fit to eks-d4bd4d51-1bfc-37c3-ec88-e7af9b8f3ae2
I0712 17:14:02.465575       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-d8bd4d51-1bf9-ba76-c57f-1acad6704b09, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 17:14:02.465650       1 scale_up.go:437] No pod can fit to eks-d8bd4d51-1bf9-ba76-c57f-1acad6704b09
I0712 17:14:02.465790       1 scheduler_binder.go:775] Could not get a CSINode object for the node "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-8119258595974552162": csinode.storage.k8s.io "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-8119258595974552162" not found
I0712 17:14:02.465871       1 scheduler_binder.go:795] PersistentVolume "pvc-e9b59588-ece6-4149-9a2c-37535de4215d", Node "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-8119258595974552162" mismatch for Pod "godoc/godoc-0": no matching NodeSelectorTerms
I0712 17:14:02.465942       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5, predicate checking error: node(s) had volume node affinity conflict; predicateName=VolumeBinding; reasons: node(s) had volume node affinity conflict; debugInfo=
I0712 17:14:02.466007       1 scale_up.go:437] No pod can fit to eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5
I0712 17:14:02.466770       1 priority.go:118] Successfully loaded priority configuration from configmap.
I0712 17:14:02.466851       1 priority.go:166] priority expander: eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 chosen as the highest available
I0712 17:14:02.467137       1 scale_up.go:456] Best option to resize: eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8
I0712 17:14:02.467215       1 scale_up.go:460] Estimated 1 nodes needed in eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8
I0712 17:14:02.467654       1 scale_up.go:574] Final scale-up plan: [{eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 0->1 (max: 10)}]
I0712 17:14:02.467719       1 scale_up.go:663] Scale-up: setting group eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 size to 1
I0712 17:14:02.467844       1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"cluster-autoscaler-status", UID:"d9799aff-9f5a-4af9-a35e-2f7a1388a2e8", APIVersion:"v1", ResourceVersion:"53409654", FieldPath:""}): type: 'Normal' reason: 'ScaledUpGroup' Scale-up: setting group eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 size to 1
I0712 17:14:02.468714       1 auto_scaling_groups.go:219] Setting asg eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 size to 1
I0712 17:14:02.659496       1 eventing_scale_up_processor.go:47] Skipping event processing for unschedulable pods since there is a ScaleUp attempt this loop
I0712 17:14:02.659689       1 event_sink_logging_wrapper.go:48] Event(v1.ObjectReference{Kind:"Pod", Namespace:"godoc", Name:"godoc-0", UID:"db3b9a21-bce2-4f57-82b0-1443ea5e7be7", APIVersion:"v1", ResourceVersion:"53409662", FieldPath:""}): type: 'Normal' reason: 'TriggeredScaleUp' pod triggered scale-up: [{eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8 0->1 (max: 10)}]

And after the CA restart:

I0712 16:58:43.980356       1 static_autoscaler.go:229] Starting main loop
I0712 16:58:43.983440       1 filter_out_schedulable.go:65] Filtering out schedulables
I0712 16:58:43.983474       1 filter_out_schedulable.go:132] Filtered out 0 pods using hints
I0712 16:58:43.984013       1 scheduler_binder.go:795] PersistentVolume "pvc-e9b59588-ece6-4149-9a2c-37535de4215d", Node "ip-10-30-101-8.ec2.internal" mismatch for Pod "godoc/godoc-0": no matching NodeSelectorTerms
I0712 16:58:43.984061       1 filter_out_schedulable.go:170] 0 pods were kept as unschedulable based on caching
I0712 16:58:43.984355       1 filter_out_schedulable.go:171] 0 pods marked as unschedulable can be scheduled.
I0712 16:58:43.984665       1 filter_out_schedulable.go:82] No schedulable pods
I0712 16:58:43.985094       1 klogx.go:86] Pod godoc/godoc-0 is unschedulable
I0712 16:58:43.985549       1 scale_up.go:364] Upcoming 0 nodes
I0712 16:58:43.986417       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-32bd4d51-1bff-6422-d9b1-5285154769e5, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 16:58:43.986507       1 scale_up.go:437] No pod can fit to eks-32bd4d51-1bff-6422-d9b1-5285154769e5
I0712 16:58:43.986640       1 scheduler_binder.go:775] Could not get a CSINode object for the node "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-7288025506317767450": csinode.storage.k8s.io "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-7288025506317767450" not found
I0712 16:58:43.986716       1 scheduler_binder.go:795] PersistentVolume "pvc-e9b59588-ece6-4149-9a2c-37535de4215d", Node "template-node-for-eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8-7288025506317767450" mismatch for Pod "godoc/godoc-0": no matching NodeSelectorTerms
I0712 16:58:43.986786       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8, predicate checking error: node(s) had volume node affinity conflict; predicateName=VolumeBinding; reasons: node(s) had volume node affinity conflict; debugInfo=
I0712 16:58:43.986852       1 scale_up.go:437] No pod can fit to eks-5abd4d6f-b827-fb3f-f6c4-0feeb58a49f8
I0712 16:58:43.988550       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-66bd4d51-1c5e-3129-797f-c01bc6cbc269, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 16:58:43.988647       1 scale_up.go:437] No pod can fit to eks-66bd4d51-1c5e-3129-797f-c01bc6cbc269
I0712 16:58:43.988755       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-d4bd4d51-1bfc-37c3-ec88-e7af9b8f3ae2, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 16:58:43.988821       1 scale_up.go:437] No pod can fit to eks-d4bd4d51-1bfc-37c3-ec88-e7af9b8f3ae2
I0712 16:58:43.988940       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-d8bd4d51-1bf9-ba76-c57f-1acad6704b09, predicate checking error: node(s) didn't match Pod's node affinity; predicateName=NodeAffinity; reasons: node(s) didn't match Pod's node affinity; debugInfo=
I0712 16:58:43.989015       1 scale_up.go:437] No pod can fit to eks-d8bd4d51-1bf9-ba76-c57f-1acad6704b09
I0712 16:58:43.989162       1 scheduler_binder.go:775] Could not get a CSINode object for the node "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-7333524577874715059": csinode.storage.k8s.io "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-7333524577874715059" not found
I0712 16:58:43.989239       1 scheduler_binder.go:795] PersistentVolume "pvc-e9b59588-ece6-4149-9a2c-37535de4215d", Node "template-node-for-eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5-7333524577874715059" mismatch for Pod "godoc/godoc-0": no matching NodeSelectorTerms
I0712 16:58:43.989310       1 scale_up.go:288] Pod godoc-0 can't be scheduled on eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5, predicate checking error: node(s) had volume node affinity conflict; predicateName=VolumeBinding; reasons: node(s) had volume node affinity conflict; debugInfo=
I0712 16:58:43.989378       1 scale_up.go:437] No pod can fit to eks-e4bd4d6f-b7ac-ac76-03a6-8608708f9ed5

I can confirm that after setting the right tags on the ASGs, everything works perfectly even if the autoscaler restarts.
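
(For the volume node affinity case above, the relevant tag is presumably the zone label that the scheduler's volume binding check looks for on the template node. A sketch, assuming the EBS CSI driver's topology.ebs.csi.aws.com/zone label and a group pinned to us-east-1a; both the label key and the zone value are assumptions here:)

  tags:
    # advertise the zone label so a group at 0 can satisfy the PV's node affinity
    k8s.io/cluster-autoscaler/node-template/label/topology.ebs.csi.aws.com/zone: "us-east-1a"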

Same here. Manually setting "desired = 1" on the ASG fixes this until the next CA pod restart.

@anandnilkal - I just tried a tainted/tagged nodegroup. Scaling from 0 worked, then I forced a restart of the cluster-autoscaler pod (by deleting the pod) and scaling from 0 stopped working - so the same issue remains.

Do you follow all the steps mentioned here - How can I scale a node group to 0?

@marwan116 I will test this scenario and update here.

@anandnilkal - can you please provide an update on whether tainting and tagging the nodegroup still works with scaling from 0? (The reason I ask is that this issue doesn't immediately present itself, so it is hard to debug…)

When I was trying this out, I didn't have any taints on the node group, so I didn't think I needed to add any tags - maybe it is this edge case of untainted/untagged nodegroups that fails when scaling from 0?

@sowhatim thank you for responding. I was using multiple ASGs (auto-scaling groups) that span multiple AZs (availability zones) - I have fixed it so that each ASG only spans a single AZ … I will report back, and close this issue, if I stop encountering the problem within the next month.
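
(For reference, the pattern usually recommended when persistent volumes are involved is one nodegroup per AZ, with the cluster-autoscaler started with its --balance-similar-node-groups flag so the per-AZ groups are treated as equivalent at scale-up time. A sketch in the same eksctl config style as the earlier example; the names, zones and labels are placeholders:)

- availabilityZones:
  - us-west-2a
  name: my-nodegroup-us-west-2a
  minSize: 0
  maxSize: 100
  labels:
    my-label: "my-nodegroup"
  tags:
    k8s.io/cluster-autoscaler/node-template/label/my-label: "my-nodegroup"
- availabilityZones:
  - us-west-2b
  name: my-nodegroup-us-west-2b
  minSize: 0
  maxSize: 100
  labels:
    my-label: "my-nodegroup"
  tags:
    k8s.io/cluster-autoscaler/node-template/label/my-label: "my-nodegroup"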