autoscaler: Pending pod triggers new node instead of evicting a pod with lower priority

Hi all, yesterday we started testing cluster-autoscaler with priorityClasses and podPriority so that we always have some extra capacity available in our cluster. However, whenever a new pod comes up and sits in Pending state, it triggers a new node via cluster-autoscaler instead of preempting a pod from the “paused” deployment that runs with a lower priorityClass.

This is the configuration I added to my cluster:

    authorizationMode: RBAC
    authorizationRbacSuperUser: admin
    runtimeConfig:
      scheduling.k8s.io/v1alpha1: "true"
      admissionregistration.k8s.io/v1beta1: "true"
      autoscaling/v2beta1: "true"

    kubelet:
      featureGates:
        PodPriority: "true"

    kubeAPIServer:
      featureGates:
        PodPriority: "true"

    kubeAPIServer:
      admissionControl:
      - Priority
      - NamespaceLifecycle
      - LimitRanger
      - ServiceAccount
      - PersistentVolumeLabel
      - DefaultStorageClass
      - ResourceQuota
      - DefaultTolerationSeconds

    kubeControllerManager:
      horizontalPodAutoscalerDownscaleDelay: 1h0m0s
      horizontalPodAutoscalerUseRestClients: true

As far as I can see on my masters, these features seem to be enabled:

    --feature-gates=PodPriority=true
    --enable-admission-plugins=Priority,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota,DefaultTolerationSeconds,ValidatingAdmissionWebhook,NodeRestriction,ResourceQuota

Is there anything else that I’m missing in my config? The overprovisioning deployment is the same one you can find in the cluster-autoscaler FAQ.
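For context, the setup looks roughly like this, a minimal sketch of the overprovisioning pattern (the class name, priority value, image, and resource requests are placeholders; the FAQ’s actual manifest may differ):

    apiVersion: scheduling.k8s.io/v1alpha1
    kind: PriorityClass
    metadata:
      name: overprovisioning
    value: -1
    globalDefault: false
    description: "Low priority class for the pause pods that reserve spare capacity."
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: overprovisioning
    spec:
      replicas: 2
      selector:
        matchLabels:
          run: overprovisioning
      template:
        metadata:
          labels:
            run: overprovisioning
        spec:
          priorityClassName: overprovisioning
          containers:
          - name: reserve-resources
            image: k8s.gcr.io/pause
            resources:
              requests:
                # Size these requests to the amount of headroom you want kept free.
                cpu: "1"
                memory: 200Mi

The idea is that the pause pods hold real resource requests at a very low priority, so a real pending pod should preempt them instead of waiting for a new node.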

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 23 (4 by maintainers)

Most upvoted comments

I found out how to make it work. There are a couple more parameters to set than what the cluster-autoscaler docs describe. This is the right configuration:

    kubeAPIServer:
      runtimeConfig:
        scheduling.k8s.io/v1alpha1: "true"
        admissionregistration.k8s.io/v1beta1: "true"
        autoscaling/v2beta1: "true"
      admissionControl:
      - Priority
      featureGates:
        PodPriority: "true"
    kubelet:
      featureGates:
        PodPriority: "true"
    kubeScheduler:
      featureGates:
        PodPriority: "true"
    kubeControllerManager:
      horizontalPodAutoscalerUseRestClients: true
      featureGates:
        PodPriority: "true"

That will enable PodPriority and preemption in your cluster.
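For completeness, the regular workloads also need a priority higher than the overprovisioning pods for preemption to kick in. A minimal sketch of such a class (the name and value are illustrative, not from this thread):

    apiVersion: scheduling.k8s.io/v1alpha1
    kind: PriorityClass
    metadata:
      name: default-workload
    value: 1000
    globalDefault: true
    description: "Default priority for regular workloads, higher than the overprovisioning class."

With globalDefault: true, any pod created without an explicit priorityClassName gets this value, so a pending workload pod can evict the low-priority pause pods instead of triggering a new node.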

Thank you for helping me!

You’re welcome. There is a Helm chart available to overprovision the cluster and create a default priority class here: https://github.com/helm/charts/tree/master/stable/cluster-overprovisioner

Give it a try!

Updated. Thanks @aarongorka

@aarongorka thanks for catching that. I just updated my comment as well.