keda: RollingUpdate strategy is not respected with regards to maxUnavailable

Report

My Deployment's RollingUpdate strategy is not respected with regard to its maxUnavailable: 0 configuration when updating a Deployment that has a ScaledObject attached. The problem does not occur when using plain HorizontalPodAutoscaler objects.

Expected Behavior

When setting up a Deployment with a RollingUpdate strategy and maxUnavailable: 0, I expect to never have fewer running & available Pods than the currently desired replica count while updating the Deployment (whether changing a Pod's configuration, rolling out a new image version, etc.). This is the case when using a "plain" k8s HorizontalPodAutoscaler object.
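
For reference, the "plain" HPA comparison looks roughly like the sketch below. This is only an illustration (the name nginx-hpa is made up), using the autoscaling/v2beta2 API available on Kubernetes 1.19; with an HPA like this and no ScaledObject attached, maxUnavailable: 0 is honored during rolling updates as described above.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa               # illustrative name, not an object from my cluster
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment      # the Deployment from the reproduction steps below
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 30  # same 30% CPU target as the ScaledObject below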

Actual Behavior

With a Deployment set up with a RollingUpdate strategy and maxUnavailable: 0, attaching a ScaledObject to that Deployment causes the strategy to no longer be respected: whenever we perform a rolling update, we lose all but one Pod. The HPA then takes care of scaling back up to the desired replica count.

Steps to Reproduce the Problem

  1. Have a k8s Deployment with a RollingUpdate strategy defined, e.g.:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%        # up to 10% extra Pods may be created during a rollout
      maxUnavailable: 0    # no existing Pod should become unavailable during a rollout
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.1
        ports:
        - containerPort: 80
  2. Attach a ScaledObject to it, for example a CPU scaler (see the note on KEDA's managed HPA after this list):
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject
  namespace: default
spec:
  minReplicaCount: 3
  maxReplicaCount: 10
  scaleTargetRef:
    name: nginx-deployment
  triggers:
  - type: cpu
    metadata:
      type: Utilization
      value: "30"
  3. Perform an update of that Deployment, for example by changing its image version:
- name: nginx
  image: nginx:1.14.2
  4. We expect to always have at least the HPA's currently desired number of replicas running & available while performing such an update. What is actually observed is that we lose all but 1 Pod when updating.
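
A note on step 2 (referenced there): KEDA creates and manages an HPA for the ScaledObject, by default named keda-hpa-<ScaledObject name>, and it is this managed HPA that holds the desired replica count mentioned in step 4. The sketch below is a rough, non-authoritative approximation of what it corresponds to in this reproduction; the real object is created and owned by KEDA and should not be applied manually.
# Approximate shape of the HPA that KEDA manages for cpu-scaledobject (not exact).
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-cpu-scaledobject   # default naming: keda-hpa-<ScaledObject name>
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  # metrics: the same 30% CPU Utilization target as the plain HPA sketch under Expected Behavior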

Logs from KEDA operator

No response

KEDA Version

2.6.1

Kubernetes Version

1.19

Platform

Amazon Web Services

Scaler Details

Prometheus

Anything else?

No response

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 19 (11 by maintainers)

Most upvoted comments

No, the HPA is not recreated. In fact, you can check the age of the HPA and double-check that its counter doesn't restart during the release process.
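
To make that check concrete: assuming KEDA's default naming, the managed HPA in the reproduction above would be keda-hpa-cpu-scaledobject, and its metadata.creationTimestamp (from which the object's age is derived) and metadata.uid should be identical before and after a rollout if the HPA is never recreated. A hypothetical excerpt with placeholder values:
# Hypothetical excerpt of the KEDA-managed HPA; values below are placeholders, not real output.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: keda-hpa-cpu-scaledobject    # default naming: keda-hpa-<ScaledObject name>
  namespace: default
  creationTimestamp: "<unchanged across the rollout if the HPA is not recreated>"
  uid: "<unchanged across the rollout if the HPA is not recreated>"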