kubernetes: Deployment with recreate strategy does not remove old replica set

I want to replace an application by running a new deployment. The image tag, however, does not change, although the image itself might have (that’s the reason for imagePullPolicy: Always). For that reason I’ve added an extra label that changes on every deployment.


---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: gocd-agent-${AGENT_NAME}
spec:
  replicas: ${AGENT_REPLICAS}
  strategy:
    type: Recreate
  revisionHistoryLimit: 25
  selector:
    matchLabels:
      app: gocd-agent-${AGENT_NAME}
      release: v${VERSION}
  template:
    metadata:
      labels:
        app: gocd-agent-${AGENT_NAME}
        release: v${VERSION}
    spec:
      containers:
      - name: gocd-agent
        image: ${AGENT_IMAGE}
        imagePullPolicy: Always
        volumeMounts:
        - name: ssh-keys
          mountPath: /var/go/.ssh
        - name: gcloud-keys
          mountPath: /var/go/.gcloud
        - name: docker-sock
          mountPath: /var/run/docker.sock
        - name: docker-bin
          mountPath: /usr/bin/docker
        env:
        - name: "GO_SERVER"
          value: "gocd-server"
        - name: "AGENT_KEY"
          value: "something"
        - name: "AGENT_RESOURCES"
          value: "${AGENT_RESOURCES}"
        - name: "DOCKER_GID_ON_HOST"
          value: "107"
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 10m
            memory: 512Mi
      volumes:
      - name: ssh-keys
        secret:
          secretName: ssh-keys
      - name: gcloud-keys
        secret:
          secretName: gcloud-keys
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: docker-bin
        hostPath:
          path: /usr/bin/docker

When applying this, the deployment is accepted and a new replica set is created. However, the old replica set stays alive as well.

I apply it with the following command in order to replace the variables. Take note of how the version is set to the current date and time.

AGENT_NAME=$AGENT_NAME AGENT_REPLICAS=$AGENT_REPLICAS AGENT_IMAGE=$AGENT_IMAGE AGENT_RESOURCES=$AGENT_RESOURCES VERSION=$(date +"%Y%m%d%H%M") sed -r 's/^(.*)(\$\{[A-Z_]+\})/echo "\1\2"/e' ./manifests/gocd-agent-deployment.yaml | kubectl apply -f - --record
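
As an aside for anyone reusing this: envsubst from GNU gettext should be able to do the same substitution without relying on sed’s GNU-only e flag; an untested sketch, assuming the variables are exported:

    # Untested alternative to the sed trick above: envsubst replaces ${VAR}
    # references with the values of exported environment variables.
    export AGENT_NAME AGENT_REPLICAS AGENT_IMAGE AGENT_RESOURCES
    export VERSION=$(date +"%Y%m%d%H%M")
    envsubst < ./manifests/gocd-agent-deployment.yaml | kubectl apply -f - --record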

The deployment description shows a mismatch between the labels and the selector.

Name:                   gocd-agent-jdk8
Namespace:              apex
CreationTimestamp:      Thu, 14 Apr 2016 21:07:45 +0200
Labels:                 app=gocd-agent-jdk8,release=v201604142107
Selector:               app=gocd-agent-jdk8,release=v201604151548
Replicas:               1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           Recreate
MinReadySeconds:        0
OldReplicaSets:         <none>
NewReplicaSet:          gocd-agent-jdk8-70801407 (1/1 replicas created)
Events:
  FirstSeen     LastSeen        Count   From                            SubobjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  12m           12m             1       {deployment-controller }                        Normal          ScalingReplicaSet       Scaled up replica set gocd-agent-jdk8-70801407 to 1

And the rollout history acts like the second deployment never happened.

REVISION        CHANGE-CAUSE
1               kubectl apply -f - --record

Am I misunderstanding the Recreate strategy? Or is this a bug?

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 1
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Thought this might help anyone looking for how to remove inactive replica sets (spec.replicas set to zero by the deployment). You may want to add something to filter for replica sets actually managed by a deployment, as this might otherwise end up destroying manually created replica sets:

    kubectl get --all-namespaces rs -o json|jq -r '.items[] | select(.spec.replicas | contains(0)) | "kubectl delete rs --namespace=\(.metadata.namespace) \(.metadata.name)"'

This will not delete anything; it simply outputs a list of the kubectl commands required to perform the cleanup.
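
A variation on the same idea that additionally filters for replica sets owned by a Deployment (a sketch, assuming your cluster sets .metadata.ownerReferences on replica sets):

    kubectl get --all-namespaces rs -o json \
      | jq -r '.items[]
          | select(.spec.replicas == 0)
          | select(any(.metadata.ownerReferences[]?; .kind == "Deployment"))
          | "kubectl delete rs --namespace=\(.metadata.namespace) \(.metadata.name)"'

Like the original, this only prints the delete commands instead of running them.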

@JorritSalverda all old RSes will be kept by default unless you specify the deployment’s .spec.revisionHistoryLimit.

You don’t need to run any script to clean up old replica sets - just set .spec.revisionHistoryLimit in the Deployment to the number of old replica sets you want to retain.

http://kubernetes.io/docs/user-guide/deployments/#clean-up-policy
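
For a deployment that already exists, the field can also be set in place; a sketch using the deployment name and namespace from the question, keeping e.g. only the 3 most recent old replica sets:

    # Set revisionHistoryLimit on a running deployment via a merge patch
    kubectl patch deployment gocd-agent-jdk8 --namespace=apex \
      --patch '{"spec": {"revisionHistoryLimit": 3}}'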

I think @JorritSalverda specifically means that old RSes are kept running instead of being scaled down once a new version is deployed.

This seems to be “working as intended”: because you change the selector, the old pods no longer match it and therefore can’t be scaled down.

From what I understand, an RS is “updated” when its “hash” differs, i.e. when anything in the config is changed. So, instead of updating the .spec.selector labels, you could change a .metadata.labels label, as those labels don’t influence which pods are attached to the RS.
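
Applied to the manifest in the question, one way to read that is to keep the selector stable and let only the pod template label change between deployments; a rough, untested sketch of the relevant part of the spec:

    selector:
      matchLabels:
        app: gocd-agent-${AGENT_NAME}    # stable selector, no release label here
    template:
      metadata:
        labels:
          app: gocd-agent-${AGENT_NAME}
          release: v${VERSION}           # changing this still produces a new replica set

Because the selector no longer changes, the deployment keeps ownership of the old replica set and scales it down as part of the Recreate rollout.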

I am guessing it is because you are using different selectors?

@JorritSalverda all old RSes will be kept by default unless you specify the deployment’s .spec.revisionHistoryLimit.

It helped me, thank you very much!!

If you find this surprising, please complain at #23597.

Yep, I’m doing that but wrote that script to clean up the ones that had already been created before.
