kubernetes: Cannot delete pods associated with a former deployment

I am in a state where I have pods that are being automatically recreated even though there is no ReplicationController or Deployment associated with them.

I am using Kubernetes 1.2.0

[marcol@kube-master deployments]$ kubectl get rc
[marcol@kube-master deployments]$ kubectl get deployment
[marcol@kube-master deployments]$ kubectl get pods
NAME                                            READY     STATUS    RESTARTS   AGE
xyz-deployment-3326926642-5iyvw   0/1       Running   3          9m
xyz-deployment-3326926642-d30ja   0/1       Running   5          10m
xyz-deployment-347725808-91yg0    0/1       Running   4          10m
xyz-deployment-347725808-xxk24    0/1       Running   5          10m

[marcol@kube-master deployments]$ kubectl delete pod xyz-deployment-3326926642-5iyvw
[marcol@kube-master deployments]$ kubectl get pods
NAME                                            READY     STATUS              RESTARTS   AGE
xyz-deployment-3326926642-4vaq5   0/1       ContainerCreating   0          4s
xyz-deployment-3326926642-5iyvw   0/1       Terminating         5          12m
xyz-deployment-3326926642-d30ja   0/1       CrashLoopBackOff    6          14m
xyz-deployment-347725808-91yg0    0/1       Running             6          14m
xyz-deployment-347725808-xxk24    0/1       Running             7          14m

The sequence that caused the issue for me was:

1. Create deployment xyz-deployment
2. Replace the deployment with an updated version of xyz-deployment
3. Delete the deployment before its pods are considered ready
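For reference, a rough sketch of those three steps as kubectl commands (the manifest file names here are hypothetical, not taken from the report):

# 1. Create the deployment from its manifest (file name is an assumption)
kubectl create -f xyz-deployment.yaml

# 2. Replace it with an updated manifest, triggering a rolling update
kubectl replace -f xyz-deployment-updated.yaml

# 3. Delete the deployment while the new pods are still not ready
kubectl delete deployment xyz-deployment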

My deployment definition is the following:

apiVersion: extensions/v1beta1
kind: Deployment
metadata: {name: xyz-deployment}
spec:
  replicas: 2
  template:
    metadata:
      labels: {app: xyz, build: '12343242323'}
    spec:
      containers:
      - name: xyz
        image: 10.239.32.81:5000/xyz:0.2.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 20040
        - containerPort: 25701
        readinessProbe:
          httpGet:
            path: /manage/health
            port: 20040
          initialDelaySeconds: 60
          timeoutSeconds: 3
        livenessProbe:
          httpGet:
            path: /manage/health
            port: 20040
          initialDelaySeconds: 60
          timeoutSeconds: 3
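Since the template labels are what tie the Deployment, its ReplicaSets, and its pods together, a quick way to see everything carrying those labels is a label-filtered query; a minimal check, assuming the app=xyz label from the manifest above:

# List the replica sets and pods that carry the app=xyz label
kubectl get rs,pods -l app=xyz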

About this issue

  • State: closed
  • Created 8 years ago
  • Comments: 19 (3 by maintainers)

Most upvoted comments

kubectl delete deployment xyz-deployment

What I can add is that immediately before performing this action I had just issued a couple of kubectl update/replace commands that triggered rolling updates. My readiness probe waits 60 seconds, so none of the spawned containers had been marked as ready yet.
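One way to avoid deleting in the middle of a rollout is to wait for it to finish first. A hedged sketch using kubectl rollout status, which may require a newer kubectl than the 1.2 used here:

# Block until the rolling update completes (or fails), then delete
kubectl rollout status deployment/xyz-deployment
kubectl delete deployment xyz-deployment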

Maybe you should take a look at kubectl get statefulsets or kubectl get jobs.

Yes, a Deployment creates a ReplicaSet, not a ReplicationController. A ReplicaSet is like a ReplicationController, the main difference being its label selector. You can view replica sets using kubectl get rs. More details: http://kubernetes.io/docs/user-guide/replicasets/
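A quick way to check whether a leftover ReplicaSet is still managing these pods; the name below is inferred from the hash in the pod names above, so treat it as an assumption:

# List all replica sets; the hash in the pod names (e.g. 3326926642) should match one of them
kubectl get rs

# Inspect a specific replica set to see its selector and desired replica count
kubectl describe rs xyz-deployment-3326926642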

This is what might have happened to you: when you deleted the deployment, you didn't delete the corresponding ReplicaSet, and hence the pods were being recreated.
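If that is the case, deleting the orphaned ReplicaSets (which also removes the pods they manage) should stop the recreation. A minimal sketch, assuming the replica set names match the pod-name prefixes shown earlier:

# Deleting a replica set also deletes the pods it manages
kubectl delete rs xyz-deployment-3326926642 xyz-deployment-347725808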

What is the command you used to delete your deployment?

Yes, you should be dealing with deployments.

kubectl delete deployment should delete the corresponding replica set and pods as well. Please file a bug if you have steps to reproduce the problem.
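A quick way to verify the cascade actually happened (note that some kubectl versions expose a --cascade flag; deleting with cascading disabled would deliberately leave the replica sets and pods behind):

kubectl delete deployment xyz-deployment

# All of these should come back empty once the cascade completes
kubectl get deployment
kubectl get rs
kubectl get pods -l app=xyz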

I also faced this issue. I used the command below to delete the deployment:

kubectl delete deployments DEPLOYMENT_NAME

but the pods were still being recreated, so I deleted the Docker images from the container registry and still faced the same issue. So I cross-checked the ReplicaSets with:

kubectl get rs

and finally edited the ReplicaSet's replicas from 1 to 0:

kubectl edit rs REPLICASET_NAME
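A non-interactive alternative to editing the replica count is to scale the leftover replica set down, or simply delete it; the placeholder name is the same as above:

# Scale the leftover replica set to zero so it stops recreating pods
kubectl scale rs REPLICASET_NAME --replicas=0

# Or remove it entirely
kubectl delete rs REPLICASET_NAME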

kubectl get rs

Hi, thanks, this solution solved my problem.

Yes, we keep the previous ReplicaSet around in case you want to roll back to it, but it should be scaled down to 0, and when you delete the deployment, both ReplicaSets should be deleted. If that is not happening, then that is a bug. Please file another issue with exact steps to repro. Please include the YAML file which can be used to repro, and your kubectl and cluster version.
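For context, those old ReplicaSets back the deployment's revision history; a hedged sketch of how that is normally used (the revision number is illustrative):

# List the deployment's recorded revisions (each is backed by a replica set)
kubectl rollout history deployment/xyz-deployment

# Roll back to an earlier revision; its replica set is scaled back up
kubectl rollout undo deployment/xyz-deployment --to-revision=1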

Sorry, I deleted the post and posted it to #26375 as I thought it might be more relevant there. But let me repost it here.

Hi,

I got the same issue here.

kubectl get deployment,svc,pods,pvc,rc,rs ruby-2.1.5
NAME             CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.19.240.1   <none>        443/TCP   20h

NAME                             READY     STATUS        RESTARTS   AGE
po/cache-3839532833-4n8nn        0/1       Unknown       0          1d
po/cache-3839532833-m09vj        0/1       Unknown       2          2d
po/cache-3839532833-qzp26        1/1       Terminating   6          20h
po/db-441726071-2r3dr            0/1       Unknown       3          2d
po/db-441726071-jcp11            0/1       Unknown       0          1d
po/db-441726071-khjhz            1/1       Terminating   6          20h
po/nginx-1104661453-nzvbb        0/1       Unknown       2          2d
po/nginx-1104661453-phdlp        1/1       Terminating   7          20h
po/nginx-1104661453-qljdh        0/1       Unknown       0          1d
po/php-2747015390-thg50          1/1       Terminating   6          20h
po/php-2820743390-lxm1w          0/1       Unknown       0          1d
po/php-2820743390-ngx5v          0/1       Unknown       0          2d
po/swagger-ui-2304101130-103f1   0/1       Unknown       0          2d

NAME             STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
pvc/php-claim0   Pending                                                     6d

so even after I ran

kubectl delete pod --all                  ruby-2.1.5
pod "cache-3839532833-4n8nn" deleted
pod "cache-3839532833-m09vj" deleted
pod "cache-3839532833-qzp26" deleted
pod "db-441726071-2r3dr" deleted
pod "db-441726071-jcp11" deleted
pod "db-441726071-khjhz" deleted
pod "nginx-1104661453-nzvbb" deleted
pod "nginx-1104661453-phdlp" deleted
pod "nginx-1104661453-qljdh" deleted
pod "php-2747015390-thg50" deleted
pod "php-2820743390-lxm1w" deleted
pod "php-2820743390-ngx5v" deleted
pod "swagger-ui-2304101130-103f1" deleted

The pods are still there and with a weird status. Now I am stuck and not sure how to fix this…
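Pods stuck in Unknown or Terminating usually mean the kubelet on their node is no longer reporting back. As a last resort, recent kubectl versions can remove such a pod object from the API server with a force delete; a hedged sketch using one of the pod names above (add -n NAMESPACE if these live in a dedicated namespace):

# Force removal of a pod whose node is unreachable; use with care
kubectl delete pod cache-3839532833-4n8nn --grace-period=0 --force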