kubernetes: Cannot delete pods associated with a former deployment
I am in a state where I have pods that are being automatically replaced even though there is no replication controller (RC) or Deployment associated with them.
I am using Kubernetes 1.2.0
[marcol@kube-master deployments]$ kubectl get rc
[marcol@kube-master deployments]$ kubectl get deployment
[marcol@kube-master deployments]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
xyz-deployment-3326926642-5iyvw 0/1 Running 3 9m
xyz-deployment-3326926642-d30ja 0/1 Running 5 10m
xyz-deployment-347725808-91yg0 0/1 Running 4 10m
xyz-deployment-347725808-xxk24 0/1 Running 5 10m
[marcol@kube-master deployments]$ kubectl delete pod xyz-deployment-3326926642-5iyvw
[marcol@kube-master deployments]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
xyz-deployment-3326926642-4vaq5 0/1 ContainerCreating 0 4s
xyz-deployment-3326926642-5iyvw 0/1 Terminating 5 12m
xyz-deployment-3326926642-d30ja 0/1 CrashLoopBackOff 6 14m
xyz-deployment-347725808-91yg0 0/1 Running 6 14m
xyz-deployment-347725808-xxk24 0/1 Running 7 14m
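One way to see which controller keeps recreating these pods is to inspect a pod's metadata; on Kubernetes of this vintage the creating controller should be recorded in the kubernetes.io/created-by annotation (newer releases use ownerReferences instead). A quick check, using one of the pod names above:
kubectl describe pod xyz-deployment-3326926642-4vaq5
kubectl get pod xyz-deployment-3326926642-4vaq5 -o yaml | grep -A 2 created-by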
The sequence that caused the issue for me was:
1- Create deployment xyz-deployment
2- Replace the deployment with an updated version of xyz-deployment
3- Delete the deployment before its pods are considered ready
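For reference, a minimal command sequence for those three steps (a sketch; it assumes the manifest below is saved as xyz-deployment.yaml, which is just an example file name):
kubectl create -f xyz-deployment.yaml
# edit the manifest (e.g. bump the build label or image tag), then:
kubectl replace -f xyz-deployment.yaml
# delete before the 60-second readiness delay has elapsed:
kubectl delete deployment xyz-deployment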
My deployment definition is the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata: {name: xyz-deployment}
spec:
  replicas: 2
  template:
    metadata:
      labels: {app: xyz, build: '12343242323'}
    spec:
      containers:
      - name: xyz
        image: 10.239.32.81:5000/xyz:0.2.1-SNAPSHOT
        imagePullPolicy: Always
        ports:
        - containerPort: 20040
        - containerPort: 25701
        readinessProbe:
          httpGet:
            path: /manage/health
            port: 20040
          initialDelaySeconds: 60
          timeoutSeconds: 3
        livenessProbe:
          httpGet:
            path: /manage/health
            port: 20040
          initialDelaySeconds: 60
          timeoutSeconds: 3
About this issue
- State: closed
- Created 8 years ago
- Comments: 19 (3 by maintainers)
kubectl delete deployment xyz-deployment. What I can add is that immediately before performing this action I had just issued a couple of kubectl update/replace commands that triggered rolling updates. My readinessCheck waits for 60 seconds, so none of the spawned containers were marked as ready yet.

Maybe you should take a look at kubectl get statefulsets or kubectl get jobs.

Yes, a deployment creates a ReplicaSet and not a ReplicationController. A ReplicaSet is like a ReplicationController, the main difference being the label selector. You can view replica sets using kubectl get rs. More details: http://kubernetes.io/docs/user-guide/replicasets/

This is what might have happened to you: when you deleted the deployment, you didn't delete the corresponding ReplicaSet, and hence the pods were being recreated.
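Following that advice, a sketch of the cleanup; the ReplicaSet names below are inferred from the pod names in the listing above (pod name minus its random suffix), so verify them with kubectl get rs first:
kubectl get rs
kubectl delete rs xyz-deployment-3326926642 xyz-deployment-347725808
kubectl get pods   # the remaining pods should terminate and not be recreated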
What is the command you used to delete your deployment?
Yes, you should be dealing with deployments.
kubectl delete deployment should delete the corresponding replica set and pods as well. Please file a bug if you have steps to reproduce the problem.
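For illustration, the expected cascading behaviour versus deliberately orphaning the pods (the orphaning flag is --cascade=false on older kubectl releases; newer versions spell it --cascade=orphan):
kubectl delete deployment xyz-deployment                   # also removes its ReplicaSets and pods
kubectl delete deployment xyz-deployment --cascade=false   # leaves the ReplicaSets and pods behind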
I also faced this issue. I used the command below to delete the deployment:
kubectl delete deployments DEPLOYMENT_NAME
but the pods were still being recreated, so I deleted the Docker images from the container registry and still faced the same issue. Then I cross-checked the ReplicaSets with
kubectl get rs
and finally edited the ReplicaSet's replica count from 1 to 0 with
kubectl edit rs REPLICASET_NAME
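A non-interactive alternative to editing the ReplicaSet is to scale it directly (REPLICASET_NAME is a placeholder, as above):
kubectl scale rs REPLICASET_NAME --replicas=0
# and once its pods are gone, remove it entirely:
kubectl delete rs REPLICASET_NAME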
Hi, thanks, this solution solved my problem.
Yes, we keep the previous ReplicaSet around in case you want to roll back to it, but it should be scaled down to 0, and when you delete the deployment, both ReplicaSets should be deleted. If that is not happening, then that is a bug. Please file another issue with exact steps to reproduce, including the YAML file that can be used to reproduce it and your kubectl and cluster versions.
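For completeness, the rollback path those retained ReplicaSets exist for (a sketch; revision numbers will differ per cluster):
kubectl rollout history deployment/xyz-deployment
kubectl rollout undo deployment/xyz-deployment                  # back to the previous revision
kubectl rollout undo deployment/xyz-deployment --to-revision=1  # or to a specific revision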
Sorry, I deleted the post and posted it to #26375 since I thought it might be more relevant there. But let me repost it here.
Hi,
I got the same issue here.
So even after I ran it, the pods are still there with a weird status. Now I am stuck and not sure how to fix this…