kubernetes: Deleting a deployment does not delete replicaset
The following is a standard deployment (note that `containerPort` must be an integer, not the quoted string the original manifest used):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{name}}
  namespace: {{namespace}}
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: {{name}}
      name: {{name}}
    spec:
      containers:
      - name: redis
        image: jkosgei/redis
        env:
        - name: PASS
          value: {{ auth_password }}
        imagePullPolicy: Always
        ports:
        - name: redis
          containerPort: 6379
        volumeMounts:
        - name: gluster
          mountPath: /data
          subPath: {{app_id}}/app
      volumes:
      - name: gluster
        persistentVolumeClaim:
          claimName: "storage-{{name}}-{{namespace}}"
```
According to the docs at http://kubernetes.io/docs/user-guide/deployments/ , creating the above will create a ReplicaSet with a redis pod under it. In this sense, creating the deployment creates a ReplicaSet. However, deleting the deployment via the API does not delete the ReplicaSet it created: the pods stay up and continue running.
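A minimal sketch of the reproduction, assuming the API server is reachable on the insecure local port (e.g. via `kubectl proxy`); the namespace and deployment name are placeholders for whatever values the manifest was rendered with:

```shell
#!/bin/sh
# Hypothetical values; substitute the name/namespace from your manifest.
NS=default
NAME=my-redis

# Delete the deployment object directly through the REST API
# (extensions/v1beta1 was the deployment API group at the time).
curl -X DELETE \
  "http://localhost:8001/apis/extensions/v1beta1/namespaces/$NS/deployments/$NAME"

# The replica set and its pods survive the deletion:
kubectl get rs,pods --namespace "$NS"
```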
About this issue
- State: closed
- Created 8 years ago
- Comments: 15 (6 by maintainers)
+1. Looking forward to seeing this issue being addressed soon.
No, you can’t. The name isn’t random: the deployment controller appends the hash of the pod template used by a replica set to the deployment name to form the replica set name. As a workaround, you could set a unique annotation on the deployment; that annotation is inherited by the child replica sets. Then GET all replica sets in the namespace, filter out all but those with that annotation, and DELETE them.
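The workaround above could be sketched roughly as follows. This is an assumption-laden example: the annotation key `owner`, the deployment name, and the namespace are all hypothetical, and annotations are filtered client-side because they cannot be used as server-side selectors:

```shell
#!/bin/sh
# Hypothetical values.
NS=default
DEPLOYMENT=my-redis

# Delete the deployment without client-side cascading, leaving the
# replica sets behind (mirroring the API behaviour described above).
kubectl delete deployment "$DEPLOYMENT" --namespace "$NS" --cascade=false

# Walk every replica set in the namespace and delete the ones that
# inherited the deployment's "owner" annotation.
kubectl get rs --namespace "$NS" -o name |
while read -r rs; do
  owner=$(kubectl get "$rs" --namespace "$NS" \
    -o jsonpath='{.metadata.annotations.owner}')
  if [ "$owner" = "$DEPLOYMENT" ]; then
    kubectl delete "$rs" --namespace "$NS"
  fi
done
```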
I believe we will get to fixing replica set garbage collection soon-ish. I hope :)
If you use `kubectl delete` it will work, due to the implicit `--cascade` flag. The API doesn’t support cascading deletion yet, and there is ongoing work to garbage collect it. @kubernetes/deployment @kargakis
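For contrast with the raw API call, the `kubectl` path could look like this (deployment name and namespace are placeholders); `kubectl` performs the cascade client-side by cleaning up the deployment's replica sets and pods before removing the deployment object itself:

```shell
#!/bin/sh
# Cascading delete via kubectl: --cascade defaults to true, so the
# replica sets and pods are removed along with the deployment.
kubectl delete deployment my-redis --namespace default

# Afterwards, no replica sets or pods for the deployment should remain:
kubectl get rs,pods --namespace default
```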