helm: Helm 'delete' doesn't delete PVCs

When Helm installs a chart that includes a StatefulSet whose volumeClaimTemplates generate a new PVC for each replica, Helm loses track of those PVCs.

As a consequence, once the chart is removed (whether with helm delete my-release or helm delete --purge my-release), every PVC that was created is left on the cluster.

Helm should take care of those resources to keep the cluster clean.
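For illustration, here is a minimal, hypothetical StatefulSet of the kind described above (names, image and sizes are placeholders). The PVC objects are created from volumeClaimTemplates by the StatefulSet controller, one per replica (e.g. data-my-release-0, data-my-release-1); they are not rendered by the chart, so Helm does not record them as release resources:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-release
spec:
  serviceName: my-release
  replicas: 2
  selector:
    matchLabels:
      app: my-release
  template:
    metadata:
      labels:
        app: my-release
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: data
              mountPath: /data
  # One PVC per replica is created from this template by the controller,
  # not by Helm, which is why helm delete never touches them.
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi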

Output of helm version: 2.12.1 (latest)

Output of kubectl version: 1.12.4

Cloud Provider/Platform (AKS, GKE, Minikube etc.): All of them


Most upvoted comments

Just a funny consideration: I accidentally ran helm uninstall today against the wrong Kubernetes context, and this issue being NOT fixed SAVED MY LIFE 😃 I was able to run helm install and all the stateful services were back, with no data loss! So please, as long as the uninstall behaviour remains as it is now, do not fix this!

would be nice to just honor PV’s reclaim policy 😃

Personally I think that the Helm behaviour should be in line with the corresponding Kubernetes behaviour in this regard. If you define PVCs and PVs with kubectl apply ... and then run kubectl delete ..., the previously defined PVCs and PVs get deleted again. In contrast, if you run helm install ... and then helm del --purge ..., the previously defined PVCs and PVs stay intact. Although these commands are not comparable one to one, I would opt for also having an intuitive behaviour available with Helm.
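A rough sketch of that contrast (manifest, chart and release names are placeholders; the label query assumes the chart sets app.kubernetes.io/instance on its claims, as in the workaround further down):

# Plain kubectl: resources defined in a manifest are removed again on delete.
kubectl apply -f pvc.yaml
kubectl delete -f pvc.yaml

# Helm 2: the release is removed, but PVCs generated from volumeClaimTemplates remain.
helm install --name my-release ./chart
helm delete --purge my-release
kubectl get pvc -l app.kubernetes.io/instance=my-release   # the generated PVCs are still there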

While I don’t think it should be deleted by default by a helm delete --purge, there should definitely exist some flag to clean up generated kube objects like those PVCs.

Is there any good plan for this question now? Why isn’t anyone answering?

would be nice to just honor PV’s reclaim policy 😃

Hi. This is my workaround that I use in helmfile:

  # Remove PVCs whose bound PV has reclaim policy Delete.
  - events: ["postuninstall"]
    command: "bash"
    showlogs: true
    args: ["-c", "kubectl -n {{`{{.Release.Namespace}}`}} get pvc -l app.kubernetes.io/instance={{`{{.Release.Name}}`}} -o jsonpath='{.items..spec.volumeName}' | xargs -n1 -d ' ' -I{} -r kubectl get pv {} -o json | jq -r 'select(.spec.persistentVolumeReclaimPolicy==\"Delete\")|.spec.claimRef.name' | xargs -I{} -r kubectl -n {{`{{.Release.Namespace}}`}} delete pvc {}"]

You can also use the example below and wrap it as a bash function (a sketch of such a function follows further down):

kubectl -n mariadb get pvc -l app.kubernetes.io/instance=common-galera -o jsonpath='{.items..spec.volumeName}' \
  | KUBECONFIG=/home/dev/.kube/path-to-your-config xargs -n1 -d ' ' -I{} -r kubectl get pv {} -o json \
  | jq -r 'select(.spec.persistentVolumeReclaimPolicy=="Delete")|.spec.claimRef.name' \
  | KUBECONFIG=/home/dev/.kube/path-to-your-config xargs -I{} -r kubectl -n mariadb delete pvc {}

The KUBECONFIG is optional.
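For example, the same pipeline could be wrapped roughly like this (purge_release_pvcs is just an illustrative name; pass the namespace and release name as arguments):

# Delete the PVCs of a release whose bound PV has reclaim policy "Delete".
# Usage: purge_release_pvcs <namespace> <release-name>
purge_release_pvcs() {
  local ns="$1" release="$2"
  kubectl -n "$ns" get pvc -l "app.kubernetes.io/instance=$release" \
      -o jsonpath='{.items..spec.volumeName}' \
    | xargs -n1 -d ' ' -I{} -r kubectl get pv {} -o json \
    | jq -r 'select(.spec.persistentVolumeReclaimPolicy=="Delete")|.spec.claimRef.name' \
    | xargs -I{} -r kubectl -n "$ns" delete pvc {}
}

# e.g. the example above becomes:
purge_release_pvcs mariadb common-galera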

Is there any good plan for this question now? Why isn’t anyone answering?

What do you propose as a solution? 😃

At this point it appears there's consensus that Helm should follow the same behaviour as kubectl here. If someone wants to write a PR against Helm 3 to remove the PVCs on helm delete, we'd appreciate it.

Users that rely on the PVC staying after a helm delete can use the helm.sh/resource-policy: keep annotation as we recommend for other resources users wish to retain.
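For reference, the annotation is set on a resource defined in the chart itself, e.g. a hypothetical chart-managed PVC (name and size are placeholders):

# templates/data-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data
  annotations:
    "helm.sh/resource-policy": keep   # Helm skips this resource on delete/uninstall
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 8Gi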

Please feel free to work on something. I have way too much on my plate at the moment to look at this. But we’d welcome contributions to look at for Helm 4.

At this point we cannot change the behaviour here until Helm 4. Tagging it as a proposal we can look at for Helm 4.

@bacongobbler - Not sure if you are working on this project, but do we have any solution for this?

I am installing Prometheus and attaching PVs to a few pods, and I want helm delete to delete the PVs along with the chart. Currently, the volumes are not getting deleted.

Kinda the reason why we implemented it that way. ^

Interesting contrast @RobertoDonPedro, would the syntax --purge-pvs=true and --purge-pvcs=true be a good solution for now?

Agreed. PVCs are one of those resources that don't get cleaned up by helm delete --purge, and I believe that's been the intended behaviour since v2.0.0, in order to protect that data in case an operator needed to purge and re-install. A feature flag to delete all of the resources managed by a Helm release could be useful in certain cases.