kubernetes: There is no way to force delete Namespaces with invalid finalizers
What happened:
There is no way to delete Namespaces stuck in Terminating if they have finalizers which will never be run.
What you expected to happen:
There is a way to delete Namespaces stuck in Terminating.
How to reproduce it (as minimally and precisely as possible):
Begin with any Kubernetes cluster.
kubectl apply -f this YAML:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers:
  - foregroundDeletion
```
kubectl delete ns delete-me
It is not possible to delete delete-me.
The only workaround I’ve found is to destroy and recreate the entire cluster.
Anything else we need to know?:
Stack Overflow question with everything I’ve tried.
Environment:
- Kubernetes version (use kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.8-gke.6", GitCommit:"394ee507d00f15a63cef577a14026096c310698e", GitTreeState:"clean", BuildDate:"2019-03-30T19:31:43Z", GoVersion:"go1.10.8b4", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: GKE
- OS (e.g. cat /etc/os-release):
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux buster/sid"
NAME="Debian GNU/Linux"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
- Kernel (e.g. uname -a):
Linux [hostname] 4.19.28-2rodete1-amd64 #1 SMP Debian 4.19.28-2rodete1 (2019-03-18) x86_64 GNU/Linux
About this issue
- State: closed
- Created 5 years ago
- Comments: 34 (16 by maintainers)
sorry, PUT.
You essentially have to mimic what a controller that was sending PUT requests to the finalize subresource would do to drop those spec.finalizers.
Create with a spec.finalizer:
Request deletion:
PUT the namespace without finalizers to the /finalize subresource (kubectl proxy just lets you easily use curl against the subresource). And now the delete can succeed.
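The steps above can be sketched as follows (assumptions: the stuck namespace is named delete-me, jq is installed, and kubectl proxy serves the API on its default 127.0.0.1:8001):

```shell
# Dump the namespace, clear spec.finalizers with jq, and save the result
# for the finalize call.
kubectl get namespace delete-me -o json \
  | jq '.spec.finalizers = []' \
  > /tmp/delete-me.json

# kubectl proxy exposes the API server locally so plain curl can reach
# the /finalize subresource without auth headers.
kubectl proxy &

# PUT the finalizer-free object to the finalize subresource; the pending
# delete can then complete.
curl -H "Content-Type: application/json" -X PUT \
  --data-binary @/tmp/delete-me.json \
  http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize
```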
Thanks @liggitt, that worked well.
One-liner (two lines if you are picky) for @liggitt's solution, which avoids creating an intermediate yaml file:
Start kubectl proxy in one terminal, then run the command in another terminal.
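A sketch of that one-liner, assuming jq is available and the stuck namespace is named delete-me:

```shell
# Terminal 1: proxy the API server to 127.0.0.1:8001.
kubectl proxy

# Terminal 2: strip spec.finalizers and PUT the result straight to the
# finalize subresource -- no intermediate file needed.
kubectl get namespace delete-me -o json \
  | jq '.spec.finalizers = []' \
  | curl -H "Content-Type: application/json" -X PUT --data-binary @- \
      http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize
```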
so please use the other tips to delete the finalizer 😃
What other tips? I have tried:
- kubectl edit ns delete-me
- kubectl proxy & and curl to run finalize
- kubectl delete ns delete-me --force --grace-period=0
- etcdctl --endpoint=http://127.0.0.1:8001 rm /namespaces/delete-me
- ./kill-kube-ns delete-me
I recently found this method works every time, allowing you to identify the reason for the supposedly stuck state.
I believe this results in properly deleting whatever resource is stuck on terminating
If you kubectl edit and manually remove the finalizer, either it will be deleted or you'll be shown a probably useful error message. If that makes it get deleted, then either the GC has a bug or it's stuck trying to delete some other resource.
Thanks @lavalamp, but that doesn't work.
Things I’ve tried:
None of these work or modify the Namespace. After any of these the problematic finalizer still exists.
Edit the YAML
Apply:
The command finishes with no error, but the Namespace is not updated.
The below YAML has the same result:
kubectl edit: Run kubectl edit ns delete-me and remove the finalizer. Ditto removing the list entirely; ditto removing spec. This shows no error message but does not update the Namespace.
kubectl proxy &: Run kubectl proxy & and then curl -k -H "Content-Type: application/yaml" -X PUT --data-binary @tmp.yaml http://127.0.0.1:8001/api/v1/namespaces/delete-me/finalize. As above, this exits successfully but does nothing.
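For reference, the exact contents of tmp.yaml were not shown; a version of it with the finalizers cleared would look something like this (a sketch, matching the Namespace manifest from the reproduction steps):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: delete-me
spec:
  finalizers: []
```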
Force Delete
kubectl delete ns delete-me --force --grace-period=0: This actually results in an error:
However, it doesn’t actually do anything.
Wait a long time
In the test cluster I set up to debug this issue, I’ve been waiting over a week. Even if the Namespace might eventually decide to be deleted, I need it to be deleted faster than a week.
Make sure the Namespace is empty
The Namespace is empty.
etcdctl: I'm pretty sure that's an error, but I have no idea how to interpret it. It also doesn't work. Also tried with --dir and -r.
ctron/kill-kube-ns: There is a script for force deleting Namespaces. This also does not work.
@souravb11 Thanks, this is a simple and easy way to fix the issue: I edited the namespace and removed the finalizer below: "finalizers": [ "controller.cattle.io/namespace-auth" ]
I faced the same issue. I edited my namespace YAML, removed the "finalizers" field, and saved; that worked for me, and the namespace was deleted immediately.
THIS SOLUTION is the best! GREAT, thanks. I just used the command to find all installed resources, then edited all resources related to Rancher to have no finalizers. After that you can edit Rancher's namespaces to have no finalizers, then delete them all.
```shell
kubectl api-resources --verbs=list --namespaced -o name | while read line; do echo $line; kubectl get -A $line --ignore-not-found -o name; done
```
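If you only care about one namespace, a scoped variant of the same loop (a sketch; delete-me is the assumed namespace name) shows exactly which objects are still holding it in Terminating:

```shell
# List every namespaced resource type, then query each one inside the
# stuck namespace; any names printed are the objects blocking deletion.
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get -n delete-me --ignore-not-found -o name
```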