kubernetes: delete namespace stuck on Terminating status

Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

/kind bug

What happened: deleting a namespace is stuck in Terminating status

What you expected to happen: the namespace to be deleted

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?: @kubernetes/sig-api-machinery-bugs

[root@node1 tmp]# kubectl get namespace
NAME           STATUS        AGE
default        Active        2d
istio-system   Active        2d
kube-public    Active        2d
kube-system    Active        2d
rook           Terminating   7h

[root@node1 tmp]# curl -H "Content-Type: application/json" -X PGET --data-binary @ns.json http://135.121.241.75:8001/api/v1/namespaces/rook/finalize -v

* About to connect() to 135.121.241.75 port 8001 (#0)
*   Trying 135.121.241.75...
* Connected to 135.121.241.75 (135.121.241.75) port 8001 (#0)
> PGET /api/v1/namespaces/rook/finalize HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 135.121.241.75:8001
> Accept: */*
> Content-Type: application/json
> Content-Length: 479
* upload completely sent off: 479 out of 479 bytes
< HTTP/1.1 405 Method Not Allowed
< Content-Length: 245
< Content-Type: application/json
< Date: Thu, 17 May 2018 21:56:44 GMT
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server does not allow this method on the requested resource",
  "reason": "MethodNotAllowed",
  "details": {},
  "code": 405
}
* Connection #0 to host 135.121.241.75 left intact
[root@node1 tmp]#

[root@node1 tmp]# kubectl get namespace rook -o json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2018-05-17T14:13:51Z",
        "deletionTimestamp": "2018-05-17T20:07:51Z",
        "name": "rook",
        "resourceVersion": "305082",
        "selfLink": "/api/v1/namespaces/rook",
        "uid": "8687bbb1-59dc-11e8-95d0-8cdcd4b71238"
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}
[root@node1 tmp]#
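The 405 above is returned because PGET is not a valid HTTP method; the namespace finalize subresource is updated with PUT. A minimal sketch of the usual manual workaround, assuming kubectl proxy is running on port 8001 and jq is installed (adjust the namespace name to yours):

# Dump the namespace, clear spec.finalizers, and PUT the result to the
# finalize subresource (run "kubectl proxy" in another terminal first)
kubectl get namespace rook -o json | jq '.spec.finalizers = []' > ns.json
curl -H "Content-Type: application/json" -X PUT --data-binary @ns.json \
  http://127.0.0.1:8001/api/v1/namespaces/rook/finalize

Note that force-clearing the finalizer only removes the namespace object itself; whatever was blocking deletion in the first place (for example the Rook CRDs discussed in the comments below) may be left behind.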

Environment:

  • Kubernetes version (use kubectl version):
[root@node1 tmp]# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:50:45Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:50:45Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
[root@node1 tmp]#

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release):
more /etc/os-release
NAME="Red Hat Enterprise Linux Server"
VERSION="7.3 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="7.3"
PRETTY_NAME="Red Hat Enterprise Linux"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.3:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.3
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.3"

  • Kernel (e.g. uname -a): Linux node1 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: kubespray

  • Others: helm/rook/istio

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 32 (7 by maintainers)

Most upvoted comments

I was able to solve this same problem (or an apparently similar one) by patching a crd that turned out to be the cause of the problem.

First I identified the offending CRD with the command:
$ kubectl get crd
NAME                                          CREATED AT
bgpconfigurations.crd.projectcalico.org       2018-10-24T14:06:47Z
clusterinformations.crd.projectcalico.org     2018-10-24T14:06:47Z
clusters.rook.io                              2019-02-05T08:36:08Z
felixconfigurations.crd.projectcalico.org     2018-10-24T14:06:47Z
globalnetworkpolicies.crd.projectcalico.org   2018-10-24T14:06:47Z
globalnetworksets.crd.projectcalico.org       2018-10-24T14:06:47Z
hostendpoints.crd.projectcalico.org           2018-10-24T14:06:47Z
ippools.crd.projectcalico.org                 2018-10-24T14:06:47Z
networkpolicies.crd.projectcalico.org         2018-10-24T14:06:47Z

Then I issued the following command:
$ kubectl patch crd clusters.rook.io -p '{"metadata":{"finalizers": []}}' --type=merge

At that point the CRD, and the namespace I had previously tried and failed to delete, were removed automatically.
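For anyone hunting a similar culprit, one way to spot CRDs that still carry finalizers is to filter the list with jq. This is a sketch, not a command from the thread, and it assumes jq is installed:

# List CRDs whose metadata.finalizers is non-empty
kubectl get crd -o json \
  | jq -r '.items[] | select((.metadata.finalizers // []) | length > 0) | .metadata.name'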

I am also not able to delete the namespace, even after editing it with kubectl edit ns <my-namespace>. When I run the edit command again, the kubernetes value under finalizers appears again.

same problem with istio 1.0.2

FYI, this document explains well how to manually delete a stuck namespace: https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/troubleshoot/ns_terminating.html

I faced the same issue and had to manually remove the resources used by the namespace from my OpenStack environment. From kubectl, I didn’t find any way to clean up the namespace mess…

I know the issue is closed, but I’m guessing random people might get here with the same problems.

Our situation was similar to the one reported by sporkmonger, but it was caused by a faulty metrics-server installation instead of cert-manager:

E0418 19:34:02.835197       1 memcache.go:135] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request

After removing the metrics-server APIService, the namespace eventually went away.
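If you suspect something similar, a quick check is to look for aggregated APIs whose AVAILABLE column is not True; a broken APIService keeps the namespace controller from enumerating resources. This is only a sketch: v1beta1.metrics.k8s.io is the usual metrics-server APIService name, so verify it matches your cluster before deleting anything.

# Any row that is not "True" in the AVAILABLE column points at a broken aggregated API
kubectl get apiservice

# Remove the broken APIService (v1beta1.metrics.k8s.io is the common metrics-server name)
kubectl delete apiservice v1beta1.metrics.k8s.io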

It’s not a k8s issue; the Rook Ceph project uses this mechanism in order to preserve the cluster state.

For cleaning up the cluster, follow their article on tearing down a cluster: https://rook.io/docs/rook/v0.8/ceph-teardown.html

Anyway, if you still struggle with deleting the rook namespace:

Check which resources are stuck in the rook namespace: kubectl --namespace=<rook-namespace> get all

Then read carefully how to deal with deleting StatefulSet Pods (DON’T USE THIS IN PRODUCTION unless you really know the side effects of these commands): https://kubernetes.io/docs/tasks/run-application/force-delete-stateful-set-pod/#delete-pods

Follow the steps in the article to delete the terminated pods within the rook namespace: kubectl delete pods <pod> --grace-period=0 --force. After deleting the terminated pods in the rook namespace, run kubectl edit ns <rook-namespace>, delete the - kubernetes value under the finalizers section, and save the file.

After those steps the rook namespace will be deleted.

How do we find what’s blocking the deletion?
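One approach (a sketch, not from the thread) is to enumerate every namespaced resource type that supports list and query each of them in the stuck namespace; kubectl get all only covers a handful of built-in types, so this also catches custom resources and other leftovers:

# List everything that still exists in the namespace, across all namespaced,
# listable resource types (replace <namespace> with the stuck namespace)
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n <namespace>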