rancher: Namespace created by Rancher can't be deleted

Rancher versions: rancher/rancher:2.06

Infrastructure Stack versions: kubernetes (if applicable):

Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T15:22:13Z", GoVersion:"go1.9.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5+coreos.0", GitCommit:"0d082e389e1f4311dc5d225eb77f9688c50d340a", GitTreeState:"clean", BuildDate:"2018-03-21T21:10:44Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB) single node rancher in k8s

Environment Template: (Cattle/Kubernetes/Swarm/Mesos) Kubernetes

Results:

~$ kubectl  get ns
NAME                    STATUS        AGE
all-developer           Active        11d
cattle-system           Terminating   3d
default                 Active        11d
istio-system            Active        10d
kube-public             Active        11d
kube-system             Active        11d
rancher                 Active        3d
rancher-cattle-system   Terminating   3d
test                    Terminating   4d

The namespaces created by Rancher can't be deleted; they are stuck in Terminating. How do I fix this?

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 12
  • Comments: 53 (3 by maintainers)

Most upvoted comments

This is a known issue with removing an imported cluster (and it is in the process of being fixed), but you can work around it by running kubectl edit namespace cattle-system, removing the finalizer called controller.cattle.io/namespace-auth, and saving. Kubernetes won’t delete an object that has a finalizer on it.
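
A non-interactive way to do the same thing is a JSON patch that drops the metadata finalizers entirely (a sketch; it assumes controller.cattle.io/namespace-auth is the only entry in metadata.finalizers and that the field exists on the namespace):

# Hedged sketch: removes the whole metadata.finalizers list from the namespace
kubectl patch namespace cattle-system --type=json -p '[{"op":"remove","path":"/metadata/finalizers"}]'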

That didn’t help me; is there any other solution?

I found the finalizers, removed them, and saved, but I still can’t delete the namespace, and the finalizer is still in the config:

spec:
  finalizers:
  - kubernetes

Alternative way:

kubectl patch namespace cattle-system -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-system --grace-period=0 --force

kubectl patch namespace cattle-global-data -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-global-data --grace-period=0 --force

kubectl patch namespace local -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system

for resource in `kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get -o name -n local`; do kubectl patch $resource -p '{"metadata": {"finalizers": []}}' --type='merge' -n local; done

kubectl delete namespace local --grace-period=0 --force

When I try to edit the namespace and remove the finalizer, the file doesn’t seem to accept the changes after I save, even though the system tells me it does. When I re-open the edit, the finalizer lines are back.
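
The likely reason the edit does not stick is that the kubernetes entry lives in spec.finalizers, which the API server only honors changes to through the namespace's /finalize subresource, not through a normal update. A sketch of that route, assuming the stuck namespace is cattle-system, jq is installed, and kubectl is recent enough to support replace --raw:

# Hedged sketch: clear spec.finalizers via the /finalize subresource
NAMESPACE=cattle-system
kubectl get namespace "$NAMESPACE" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NAMESPACE/finalize" -f -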

These steps work for me.

Step one: kubectl get ns namespace-name -o json > tmp.json

Step two: vim tmp.json, remove the finalizers, and save.

Step three: curl -k -H "Content-Type: application/json" -H "Authorization: Bearer <token found in .kube/config>" -X PUT --data-binary @tmp.json https://IP/k8s/clusters/clusterID/api/v1/namespaces/namespace/finalize

@kodo651 first check which resources under the namespace are stuck, using

kubectl api-resources --verbs=list --namespaced -o name   | xargs -n 1 kubectl get --show-kind --ignore-not-found  -n <namespace>

second, edit every stuck resource’s finalizers to [];
then everything goes well
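
For example, the second step can be done in one pass with a loop like this (a sketch; replace <namespace> with the stuck namespace, and note the merge patch wipes all finalizers on each object):

# Hedged sketch: empty the finalizers of every listable resource in the namespace
for res in $(kubectl api-resources --verbs=list --namespaced -o name \
             | xargs -n1 kubectl get -o name --ignore-not-found -n <namespace>); do
  kubectl patch "$res" -n <namespace> --type=merge -p '{"metadata":{"finalizers":[]}}'
done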

kubectl get ns | awk '{print $1}' | grep -v NAME | xargs -I{} kubectl patch namespace {}  -p '{"metadata":{"finalizers":[]}}' --type='merge' -n {}

I used this to remove the finalizers on all the namespaces

kubectl get customresourcedefinitions | grep cattle.io | awk '{print $1}' | xargs -I{} kubectl patch crd {} -p '{"metadata": {"finalizers": []}}' --type='merge'
kubectl get customresourcedefinitions | grep cattle.io | awk '{print $1}' | xargs kubectl delete crd

and this to delete the crds

then I was able to delete the offending namespaces

@NeckBeardPrince maybe they assume no one uninstalls Rancher once it is installed. 😅

Why is this closed when the problem still exists?

Running Rancher v2.1.0, this works for me:

kubectl edit -n cattle-system secret tls-rancher      # delete the finalizers, then save
kubectl delete -n cattle-system secret tls-rancher

kubectl get customresourcedefinitions | grep management.cattle.io

kubectl edit customresourcedefinitions *.management.cattle.io      # delete the finalizers from each

kubectl get customresourcedefinitions | grep cattle.io | awk '{print $1}' | xargs kubectl delete customresourcedefinitions
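
A non-interactive sketch of the same cleanup, using kubectl patch to strip the finalizers instead of opening an editor (it assumes emptying each object's finalizer list is acceptable):

# Hedged sketch: patch away the finalizers, then delete the objects
kubectl patch secret tls-rancher -n cattle-system --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl delete secret tls-rancher -n cattle-system
kubectl get crd -o name | grep management.cattle.io | xargs -I{} kubectl patch {} --type=merge -p '{"metadata":{"finalizers":[]}}'
kubectl get crd -o name | grep cattle.io | xargs kubectl delete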

Just ran into this today. Why is this closed?

I have run

kubectl delete namespace cattle-system --force --grace-period=0

But it was useless.

Hi,

running into the same issue with 2.1.3 + calico networking.

apiVersion: v1
kind: Namespace
metadata:
  annotations:
    cattle.io/status: '{"Conditions":[{"Type":"InitialRolesPopulated","Status":"True","Message":"","LastUpdateTime":"2018-12-19T14:00:24Z"},{"Type":"ResourceQuotaValidated","Status":"Unknown","Message":"Validating
      resource quota","LastUpdateTime":""},{"Type":"ResourceQuotaInit","Status":"True","Message":"","LastUpdateTime":"2018-12-19T14:00:19Z"}]}'
    field.cattle.io/creatorId: u-zbw65mdtej
    field.cattle.io/projectId: c-x8rzf:p-pr7zr
    field.cattle.io/resourceQuota: "null"
  creationTimestamp: "2018-12-19T14:00:17Z"
  deletionTimestamp: "2018-12-19T14:02:05Z"
  labels:
    cattle.io/creator: norman
    field.cattle.io/projectId: p-pr7zr
  name: t
  resourceVersion: "5872"
  selfLink: /api/v1/namespaces/t
  uid: 6a975c42-0396-11e9-bd3b-aaaaaaaaaa4a
spec:
  finalizers:
  - kubernetes
status:
  phase: Terminating

Running Rancher v2.5.8, this works for me:

kubectl patch namespace cattle-system -p '{"metadata":{"finalizers":[]}}' --type='merge' -n cattle-system
kubectl delete namespace cattle-system --grace-period=0 --force

Basically there will be a lot of custom resources with a finalizer in their metadata that prevents them from being deleted. For this reason, even when you remove the finalizer from all the Rancher namespaces, some still get stuck in the Terminating state. Anyway, I follow the steps below to clean up all the Rancher resources from my cluster:

  • First, edit all the Rancher namespaces and remove the finalizer from their metadata. Then run kubectl delete namespace to delete those namespaces. Let them stay stuck in the Terminating state.

  • Open a new terminal. Delete all the custom resource definitions (CRDs) by running the command below, and let this command hang too:

kubectl get customresourcedefinitions |grep management.cattle.io | awk '{print $1}' |xargs kubectl delete customresourcedefinitions


customresourcedefinition.apiextensions.k8s.io "clusters.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "globalrolebindings.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "globalroles.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "kontainerdrivers.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "nodedrivers.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "podsecuritypolicytemplates.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "roletemplates.management.cattle.io" deleted
customresourcedefinition.apiextensions.k8s.io "users.management.cattle.io" deleted

  • See the list of CRDs above, the ones of the form *.management.cattle.io? Open a new terminal and run the command below, replacing users.management.cattle.io with each of the CRDs above.
for resource in `kubectl get users.management.cattle.io -o name`; do kubectl patch $resource -p '{"metadata": {"finalizers": []}}' --type='merge'; done

You should see the commands in steps 1 and 2 finish running; all the resources are now deleted.
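
A sketch that consolidates step 3 into one loop over all the management.cattle.io CRDs (it assumes those custom resources are cluster-scoped, as clusters, users, globalroles, etc. are; a namespaced custom resource would also need -n):

# Hedged sketch: strip finalizers from every instance of every management.cattle.io CRD
for crd in $(kubectl get crd -o name | grep management.cattle.io | cut -d/ -f2); do
  for obj in $(kubectl get "$crd" -o name 2>/dev/null); do
    kubectl patch "$obj" --type=merge -p '{"metadata": {"finalizers": []}}'
  done
done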

Hello - I am also facing this issue. I understand the workaround provided above, but it looks like this issue is closed. Has any work been done to track down or fix the root cause?

We’ve so far been able to completely automate the creation and removal of K8s clusters and onboarding them into Rancher, but when the namespace termination gets stuck, the process grinds to a halt and requires manual intervention. Thanks!

Rancher versions: rancher/rancher:2.1.5

Infrastructure Stack versions: Kubernetes v1.11.5

Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB): HA Rancher in K8s

Environment Template: Kubernetes

Results:

[screenshot]

Since I would like to delete namespaces from a JUnit test with plain Java code via the fabric8 library, none of these solutions work for me. Why is this issue closed?

I manually deleted the finalizers too. This article helped me out

https://medium.com/@devang.j05/kubernetes-delete-a-terminating-namespace-16a61c0aa9da

I don’t know why this issue is closed. I came across it when cleaning up our staging cluster, which our developers use a lot. We use a cluster with nodes provisioned through Rancher at DigitalOcean. For the other people ending up here after googling this issue and looking for an easy way to remove these namespaces, I will leave the shell script I’ve written for these cases here; please use it with care:

# This script deletes namespaces created through rancher with dangling finalizers
namespace=undefined

# path to your kube config file, e.g.: ~/.kube/config
kubeconfig=
# URL of your rancher, e.g.: https://rancher.example.com
rancher_url=
# ID of the cluster, will be found in the URL of the cluster start page: https://rancher.example.com/c/<CLUSTER_ID>/monitoring
cluster_id=

# Your Rancher Bearer Token generated at 'APIs & Keys' in Rancher
RANCHER_BEARER=

# Ask which namespace will be deleted
echo "Enter Namespace you want to delete:"
read namespace

echo "Get Namespace $namespace"
kubectl --kubeconfig $kubeconfig get ns $namespace -o json > $namespace.json

# Removes the whole "Spec" block of the namespace
echo "Removing spec block"
sed -i -e '/\"spec\"/,/}/ d; /^$/d' $namespace.json

# Push namespace back, will be deleted immediately if already dangling
echo "Send edited json file back to rancher"
curl -k -H "Content-Type: application/json" -H "Authorization: Bearer $RANCHER_BEARER" -X PUT --data-binary @$namespace.json $rancher_url/k8s/clusters/$cluster_id/api/v1/namespaces/$namespace/finalize
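
To use it, one might save it as, say, delete-rancher-namespace.sh (a hypothetical filename), fill in the four variables at the top, and run it:

# Hedged usage example
bash delete-rancher-namespace.sh
# Enter Namespace you want to delete:
# test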

I tried all the methods I found and they failed, but this one works for me! 😃 https://stackoverflow.com/questions/52369247/namespace-stuck-as-terminating-how-i-removed-it

Here is a copy:

(
NAMESPACE=your-rogue-namespace
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
)

For deleting all Rancher resources I used this script (link). Note that it will delete the permissions model, so you will have to reassign the namespaces to projects. To use it I also had to replace '\r' with nothing (it looks like someone downloaded the script and uploaded it through Windows); I did that with Python.

I used these scripts to cleanup and verify the cleanup. However, all of my namespaces are still infected with the controller.cattle.io/namespace-auth finalizer.
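
A sketch for that case: list the namespaces that still carry the controller.cattle.io/namespace-auth finalizer and patch it away (assumes jq is installed and that no other finalizers on those namespaces need to be preserved):

# Hedged sketch: find namespaces with the namespace-auth finalizer and empty their finalizer list
kubectl get ns -o json \
  | jq -r '.items[] | select(.metadata.finalizers // [] | index("controller.cattle.io/namespace-auth")) | .metadata.name' \
  | xargs -I{} kubectl patch namespace {} --type=merge -p '{"metadata":{"finalizers":[]}}'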

Please reopen this issue. None of the methods mentioned helped and the namespace is still not deleted.

The error I’m facing is:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {

  },
  "status": "Failure",
  "message": "Operation cannot be fulfilled on namespaces \"linkerd-flux\": StorageError: invalid object, Code: 4, Key: /registry/namespaces/linkerd-flux, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 85bb607b-a10c-4957-ae5c-8613de0ede8d, UID in object meta: ",
  "reason": "Conflict",
  "details": {
    "name": "linkerd-flux",
    "kind": "namespaces"
  },
  "code": 409
}

In my situation, the problem was the unavailability of the "v1beta1.metrics.k8s.io" apiservice used by the service "kube-system/metrics-server", which is seemingly enabled by default by Rancher even though the service itself needs to be deployed separately. You can delete the apiservice, if it is not needed, with "kubectl delete apiservice v1beta1.metrics.k8s.io". When you delete the apiservice, Kubernetes removes the namespaces automatically.
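
A sketch of how to spot that situation: an aggregated APIService whose backing service is down blocks namespace deletion because the namespace controller cannot enumerate its resources, so look for APIServices that are not Available and delete the ones you do not need:

kubectl get apiservice | grep False          # show APIServices that are not Available
kubectl delete apiservice v1beta1.metrics.k8s.io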

I’m seeing the same thing as @Allen-yan. Several namespaces stuck in “Removing” with a kubernetes finalizer in the spec. Cannot seem to remove the finalizer or force the removal.

Running Rancher v2.1.0 here