kubernetes: Extended API server blocks deletion of namespace

What happened?

We have deployed a k8s extended API server (EAS) in, say, namespace “kubedb”. Later we try to kubectl delete ns kubedb. The namespace stays stuck in the Terminating state. If I check the status.conditions of the namespace (e.g. with kubectl get ns kubedb -o yaml), I see

  - lastTransitionTime: "2022-03-01T16:21:45Z"
    message: 'Discovery failed for some groups, 9 failing: unable to retrieve the
      complete list of server APIs: mutators.autoscaling.kubedb.com/v1alpha1: the
      server is currently unable to handle the request, mutators.dashboard.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, mutators.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, mutators.ops.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, mutators.schema.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, validators.dashboard.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, validators.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, validators.ops.kubedb.com/v1alpha1:
      the server is currently unable to handle the request, validators.schema.kubedb.com/v1alpha1:
      the server is currently unable to handle the request'
    reason: DiscoveryFailed

The problem seems to be that k8s is waiting to delete resources provided by the EAS. But the EAS pods are already gone (garbage collected along with everything else in the namespace), so discovery keeps failing and the namespace deletion is stuck.

The workaround seems to be deleting the apiservices by hand. Once that is done, the namespace goes away. A sketch of this is shown below.
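
A minimal sketch of that cleanup, assuming jq is installed and the stuck namespace is named kubedb (adjust to match your setup):

$ # find every APIService whose backing Service lives in the stuck namespace, then delete them
$ kubectl get apiservices -o json \
    | jq -r '.items[] | select(.spec.service.namespace == "kubedb") | .metadata.name' \
    | xargs -r kubectl delete apiservice

Once those APIService objects are gone, discovery succeeds again and the namespace finalizer can finish.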

What did you expect to happen?

This seems like an edge case that the k8s garbage collector should handle automatically. It should be possible to watch apiservices and, when the namespace referenced by an apiservice.spec.service is deleted, have the namespace finalizer delete that apiservice automatically.

How can we reproduce it (as minimally and precisely as possible)?

Take any EAS, such as k8s.io/sample-apiserver, deploy it in a cluster, and then delete the namespace where it is deployed (see the sketch below).
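
For example, a minimal sequence, assuming the example manifests shipped in the sample-apiserver repository (which, at the time of writing, deploy into a namespace named wardle):

$ git clone https://github.com/kubernetes/sample-apiserver
$ cd sample-apiserver
$ kubectl apply -f artifacts/example    # Namespace, RBAC, Deployment, Service, APIService
$ kubectl delete ns wardle              # sticks in Terminating with reason: DiscoveryFailed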

Anything else we need to know?

I have tried making the namespace the owner of the apiservices, but it does not help unless the namespace is deleted in the foreground: kubectl delete ns kubedb --cascade=foreground. An illustration follows.
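
For illustration, this is roughly what that attempt looks like. The Service name here is hypothetical, and the uid must be copied from the live kubedb Namespace object:

  apiVersion: apiregistration.k8s.io/v1
  kind: APIService
  metadata:
    name: v1alpha1.mutators.kubedb.com
    ownerReferences:
    - apiVersion: v1
      kind: Namespace
      name: kubedb
      uid: <metadata.uid of the kubedb Namespace>  # placeholder; read it from the cluster
  spec:
    group: mutators.kubedb.com
    version: v1alpha1
    service:
      name: kubedb-webhook-server   # hypothetical Service name
      namespace: kubedb
    groupPriorityMinimum: 1000
    versionPriority: 15

With this owner reference in place the apiservice is cleaned up, but, as noted above, only when the namespace is deleted with --cascade=foreground.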

This is also bad UX. We deploy our EAS using Helm, and there is no easy way to set this owner reference from a chart (see the sketch below).
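
One hedged idea is Helm's lookup function, but it only works against a live cluster and returns an empty object during helm template and --dry-run, so a hypothetical chart excerpt like the following breaks exactly in the cases where charts are rendered offline:

  # templates/apiservice.yaml (hypothetical excerpt)
  metadata:
    name: v1alpha1.mutators.kubedb.com
    ownerReferences:
    - apiVersion: v1
      kind: Namespace
      name: {{ .Release.Namespace }}
      # lookup is empty during `helm template` / --dry-run, so this render fails there
      uid: {{ (lookup "v1" "Namespace" "" .Release.Namespace).metadata.uid }}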

Kubernetes version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:30:48Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-26T08:10:51Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/arm64"}

Cloud provider

any

OS version

Install tools

Container runtime (CRI) and version (if applicable)

Related plugins (CNI, CSI, …) and versions (if applicable)

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 15 (11 by maintainers)

Most upvoted comments

You can delete the apiservices, mutatingwebhookconfigurations & validatingwebhookconfigurations using kubectl delete. If you need further help, please email us directly. I don’t want to litter this community thread with vendor-specific stuff.
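
For example (the object names below are hypothetical; list first to find the real ones in your cluster):

$ kubectl get apiservices,mutatingwebhookconfigurations,validatingwebhookconfigurations | grep kubedb
$ kubectl delete mutatingwebhookconfiguration <name-from-listing>
$ kubectl delete validatingwebhookconfiguration <name-from-listing>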

It might be helpful to log an explanatory Event when someone tries to delete a namespace that holds an APIService’s backing Service, since that will block (or at least affect) removal of the namespace.