kubernetes: kubectl delete should wait for resource to be deleted before returning
As per https://github.com/kubernetes/kubernetes/pull/40714#issuecomment-278460041, kubectl delete
should, by default, ensure that the resource is deleted before returning. We can also add a `--wait-for-deletion`
flag that users can set if they don't want to wait.
Work items:
- Update apiserver to return the UID of the resource being deleted in response to a delete request (https://github.com/kubernetes/kubernetes/pull/45600)
- Update kubectl delete code to:
  - First send a DELETE request to the apiserver with the resource name. The server will return the resource UID as part of the response.
  - Then keep sending DELETE requests to the apiserver with the UID as a precondition until the server returns 404 or 409, or we time out.
  - Skip the wait if the user sets `--wait-for-deletion=false`.
- Revert the wait-for-deletion code added in https://github.com/kubernetes/kubernetes/pull/42674
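The proposed flow above (send DELETE, capture the returned UID, then re-send DELETE with that UID as a precondition until 404/409 or timeout) can be sketched against a toy in-memory apiserver. `FakeAPIServer`, `delete_and_wait`, and the exception names below are illustrative stand-ins, not real client-go or apiserver APIs:

```python
import time
import uuid

class NotFound(Exception):      # stands in for HTTP 404
    pass

class Conflict(Exception):      # stands in for HTTP 409 (UID precondition failed)
    pass

class FakeAPIServer:
    """Toy in-memory stand-in for the apiserver, mapping name -> UID."""
    def __init__(self):
        self.objects = {}

    def create(self, name):
        self.objects[name] = str(uuid.uuid4())

    def delete(self, name, uid=None):
        if name not in self.objects:
            raise NotFound(name)
        if uid is not None and self.objects[name] != uid:
            # Same name but a different object: the original was already
            # deleted and replaced, so the UID precondition fails.
            raise Conflict(name)
        # Per PR #45600, the server returns the UID of the object being
        # deleted. (A real apiserver deletes asynchronously, which is why
        # the client below keeps polling; this toy deletes immediately.)
        return self.objects.pop(name)

def delete_and_wait(server, name, timeout=5.0, interval=0.01):
    """First DELETE returns the UID; re-send DELETE with that UID as a
    precondition until the server answers 404/409 or we time out."""
    uid = server.delete(name)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            server.delete(name, uid=uid)
        except (NotFound, Conflict):
            return True  # gone, or replaced by a new object with the same name
        time.sleep(interval)
    return False
```

Treating both 404 and 409 as success matters: a 409 means a *new* object now holds the same name, so the one we asked to delete is in fact gone.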
cc @liggitt @smarterclayton @bgrant0607 @kubernetes/sig-cli-bugs
About this issue
- State: closed
- Created 7 years ago
- Reactions: 27
- Comments: 27 (16 by maintainers)
Commits related to this issue
- Merge pull request #42674 from nikhiljindal/secretKubeTe Automatic merge from submit-queue Enable secrets in federation kubectl tests Fixes https://github.com/kubernetes/kubernetes/issues/40568 Su... — committed to kubernetes/kubernetes by deleted user 7 years ago
- Merge pull request #45600 from nikhiljindal/waitForDel Automatic merge from submit-queue (batch tested with PRs 41331, 45591, 45600, 45176, 45658) Updating generic registry to return UID of the dele... — committed to kubernetes/kubernetes by deleted user 7 years ago
- Merge pull request #46798 from nikhiljindal/servicesReaper Automatic merge from submit-queue Deleting kubectl.ServiceReaper since there is no special service deletion logic Ref https://github.com/k... — committed to kubernetes/kubernetes by deleted user 7 years ago
- Merge pull request #45600 from nikhiljindal/waitForDel Automatic merge from submit-queue (batch tested with PRs 41331, 45591, 45600, 45176, 45658) Updating generic registry to return UID of the dele... — committed to kubernetes-retired/cluster-registry by deleted user 7 years ago
- Avoid flake unit tests After deleting all the resources in the test namespace, check if some DaemonSet is still in place. This check is useful because the k8s delete APIs are asynchronous. See https... — committed to zeeke/metallb-operator by zeeke 2 years ago
- Merge pull request #45600 from nikhiljindal/waitForDel Automatic merge from submit-queue (batch tested with PRs 41331, 45591, 45600, 45176, 45658) Updating generic registry to return UID of the dele... — committed to akhilerm/apimachinery by k8s-publishing-bot 7 years ago
Guys, it’s not a good idea to break current behaviour. By default, the delete command did not wait until everything completed. I have a lot of scripts that drop a lot of resources, and now they run for a very long time. The wait feature is good, but the change in default behaviour is upsetting.
Yikes - was wondering why delete seems to now hang. Now I know! IMO a bad choice for the default behaviour 😦
Why was this made default? I cannot find any rationale for this here. IMO, one should have very good reason to break current behaviour. Besides, while I agree that this is a good addition, there are still plenty of use cases for not waiting.
@roffe sure, I found how to achieve this, thank you. My message was about breaking the default behaviour.
This is driving me up the wall. I may get to this before too long.
What is actually going on under the covers? It appears that the command immediately prints “<object> deleted” and then hangs, but actually it is waiting. It seems like the messaging is incorrect: it should say “deleting”, and if --wait is the default it should only print “deleted” when the object is actually deleted.
The way it works right now, it seems like kubectl deletes things and then hangs for no reason, and it doesn’t give any indication that a state change has actually happened when the command finally terminates. (However, ctrl-c and inspecting with kubectl get shows what is going on.) It’s also a little surprising that the whole operation is async and that ctrl-c doesn’t stop it.
I do think this is good default behavior; it just needs better messaging.
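The messaging the commenter suggests can be sketched as a tiny wrapper: announce “deleting” as soon as the request is accepted, and only claim “deleted” once the wait actually finishes. `delete_with_progress` and its callback parameters are hypothetical names for illustration, not kubectl internals:

```python
import sys

def delete_with_progress(delete_fn, wait_fn, kind, name, out=sys.stdout):
    """Send the delete, report 'deleting' immediately, and only report
    'deleted' after wait_fn confirms the object is actually gone."""
    delete_fn()                                # issue the DELETE request
    out.write(f'{kind} "{name}" deleting\n')   # state change merely requested
    wait_fn()                                  # block until the object is gone
    out.write(f'{kind} "{name}" deleted\n')    # now the claim is true
```

With this ordering, a user watching the terminal sees why the command is still running instead of a misleading “deleted” followed by a silent hang.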
@nikhiljindal I disagree with this statement for a couple of reasons:
- `replicasets` and `deployments` are exceptions, not vice versa, because this behavior is driven by reapers, which are optional (so by default `kubectl` does not wait; for example, `Secret`, `ConfigMap`, and CRDs don’t have reapers)
- even for `deployments`, the statement “kubectl delete always waits” is not 100% correct. Example: I added a custom finalizer `example.com/preventDeletion` to a deployment, and the behavior was the following: `deployment "nginx-deployment" deleted`, and it will remain there until I manually remove the `example.com/preventDeletion` finalizer. E.g. 7 minutes later:
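The finalizer behavior described in that example can be modeled with a toy sketch (not the real API machinery): a DELETE on an object that carries finalizers only marks it as terminating, and the object is actually removed when the last finalizer is stripped. All class and method names here are illustrative:

```python
import time

class Object:
    """Toy resource with a name, a finalizer list, and a deletion mark."""
    def __init__(self, name, finalizers=()):
        self.name = name
        self.finalizers = list(finalizers)
        self.deletion_timestamp = None

class Store:
    """Toy model of apiserver finalizer semantics."""
    def __init__(self):
        self.objects = {}

    def create(self, obj):
        self.objects[obj.name] = obj

    def delete(self, name):
        obj = self.objects[name]
        if obj.finalizers:
            # Finalizers present: only mark as terminating; the object
            # stays visible (this is why "deleted" output is misleading).
            obj.deletion_timestamp = time.time()
            return obj
        return self.objects.pop(name)  # no finalizers: gone immediately

    def remove_finalizer(self, name, finalizer):
        obj = self.objects[name]
        obj.finalizers.remove(finalizer)
        if obj.deletion_timestamp is not None and not obj.finalizers:
            del self.objects[name]     # last finalizer gone: actually removed
```

In this model, just as in the comment above, the deployment remains retrievable minutes after “deleted” is printed, until the blocking finalizer is removed.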