kubernetes: kubectl can't delete deployment
@kubernetes/sig-cli-bugs /sig area-kubectl
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: the commands and output are below:

```console
$ kubectl get all -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/tiller-deploy   0         0         0            0           42m

$ kubectl delete deployment tiller-deploy -n kube-system
error: timed out waiting for the condition

$ kubectl delete deployment tiller-deploy -n kube-system --cascade=false
deployment "tiller-deploy" deleted

$ kubectl get all -n kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/tiller-deploy   0         0         0            0           47m

$ helm reset
Error: could not find tiller
```
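For anyone narrowing this down, two quick checks are whether the object has a pending deletion or finalizers holding it, and whether skipping the graceful wait makes a difference. The commands below are a sketch against the same tiller-deploy deployment, not output from the original report:

```console
# Check whether the object is stuck deleting or held by finalizers
$ kubectl get deployment tiller-deploy -n kube-system -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}'

# Clear finalizers if they turn out to be the blocker (use with care)
$ kubectl patch deployment tiller-deploy -n kube-system --type=merge -p '{"metadata":{"finalizers":[]}}'

# Force the delete without waiting for graceful termination
$ kubectl delete deployment tiller-deploy -n kube-system --grace-period=0 --force
```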
What you expected to happen:
The deployment tiller-deploy should be deleted. As things stand, the only way to get rid of it is `minikube delete`.
How to reproduce it (as minimally and precisely as possible): see the commands above.
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):

  ```console
  $ kubectl version
  Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T17:03:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
  Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"dirty", BuildDate:"2017-08-25T10:31:26Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  ```

- Cloud provider or hardware configuration: minikube (`minikube version` reports v0.21.0)
- OS (e.g. from /etc/os-release): macOS 10.12.6
- Kernel (e.g. `uname -a`): Darwin fangs-MacBook-Pro.local 16.7.0 Darwin Kernel Version 16.7.0: Thu Jun 15 17:36:27 PDT 2017; root:xnu-3789.70.16~2/RELEASE_X86_64 x86_64
- Install tools: installed via Homebrew; vm-driver is xhyve
About this issue
- State: closed
- Created 7 years ago
- Comments: 30 (12 by maintainers)
Restarting the k8s master can solve this problem.
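In a minikube setup like the reporter's, that restart amounts to stopping and starting the VM; this is a generic sketch, not a command from the thread:

```console
# Restart the single-node cluster (minikube runs the master components inside the VM)
$ minikube stop
$ minikube start --vm-driver=xhyve
```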
Seeing this in 1.10.0 with a StatefulSet. No finalizers. A restart didn't help; the StatefulSet is still there even though all the other related objects are gone (it's a single-node test environment, so the restart affects everything).
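For reference, one way to confirm there really are no finalizers or pending deletion on the object is to query those fields directly; the resource name below is a placeholder, not taken from the comment:

```console
# my-statefulset is a placeholder name; empty output means no finalizers and no deletion in progress
$ kubectl get statefulset my-statefulset -o jsonpath='{.metadata.finalizers} {.metadata.deletionTimestamp}'

# Full object dump for anything else that might block deletion
$ kubectl get statefulset my-statefulset -o yaml
```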
@dfang Have you tried `kubectl delete deployment tiller-deploy -n kube-system --now`? From the log, it seems `kubectl` is waiting for the controller's status (like observedGeneration, replicas, etc.) to update.
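To make that suggestion concrete, the delete with `--now` and a quick look at the status fields kubectl waits on might look like this (a sketch assuming the same deployment, not output from the thread):

```console
# --now signals immediate shutdown (equivalent to --grace-period=1) instead of waiting for status to settle
$ kubectl delete deployment tiller-deploy -n kube-system --now

# Compare the generation kubectl is waiting on with what the controller has actually observed
$ kubectl get deployment tiller-deploy -n kube-system -o jsonpath='{.metadata.generation} {.status.observedGeneration} {.status.replicas}'
```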