kubeadm: Pods are not cleaned up in CRI-O after running kubeadm reset.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
What happened?
Kubeadm is not able to remove CRI-O pods after running kubeadm reset:
[reset] Cleaning up running containers using crictl with socket /var/run/dockershim.sock
[reset] Failed to list running pods using crictl. Trying using docker instead.
What you expected to happen?
The kubeadm reset command should remove all pods running in CRI-O.
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.4 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.4 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a):
Linux tcs07 4.13.0-45-generic #50~16.04.1-Ubuntu SMP Wed May 30 11:18:27 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Others:
crictl version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.11.0-dev
RuntimeApiVersion: v1alpha1
How to reproduce it (as minimally and precisely as possible)?
Follow the documentation to set up kubeadm 1.10 with CRI-O.
After that, run kubeadm reset.
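For reference, a rough repro sketch; it assumes kubeadm 1.10 and CRI-O are already installed and that both the kubelet and crictl are configured to use the CRI-O socket, as described in the CRI-O setup documentation:
sudo kubeadm init        # control-plane pods are created through CRI-O
sudo crictl pods         # sandboxes show up as Ready
sudo kubeadm reset       # in 1.10 this defaults to the /var/run/dockershim.sock socket
sudo crictl pods         # the CRI-O sandboxes are still listed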
Anything else we need to know?
Manually deleting the pods with crictl stopp and crictl rmp throws the following error, as the containers do not exist for the respective pods (a sketch of the attempted cleanup follows the error):
FATA[0003] removing the pod sandbox "0e92bc6230132" failed: rpc error: code = Unknown desc = failed to delete container k8s_sidecar_kube-dns-86f4d74b45-m8j65_kube-system_2ee9d374-72bf-11e8-9f37-8851fb5bd321_0 in pod sandbox 0e92bc6230132e219cf6281cc83bb9a64bbeb509f5453664f04c3f2d21eaf7c1: `/usr/bin/kata-runtime delete --force 00affa7ac08dc095d33eb5dcea82d524eb8e6615cf5f1129bfc333bf1674b6f2` failed: Container ID (00affa7ac08dc095d33eb5dcea82d524eb8e6615cf5f1129bfc333bf1674b6f2) does not exist
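The manual cleanup that triggers this error is roughly the following; the loop is only an illustration (it assumes crictl pods -q prints just the sandbox IDs), and the same error appears when stopping and removing the pod IDs one at a time:
for pod in $(sudo crictl pods -q); do
    sudo crictl stopp "$pod"    # stop the pod sandbox
    sudo crictl rmp "$pod"      # remove it; this is where the kata-runtime error above is raised
done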
crictl pods shows the state NotReady for the pods, and there are no running containers in CRI-O (a crictl ps check is sketched after the listing below).
Output of crictl pods:
POD ID CREATED STATE NAME NAMESPACE ATTEMPT
809ccd63d3077 2 minutes ago Ready kube-controller-manager-tcs07 kube-system 1
424c3c84018fe 2 minutes ago Ready kube-apiserver-tcs07 kube-system 1
a20d407419f27 2 minutes ago Ready etcd-tcs07 kube-system 1
d1a74574fcee6 2 minutes ago Ready kube-scheduler-tcs07 kube-system 1
0e92bc6230132 3 hours ago NotReady kube-dns-86f4d74b45-m8j65 kube-system 0
c4f5206f13167 3 hours ago NotReady weave-net-9btcs kube-system 0
ad7e3b1294842 3 hours ago NotReady kube-proxy-jcx8f kube-system 0
0919b6c873299 3 hours ago NotReady etcd-tcs07 kube-system 0
0d7e3244dc47d 3 hours ago NotReady kube-controller-manager-tcs07 kube-system 0
3e9fd59293f25 3 hours ago NotReady kube-scheduler-tcs07 kube-system 0
e65d6434816a6 4 days ago NotReady kube-dns-86f4d74b45-fl2wv kube-system 0
b5c5a6a528b66 4 days ago NotReady php-apache-8699449574-sjqht default 2
5b21bedec56c3 4 days ago NotReady php-apache-8699449574-sjqht default 0
7ad55bd09c86e 4 days ago NotReady php-apache-8699449574-l7bt2 default 0
5e8e20d630fee 4 days ago NotReady metrics-server-6fbfb84cdd-6nf7z kube-system 0
cf979e1d7130a 5 days ago NotReady kube-dns-86f4d74b45-5mfj6 kube-system 0
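The "no running containers" observation can be double-checked with crictl ps; a small sketch, assuming a standard crictl configuration pointing at the CRI-O socket:
sudo crictl ps        # running containers only; expected to list none, per the report above
sudo crictl ps -a     # all containers, including exited ones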
Sometimes the same set of pods appears twice, e.g. two kube-dns pods running and showing the status Ready at the same time.
About this issue
- State: closed
- Created 6 years ago
- Comments: 20 (2 by maintainers)
Worked fine when I used kubeadm reset --cri-socket=unix:///run/containerd/containerd.sock -v1, and all pods were deleted.
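Presumably the same approach applies to CRI-O with a kubeadm version that supports the --cri-socket flag on reset, by pointing it at the CRI-O socket instead; the socket path below is the common CRI-O default and is an assumption to verify on the host:
# tell kubeadm reset which CRI runtime socket to clean up (path is an assumption; adjust for your install)
sudo kubeadm reset --cri-socket=unix:///var/run/crio/crio.sock -v1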