kubeadm: kubeadm reset doesn't work properly with containerd
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0-beta.2", GitCommit:"be2cfcf9e44b5162a294e977329d6c8194748c4e", GitTreeState:"clean", BuildDate:"2018-06-07T16:19:15Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0-beta.2", GitCommit:"be2cfcf9e44b5162a294e977329d6c8194748c4e", GitTreeState:"clean", BuildDate:"2018-06-07T16:21:58Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0-beta.2", GitCommit:"be2cfcf9e44b5162a294e977329d6c8194748c4e", GitTreeState:"clean", BuildDate:"2018-06-07T16:13:01Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: Brightbox
- OS (e.g. from /etc/os-release): Ubuntu 18.04 LTS
- Kernel (e.g. uname -a): Linux srv-4pp5n 4.15.0-23-generic #25-Ubuntu SMP Wed May 23 18:02:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
- Others: containerd github.com/containerd/containerd v1.1.1-rc.0 395068d2b7256518259816ae19e45824b15da071
What happened?
```
[reset] cleaning up running containers using crictl with socket /var/run/containerd/containerd.sock
[reset] failed to stop the running containers using crictl: exit status 1. Trying to use docker instead
[reset] docker doesn't seem to be running. Skipping the removal of running Kubernetes containers
```
The containerd logs show requests for some oddly named pods. The "W0615" sandbox ID looks like a klog-style warning prefix (W plus the date 0615, i.e. 15 June) that kubeadm apparently picked out of crictl's output and treated as a pod ID:
```
Jun 15 16:14:59 srv-4pp5n containerd[2416]: time="2018-06-15T16:14:59Z" level=info msg="StopPodSandbox for \"W0615\""
Jun 15 16:14:59 srv-4pp5n containerd[2416]: time="2018-06-15T16:14:59Z" level=error msg="StopPodSandbox for \"W0615\" failed" error="an error occurred when try to find sandbox \"W0615\": does not exist"
```
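To see exactly what kubeadm was parsing, the same listing can be run by hand (a diagnostic sketch; crictl's standard --runtime-endpoint flag is used here, which may differ from how kubeadm invokes it internally):

```sh
# List pod sandbox IDs roughly the way kubeadm's reset path does.
# Any warning lines mixed into this stream would be misread as pod IDs.
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock pods -q
```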
What you expected to happen?
kubeadm reset ought to read the config file initially used with kubeadm init (and so pick up the criSocket setting), then correctly remove the pods.
How to reproduce it (as minimally and precisely as possible)?
kubeadm init --config=kubeadm.conf with:

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: 0.0.0.0
networking:
  serviceSubnet: fd00:1234::/110
kubernetesVersion: 1.11.0-beta.2
cloudProvider: external
featureGates:
  "CoreDNS": false
criSocket: /var/run/containerd/containerd.sock
```
then:

```sh
kubeadm reset --cri-socket=/var/run/containerd/containerd.sock
```
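Until reset handles this itself, the containerd-managed sandboxes can be stopped and removed by hand; a rough sketch, assuming crictl is on the PATH and the socket path above:

```sh
# Stop and then remove every pod sandbox containerd knows about.
SOCK=unix:///var/run/containerd/containerd.sock
crictl --runtime-endpoint "$SOCK" pods -q | xargs -r crictl --runtime-endpoint "$SOCK" stopp
crictl --runtime-endpoint "$SOCK" pods -q | xargs -r crictl --runtime-endpoint "$SOCK" rmp
```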
Anything else we need to know?
If you run kubeadm reset without --cri-socket, it still tries to talk to Docker and misses the containers completely. Isn't the choice to use containerd recorded somewhere in the cluster's own data?
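For what it's worth, kubeadm init uploads its configuration to the kubeadm-config ConfigMap in kube-system, so the socket should be recoverable while the API server is still reachable. A quick check (assuming this release stores it under the MasterConfiguration data key):

```sh
# Print the configuration kubeadm init stored in the cluster,
# which should include the criSocket value.
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.MasterConfiguration}'
```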
Commits related to this issue
- kubeadm: fix CRI ListKubeContainers API Current implementation of this API always checks output of 'crictl pods -q' and filters out everything that doesn't start with k8s_. 'crictl pods -q' r... — committed to bart0sh/kubernetes by bart0sh 6 years ago
- Merge pull request #67017 from bart0sh/PR0027-kubeadm-fix-CRI-ListKubeContainers Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instruc... — committed to kubernetes/kubernetes by deleted user 6 years ago
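Conceptually, the fix in the commits above boils down to taking crictl's stdout verbatim instead of applying the Docker-style k8s_ name filter; a simplified shell rendering, not the actual Go patch:

```sh
# Sandbox IDs are read from stdout only, so stderr warnings such as
# "W0615 ..." are never parsed as IDs, and no 'k8s_' prefix filter is
# applied: that prefix is a Docker container-naming convention that
# CRI sandbox IDs never carry.
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock pods -q 2>/dev/null
```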
So I’m firmly in the camp that we can address this in 1.12, but let’s be clear.
Folks are advertising first-class CRI support with absolutely zero upstream CI signal. Until that core issue is addressed, support for multiple CRIs is best-effort break-fix.
/cc @runcom @countspongebob @kubernetes/sig-node-feature-requests @Random-Liu @BenTheElder
So, where are we with this issue?
@NeilW Can you check whether current master works for you? There have been quite a few changes in this area recently, so this issue may already be fixed.