kubeadm: kubeadm reset breaks CNI for others

What keywords did you search in kubeadm issues before filing this one?

  • reset
  • podman
  • /etc/cni/net.d

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: VirtualBox
  • OS (e.g. from /etc/os-release): CentOS Linux 8 (Core)
  • Kernel (e.g. uname -a): 4.18.0-80.7.1.el8_0.x86_64
  • Others:

What happened?

Running kubeadm reset removed the entire /etc/cni/net.d directory.

This caused podman and any other tools using CNI networking to stop working…

What you expected to happen?

It should only remove the files that it installed itself, not the entire global directory.

In this case, it should only have removed /etc/cni/net.d/k8s.conf and not the rest.
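
For illustration, a minimal Go sketch of that narrower behavior, assuming kubeadm kept track of the one config file it wrote (the k8s.conf name is just the example from this report, not necessarily what kubeadm uses):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// removeOwnCNIConfig deletes only the config file kubeadm itself wrote,
// leaving any other CNI configs (podman, etc.) in the directory untouched.
func removeOwnCNIConfig(cniConfDir, ownFile string) error {
	path := filepath.Join(cniConfDir, ownFile)
	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
		return fmt.Errorf("failed to remove %s: %v", path, err)
	}
	return nil
}

func main() {
	if err := removeOwnCNIConfig("/etc/cni/net.d", "k8s.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}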

How to reproduce it (as minimally and precisely as possible)?

sudo yum install podman
sudo minikube start --vm-driver=none (calls kubeadm init)
sudo minikube delete (calls kubeadm reset)
sudo podman run busybox

Anything else we need to know?

To restore the previous functionality, one has to run: sudo yum reinstall podman.

The code is in: https://github.com/kubernetes/kubernetes/blob/v1.16.0/cmd/kubeadm/app/cmd/phases/reset/cleanupnode.go#L84
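
The linked line is in the node cleanup phase of kubeadm reset; per the behavior described above, it wipes everything under /etc/cni/net.d rather than only kubeadm's own file. A simplified approximation in Go of that wholesale cleanup (not the actual kubeadm code, just to show why podman's config disappears too):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanDir removes everything inside dir while keeping the directory itself.
// Applied to /etc/cni/net.d, this takes podman's CNI config down together
// with whatever kubeadm installed there.
func cleanDir(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, entry := range entries {
		if err := os.RemoveAll(filepath.Join(dir, entry.Name())); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := cleanDir("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}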


EDIT: Updated to make it clear that the commands used come from minikube.

See https://github.com/kubernetes/minikube/issues/5532 for the entire minikube context, running on CentOS 8.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 19 (12 by maintainers)

Most upvoted comments

@afbjorklund we discussed this today during the kubeadm office hours and decided to go with the proposal from @ereslibre:

I think this is a reasonable thing to do, not removing anything in that directory and printing a warning as we are doing with iptables rules.
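
A minimal sketch of that proposal, leaving /etc/cni/net.d alone and only printing a hint (the wording and helper name here are illustrative, not the actual patch):

package main

import "fmt"

// warnAboutCNIDir replaces deletion of the CNI config directory with a
// user-facing hint, mirroring how kubeadm reset only warns about leftover
// iptables rules instead of flushing them.
func warnAboutCNIDir(cniConfDir string) {
	fmt.Printf("The reset process does not clean CNI configuration. "+
		"To do so, please inspect and remove files in %s manually.\n", cniConfDir)
}

func main() {
	warnAboutCNIDir("/etc/cni/net.d")
}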

Yet, I'm seeing breakage if both Calico and Weave configs are present in subsequent cluster creations.

Added this as an agenda item for the office hours.