kubernetes: network plugin is not ready: cni config uninitialized

Hello, I want to do a fresh install of Kubernetes via kubeadm, but when I start the install I’m stuck on

[apiclient] Created API client, waiting for the control plane to become ready

When I do a journalctl -xe I see :

Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

And I don’t know why I get this error. I also tried disabling firewalld, but it had no effect.

Environment:

  • Kubernetes version (use kubectl version): v1.7.0
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): CentOS 7
  • Kernel (e.g. uname -a): 3.10.0-514.26.2.el7.x86_64
  • Install tools: Kubeadm
  • Others: Docker version 17.06.0-ce, build 02c1d87. My RPM versions:

kubeadm-1.7.0, kubectl-1.7.0, kubelet-1.7.0, kubernetes-cni-0.5.1

Thanks for your help

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 5
  • Comments: 83 (17 by maintainers)


Most upvoted comments

It seems to work after removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

flanneld needs a fix for k8s 1.12. Use this PR (until it is approved): kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml It’s a known issue: https://github.com/coreos/flannel/issues/1044

Removing $KUBELET_NETWORK_ARGS did not work for me.

Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker “bridge” networking.

This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.
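For those who do want to disable it temporarily, commenting the line out is easier to undo than deleting it. A minimal sketch, operating on a throwaway copy of the drop-in file rather than the real /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the two Environment lines are illustrative, not the full file):

```shell
# Hedged sketch: comment out KUBELET_NETWORK_ARGS instead of deleting it, so
# it can be restored later. A temp file stands in for the real drop-in at
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
EOF
# Prefix only the network-args line with '#'; leave the other settings alone.
sed -i 's/^Environment="KUBELET_NETWORK_ARGS=/#&/' "$conf"
grep KUBELET_NETWORK_ARGS "$conf"
```

On a real node you would follow this with systemctl daemon-reload and systemctl restart kubelet for the change to take effect.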

please don’t change anything. Just run this command. The error will be gone.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Try applying this plugin: kubectl apply --filename https://git.io/weave-kube-1.6 It works for me.

Hello K8s folks!

I’ve had the same problem many times. For example, something went wrong during my K8s initialization and I had to use kubeadm reset and initialize K8s again. After running the initialization command I got this error in the kubelet log:

Jun 01 10:13:40 vncub0626 kubelet[18861]: I0601 10:13:40.665823   18861 kubelet.go:2102] Container runtime status: Runtime Conditions: RuntimeReady=true reason: message:, NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Jun 01 10:13:40 vncub0626 kubelet[18861]: E0601 10:13:40.665874   18861 kubelet.go:2105] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

… I was going mad over this error message; nothing helped. So I told myself: the first initialization ran but the re-initialization didn’t, so it couldn’t be caused by the KUBELET_NETWORK_ARGS line in the kubelet configuration, and I don’t agree with commenting it out. So I read the kubelet log again and again… and finally I noticed this error message further up in the log:

Jun 01 10:13:29 vncub0626 kubelet[18861]: E0601 10:13:29.376339   18861 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://10.96.22.11:6443/api/v1/services?limit=500&resourceVersion=0: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

This error was caused by a stale ~/.kube/config file left in my home directory from the previous initialization. After removing it I ran the initialization again… and voilà… it finished successfully. :]

… I hope this helps someone else, because this error is a nightmare and it’s almost impossible to determine its cause.

I have this issue on CentOS 7.5 with k8s 1.12

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine

Repeating for visibility:

Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker “bridge” networking.

This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.

Removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf works for me. Thanks @PLoic

The --network-plugin=cni flag in your kubelet start conf is what triggers this. On my system:

  1. vim /etc/systemd/system/kubelet.service
  2. delete --network-plugin=cni
  3. restart kubelet (systemctl daemon-reload; systemctl restart kubelet)

Please do these 3 steps; your installation may differ from mine, but do it this way.

In my case I had this issue when initializing the Kubernetes master. After deleting all data in etcd, the init process was successful. On all etcd nodes:

systemctl stop etcd
rm -rf /var/lib/etcd/*
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd

flanneld needs a fix for k8s 1.12. Use this PR (until it is approved): kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml It’s a known issue: coreos/flannel#1044

Additional:
I think this problem is caused by kubeadm initializing CoreDNS before flannel is installed, so it throws “network plugin is not ready: cni config uninitialized”.
Solution:

  1. Install flannel by kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
  2. Reset the coredns pod
    kubectl -n kube-system delete pod coredns-xx-xx
  3. Then run kubectl get pods to see if it works.
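To check step 3 without eyeballing the whole listing, you can filter for pods that are not yet Running. A minimal sketch, run against inlined sample output rather than a live cluster (the pod names and ages are made up):

```shell
# Hedged sketch: extract the names of pods whose STATUS column is not
# "Running". Sample `kubectl get pods -n kube-system` output is inlined so
# this runs without a cluster; on a real node you would pipe the live output.
sample='NAME                       READY   STATUS              RESTARTS   AGE
coredns-86c58d9df4-x6m9w   0/1     ContainerCreating   0          1m
kube-flannel-ds-abc12      1/1     Running             0          1m
kube-proxy-d54fk           1/1     Running             0          2m'
# Skip the header row (NR>1) and print column 1 where column 3 != Running.
not_running=$(echo "$sample" | awk 'NR>1 && $3 != "Running" {print $1}')
echo "$not_running"
```

An empty result means everything in the namespace has reached Running.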

If you see the error “cni0 already has an IP address different from 10.244.1.1/24”, follow this:

ifconfig  cni0 down
brctl delbr cni0
ip link delete flannel.1

If you see the error “Back-off restarting failed container”, you can get the log with:

root@master:/home/moonx/yaml# kubectl logs coredns-86c58d9df4-x6m9w -n=kube-system
.:53
2019-01-22T08:19:38.255Z [INFO] CoreDNS-1.2.6
2019-01-22T08:19:38.255Z [INFO] linux/amd64, go1.11.2, 756749c
CoreDNS-1.2.6
linux/amd64, go1.11.2, 756749c
 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
 [FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1599094102175870692.6819166615156126341.".

Then check the file /etc/resolv.conf on the failed node; if the nameserver is localhost there will be a forwarding loop. Change it to:

#nameserver 127.0.1.1
nameserver 8.8.8.8
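The loopback check can be scripted. A minimal sketch, using a generated sample file in place of the node’s real /etc/resolv.conf:

```shell
# Hedged sketch: detect a loopback nameserver (127.x.x.x) in a resolv.conf,
# the condition that trips CoreDNS's forwarding-loop protection. A sample
# file is created locally; on a real node you would check /etc/resolv.conf.
resolv=$(mktemp)
printf 'nameserver 127.0.1.1\n' > "$resolv"
if grep -Eq '^nameserver[[:space:]]+127\.' "$resolv"; then
  verdict="loopback"   # CoreDNS would crash-loop with this config
else
  verdict="ok"
fi
echo "$verdict"
```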

Running the following command worked well:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

kubeadm version v1.10.3

Hello,

For information, I had this issue because Kubernetes introduced RBAC in v1.6; we need to create the corresponding ServiceAccount, RBAC rules, and flannel DaemonSet so that kubelet can communicate with the API server correctly.

You have to run:

$ kubectl create -f https://raw.githubusercontent.com/coreos/flannel/v0.9.0/Documentation/kube-flannel.yml

I hope it helps.

I had the same problem of node status NotReady with my Kubernetes node; below is what worked for me.

OS Version: Ubuntu Server 20.10. I commented out KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2" in /var/lib/kubelet/kubeadm-flags.env and that works fine.

@bilalx20 I can confirm that; flannel is broken for me too in 1.12. What you can do is try Weave or Calico, they work.

OS (e.g. from /etc/os-release): CentOS 7

If you see the /etc/cni/net.d directory on the node empty despite the fact that the Pod Network Provider pod is running on it, try setenforce 0 and delete the Pod Network Provider pod. k8s will restart it and, hopefully, it will now be able to copy its config.
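The empty-directory symptom is easy to test for mechanically. A minimal sketch, using a temporary directory to stand in for the real /etc/cni/net.d:

```shell
# Hedged sketch: report whether a CNI conf directory contains any
# .conf/.conflist files, which is what kubelet looks for. A temp dir stands
# in for /etc/cni/net.d so this runs anywhere.
cni_dir=$(mktemp -d)
count=$(find "$cni_dir" -maxdepth 1 \( -name '*.conf' -o -name '*.conflist' \) | wc -l)
if [ "$count" -eq 0 ]; then
  status="empty"       # kubelet will keep reporting "cni config uninitialized"
else
  status="configured"
fi
echo "$status"
```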

@PLoic at step 3 which pod network did you install? There are various choices, and troubleshooting after that depends on the specific case.

@mdzddl you deleted --network-plugin=cni because kubelet complains about CNI? Not so clever. Deleting the default network plugin is not recommended at all.

I’m using VMware to create a Kubernetes cluster and got the same error; after shutting down and restarting the VM it works!

please don’t change anything. Just run this command. The error will be gone.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

thanks!

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env also works on CentOS 7.6 with k8s 1.16.8

I have this issue on Ubuntu 16.04 with k8s 1.16 (I run ubuntu on vagrant)

Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine

flannel has not updated its manifest to comply with the latest changes in k8s 1.16. Try a different CNI plugin, like Calico or Weave Net.

Nobody has mentioned SELinux yet. I got this error when running kubeadm join on a CentOS 7 machine with SELinux in Enforcing mode. Running setenforce 0 and rerunning kubeadm fixed my problem.
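Note that setenforce 0 only lasts until the next reboot; the persistent setting lives in /etc/selinux/config. A minimal sketch of that edit, applied to a local copy of the file (the two-line file content is illustrative):

```shell
# Hedged sketch: switch SELINUX=enforcing to permissive in an selinux config
# file. Done on a local copy here; the real file is /etc/selinux/config, and
# the runtime mode still needs `setenforce 0` until the next reboot.
cfg=$(mktemp)
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$cfg"
grep '^SELINUX=' "$cfg"
```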

I am seeing the exact same error with kubeadm, where it is stuck forever at:

[apiclient] Created API client, waiting for the control plane to become ready

In journalctl -r -u kubelet I see these lines over and over:

Aug 31 16:34:41 k8smaster1 kubelet[8876]: E0831 16:34:41.499982 8876 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 31 16:34:41 k8smaster1 kubelet[8876]: W0831 16:34:41.499746 8876 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d

Version details are:

kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

kubectl version: Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

OS details are: Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo), Kernel: Linux 3.10.0-514.el7.x86_64, Architecture: x86-64. Any help is very much appreciated!

Install on all nodes:

sudo mkdir -p /opt/cni/bin
cd /opt/cni/bin
sudo curl -L --insecure -O https://github.com/containernetworking/plugins/releases/download/v0.8.6/cni-plugins-linux-amd64-v0.8.6.tgz
sudo gunzip cni-plugins-linux-amd64-v0.8.6.tgz
sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tar
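After extracting, it can help to verify that the plugins flannel delegates to (bridge, host-local, portmap, plus loopback) actually landed in the bin directory. A minimal sketch, using a temp dir seeded with stub files in place of the real /opt/cni/bin:

```shell
# Hedged sketch: check that a CNI bin directory contains the plugin binaries
# a flannel setup typically needs. A temp dir with stub files stands in for
# /opt/cni/bin so this runs anywhere.
bin_dir=$(mktemp -d)
touch "$bin_dir/bridge" "$bin_dir/host-local" "$bin_dir/portmap" "$bin_dir/loopback"
missing=""
for p in bridge host-local portmap loopback; do
  [ -f "$bin_dir/$p" ] || missing="$missing $p"
done
echo "missing:${missing:-none}"
```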

I copied /opt/cni/bin/flannel from a good node to the bad node’s folder, but the node status is still NotReady. The describe node output is below:

Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
  ----             ------    -----------------                 ------------------                ------              -------
  MemoryPressure   Unknown   Wed, 22 Mar 2023 20:16:52 +0800   Wed, 22 Mar 2023 20:18:47 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  DiskPressure     Unknown   Wed, 22 Mar 2023 20:16:52 +0800   Wed, 22 Mar 2023 20:18:47 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  PIDPressure      Unknown   Wed, 22 Mar 2023 20:16:52 +0800   Wed, 22 Mar 2023 20:18:47 +0800   NodeStatusUnknown   Kubelet stopped posting node status.
  Ready            Unknown   Wed, 22 Mar 2023 20:16:52 +0800   Wed, 22 Mar 2023 20:18:47 +0800   NodeStatusUnknown   Kubelet stopped posting node status.

@mdzddl you deleted --network-plugin=cni because kubelet complains about CNI? Not so clever. Deleting the default network plugin is not recommended at all.

Then what is the solution?

flanneld needs a fix for k8s 1.12. Use this PR (until it is approved): kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml It’s a known issue: coreos/flannel#1044

Works for me!

I have this issue on my Ubuntu 16.04 with k8s 1.12.

Downgrading to 1.11.0 got everything up and running.

Hi,

If you comment out $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and restart the service/server, or if you kubeadm reset and join again / kubeadm init to recreate the cluster and join the nodes again,

pods will be in Running state, but if you describe the kube-dns pod you will see:

Warning Unhealthy 1m (x4 over 2m) kubelet, master Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused

The complete output is below.

Events:
  Type     Reason                 Age              From                Message
  ----     ------                 ----             ----                -------
  Normal   Scheduled              10m              default-scheduler   Successfully assigned kube-dns-6f4fd4bdf-qxmzn to master
  Normal   SuccessfulMountVolume  10m              kubelet, master     MountVolume.SetUp succeeded for volume “kube-dns-token-47fpd”
  Normal   SuccessfulMountVolume  10m              kubelet, master     MountVolume.SetUp succeeded for volume “kube-dns-config”
  Normal   Pulling                10m              kubelet, master     pulling image “gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7”
  Normal   Pulled                 10m              kubelet, master     Successfully pulled image “gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7”
  Normal   Created                10m              kubelet, master     Created container
  Normal   Started                10m              kubelet, master     Started container
  Normal   Pulling                10m              kubelet, master     pulling image “gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7”
  Normal   Pulled                 10m              kubelet, master     Successfully pulled image “gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7”
  Normal   Created                10m              kubelet, master     Created container
  Normal   Started                10m              kubelet, master     Started container
  Normal   Pulling                10m              kubelet, master     pulling image “gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7”
  Normal   Pulled                 10m              kubelet, master     Successfully pulled image “gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7”
  Normal   Created                10m              kubelet, master     Created container
  Normal   Started                10m              kubelet, master     Started container
  Normal   SuccessfulMountVolume  2m               kubelet, master     MountVolume.SetUp succeeded for volume “kube-dns-token-47fpd”
  Normal   SuccessfulMountVolume  2m               kubelet, master     MountVolume.SetUp succeeded for volume “kube-dns-config”
  Normal   SandboxChanged         2m               kubelet, master     Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled                 2m               kubelet, master     Container image “gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7” already present on machine
  Normal   Created                2m               kubelet, master     Created container
  Normal   Started                2m               kubelet, master     Started container
  Normal   Pulled                 2m               kubelet, master     Container image “gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7” already present on machine
  Normal   Created                2m               kubelet, master     Created container
  Normal   Started                2m               kubelet, master     Started container
  Normal   Pulled                 2m               kubelet, master     Container image “gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7” already present on machine
  Normal   Created                2m               kubelet, master     Created container
  Normal   Started                2m               kubelet, master     Started container
  Warning  Unhealthy              1m (x4 over 2m)  kubelet, master     Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused

docker@master:~$ kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
etcd-master                      1/1     Running   1          14m
kube-apiserver-master            1/1     Running   1          14m
kube-controller-manager-master   1/1     Running   1          14m
kube-dns-6f4fd4bdf-qxmzn         3/3     Running   3          15m
kube-proxy-d54fk                 1/1     Running   1          15m
kube-scheduler-master            1/1     Running   1          14m