kubernetes: network plugin is not ready: cni config uninitialized
Hello, I want to do a fresh install of Kubernetes via kubeadm, but when I start the install it gets stuck at:
[apiclient] Created API client, waiting for the control plane to become ready
When I do a journalctl -xe I see:
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
And I don’t know why I get this error. I also tried disabling firewalld, but it had no effect.
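For reference, the same kubelet messages can be followed directly from the kubelet unit (assuming systemd, as on CentOS 7):
journalctl -u kubelet -f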
Environment:
- Kubernetes version (use kubectl version): v1.7.0
- Cloud provider or hardware configuration:
- OS (e.g. from /etc/os-release): CentOS 7
- Kernel (e.g. uname -a): 3.10.0-514.26.2.el7.x86_64
- Install tools: kubeadm
- Others:
  - docker version: Docker version 17.06.0-ce, build 02c1d87
  - RPM versions: kubeadm-1.7.0, kubectl-1.7.0, kubelet-1.7.0, kubernetes-cni-0.5.1
Thanks for your help
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 5
- Comments: 83 (17 by maintainers)
Commits related to this issue
- Update Vagrantfile see: https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-314994545 — committed to sysarcher/k8s-resources by deleted user 6 years ago
It seems to work after removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
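For anyone attempting this, a minimal sketch of that edit (the exact contents of the KUBELET_NETWORK_ARGS line vary by kubeadm version; the line shown is an assumption based on the 1.7-era drop-in):
# in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, comment out the network args line:
# Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
# then reload systemd and restart kubelet so the change takes effect:
systemctl daemon-reload
systemctl restart kubelet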
flanneld needs a fix for k8s 1.12. Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
It’s a known issue: https://github.com/coreos/flannel/issues/1044
Removing $KUBELET_NETWORK_ARGS did not work for me.
Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker “bridge” networking. This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.
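A quick way to check whether kubelet actually has a CNI config to find (an empty directory here is exactly what the “cni config uninitialized” / “No networks found in /etc/cni/net.d” messages refer to):
ls /etc/cni/net.d
# after a network add-on such as flannel or weave is applied, a *.conf or *.conflist file should appear here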
please don’t change anything. Just run this command. The error will be gone.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Try applying this plugin: kubectl apply --filename https://git.io/weave-kube-1.6. It works for me.
Hello K8s folks!
I have had the same problem many times. For example, something went wrong during my K8s initialization and I had to use kubeadm reset and initialize K8s again. After running the initialization command I got this error in the kubelet log: … Nothing helped, and the error message was driving me mad. So I told myself: the first initialization ran but the re-initialization did not, so it cannot be caused by the KUBELET_NETWORK_ARGS line in the kubelet configuration, and I don’t agree with commenting it out. I read the kubelet log again and again… and finally I noticed another error message. That error was caused by a bad ~/.kube/config file left in the home directory by the previous initialization. After removing it I ran the initialization again… and voilà, it finished successfully. :] I hope this helps someone else, because this error is a nightmare and it is almost impossible to determine its cause.
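A sketch of the full sequence that matches this fix (the admin.conf copy is the standard step kubeadm prints after init; adjust flags and paths to your own setup):
kubeadm reset
rm -f $HOME/.kube/config        # remove the stale config from the previous initialization
kubeadm init
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config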
I have this issue on CentOS 7.5 with k8s 1.12
Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine. Repeating for visibility:
Note that KUBELET_NETWORK_ARGS is what tells kubelet which kind of network plugin to expect. If you remove it then kubelet expects no plugin, and therefore you get whatever the underlying container runtime gives you: typically Docker “bridge” networking. This is fine in some cases, particularly if you only have one machine. It is not helpful if you actually want to use CNI.
Removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf works for me. Thanks @PLoic
--network-plugin=cni is added in your kubelet start conf. On my system:
1. vim /etc/systemd/system/kubelet.service
2. delete --network-plugin=cni
3. restart kubelet (systemctl daemon-reload; systemctl restart kubelet)
Please do these 3 steps. Your installation may be different from mine, but please try it this way.
In my case I had this issue while initializing the Kubernetes master. After deleting all data in etcd the init process was successful. On all etcd nodes:
systemctl stop etcd
rm -rf /var/lib/etcd/*
systemctl daemon-reload && systemctl enable etcd && systemctl start etcd
Additional:
I think this problem is caused by kubeadm initializing coredns first but not flannel, so it throws “network plugin is not ready: cni config uninitialized”.
Solution:
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
kubectl -n kube-system delete pod coredns-xx-xx
kubectl get pods
to see if it works. If you see the error “cni0 already has an IP address different from 10.244.1.1/24”, follow this:
If you see the error “Back-off restarting failed container”, you can get the log by
Then check the file /etc/resolv.conf on the failed node; if the nameserver is localhost there will be a loopback. Change to:
Running the following command works well.
kubeadm version: v1.10.3
Hello,
For information, I had this issue: Kubernetes introduced RBAC in v1.6, so we need to create the corresponding ServiceAccount, RBAC rules and flannel DaemonSet so that kubelet can communicate with the API server correctly.
You have to run:
I hope it helps.
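The exact command is missing above; presumably it was applying a flannel manifest that contains the ServiceAccount, RBAC rules and DaemonSet, for example the one already linked in this thread:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml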
I had the same problem of node status NotReady with my Kubernetes node; below is what worked for me.
OS Version: Ubuntu Server 20.10. I commented out the line KUBELET_KUBEADM_ARGS=“--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2”. Removing it from /var/lib/kubelet/kubeadm-flags.env works fine.
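For illustration, a sketch of that edit (the exact arguments in kubeadm-flags.env depend on the kubeadm version; the values below are taken from the comment above):
# /var/lib/kubelet/kubeadm-flags.env
# before:
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2"
# after (--network-plugin=cni removed):
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=k8s.gcr.io/pause:3.2"
# then restart kubelet:
systemctl restart kubelet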
@bilalx20 I can confirm that: flannel is broken for me too in 1.12. What you can do is try Weave or Calico; they work.
If you see the /etc/cni/net.d directory on the node empty despite the fact that the Pod Network Provider pod is running on it, try setenforce 0 and delete the Pod Network Provider pod. k8s will restart it and, hopefully, now it will be able to copy its config.
This uses https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml, which should include both the cniVersion and apps/v1 fixes.
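A sketch of those two steps, assuming flannel is the Pod Network Provider (the app=flannel label selector is an assumption; adjust it to your add-on):
setenforce 0
kubectl -n kube-system delete pod -l app=flannel
# the DaemonSet recreates the pod, which should now be able to write its config into /etc/cni/net.d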
@PLoic at step 3 which pod network did you install? There are various choices, and troubleshooting after that depends on the specific case.
@mdzddl you deleted --network-plugin=cni because kubelet complains about cni? Not so clever. Deleting the default network plugin is not recommended at all.
I’m using VMware to create a Kubernetes cluster and got the same error. After shutting down and restarting the VM it works!
thanks!
Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env also works on CentOS 7.6 with k8s 1.16.8
I have this issue on Ubuntu 16.04 with k8s 1.16 (I run Ubuntu on Vagrant).
Removing the cni plugin conf from /var/lib/kubelet/kubeadm-flags.env works fine
flannel has not updated its manifest to comply with the latest changes in k8s 1.16. Try a different CNI plugin, like Calico or Weave Net.
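For example, the Weave command already posted earlier in this thread (or the equivalent Calico manifest for your version) can be applied instead of flannel:
kubectl apply --filename https://git.io/weave-kube-1.6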
@jdanekrh https://github.com/kubernetes/kubernetes/issues/48798#issuecomment-359193792
Nobody has mentioned SELinux yet. I got this error when running kubeadm join on a CentOS 7 machine with SELinux in Enforcing mode. Setting setenforce 0 and rerunning kubeadm fixed my problem.
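To make that change survive a reboot as well (setenforce 0 only lasts until the next boot), the usual pair of commands is:
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config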
I am seeing the exact same error with kubeadm, where it is stuck forever at:
[apiclient] Created API client, waiting for the control plane to become ready
In the “journalctl -r -u kubelet” I see these lines over and over:
Aug 31 16:34:41 k8smaster1 kubelet[8876]: E0831 16:34:41.499982 8876 kubelet.go:2136] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 31 16:34:41 k8smaster1 kubelet[8876]: W0831 16:34:41.499746 8876 cni.go:189] Unable to update cni config: No networks found in /etc/cni/net.d
Version details are:
kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:30:51Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl version: Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
OS details are:
Operating System: Red Hat Enterprise Linux Server 7.3 (Maipo)
Kernel: Linux 3.10.0-514.el7.x86_64
Architecture: x86-64
Any help is very much appreciated!
I copied /opt/cni/bin/flannel from a good node to the bad node's folder, but the node status is still NotReady. The describe node log is below:
Then what is the solution?
Works for me!
I have this issue on my Ubuntu 16.04 with k8s 1.12.
I downgraded to 1.11.0 and everything is up and running.
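A sketch of that downgrade on Ubuntu using the apt packages (the -00 package revision is an assumption; check apt-cache madison kubeadm for the versions actually available):
apt-get install -y --allow-downgrades kubelet=1.11.0-00 kubeadm=1.11.0-00 kubectl=1.11.0-00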
Hi,
If you comment out $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and restart the service/server, or if you kubeadm reset and join again / kubeadm init to recreate the cluster and join the nodes again,
the pods will be in Running state, but if you describe the kube-dns pod you will see:
Warning Unhealthy 1m (x4 over 2m) kubelet, master Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused
The complete output is below.
Events:
Type     Reason                 Age               From               Message
Normal   Scheduled              10m               default-scheduler  Successfully assigned kube-dns-6f4fd4bdf-qxmzn to master
Normal   SuccessfulMountVolume  10m               kubelet, master    MountVolume.SetUp succeeded for volume "kube-dns-token-47fpd"
Normal   SuccessfulMountVolume  10m               kubelet, master    MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal   Pulling                10m               kubelet, master    pulling image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
Normal   Pulled                 10m               kubelet, master    Successfully pulled image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7"
Normal   Created                10m               kubelet, master    Created container
Normal   Started                10m               kubelet, master    Started container
Normal   Pulling                10m               kubelet, master    pulling image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
Normal   Pulled                 10m               kubelet, master    Successfully pulled image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7"
Normal   Created                10m               kubelet, master    Created container
Normal   Started                10m               kubelet, master    Started container
Normal   Pulling                10m               kubelet, master    pulling image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
Normal   Pulled                 10m               kubelet, master    Successfully pulled image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7"
Normal   Created                10m               kubelet, master    Created container
Normal   Started                10m               kubelet, master    Started container
Normal   SuccessfulMountVolume  2m                kubelet, master    MountVolume.SetUp succeeded for volume "kube-dns-token-47fpd"
Normal   SuccessfulMountVolume  2m                kubelet, master    MountVolume.SetUp succeeded for volume "kube-dns-config"
Normal   SandboxChanged         2m                kubelet, master    Pod sandbox changed, it will be killed and re-created.
Normal   Pulled                 2m                kubelet, master    Container image "gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.7" already present on machine
Normal   Created                2m                kubelet, master    Created container
Normal   Started                2m                kubelet, master    Started container
Normal   Created                2m                kubelet, master    Created container
Normal   Pulled                 2m                kubelet, master    Container image "gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.7" already present on machine
Normal   Started                2m                kubelet, master    Started container
Normal   Pulled                 2m                kubelet, master    Container image "gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.7" already present on machine
Normal   Created                2m                kubelet, master    Created container
Normal   Started                2m                kubelet, master    Started container
Warning  Unhealthy              1m (x4 over 2m)   kubelet, master    Readiness probe failed: Get http://172.17.0.2:8081/readiness: dial tcp 172.17.0.2:8081: getsockopt: connection refused
docker@master:~$ kubectl get pods --namespace=kube-system
NAME                             READY   STATUS    RESTARTS   AGE
etcd-master                      1/1     Running   1          14m
kube-apiserver-master            1/1     Running   1          14m
kube-controller-manager-master   1/1     Running   1          14m
kube-dns-6f4fd4bdf-qxmzn         3/3     Running   3          15m
kube-proxy-d54fk                 1/1     Running   1          15m
kube-scheduler-master            1/1     Running   1          14m
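For reference, the events shown above come from describing the kube-dns pod listed in that output:
kubectl --namespace=kube-system describe pod kube-dns-6f4fd4bdf-qxmzn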