rancher: Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I deployed a Rancher v2.0.0 cluster on a private network, and none of the nodes are available due to the following error: Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

I tried all of the available network providers (Canal, Calico, Flannel) without success.

Could this be related to self-signed certificates? If so, what is the proper way to install them? The instructions for setting up SSL from this link did not work: https://medium.com/@superseb/ssl-tls-options-for-rancher-2-0-dca483a7070d


Useful Info
  • Versions: Rancher v2.0.0, UI v2.0.41
  • Access: local admin
  • Route: authenticated.cluster.index

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 18 (1 by maintainers)

Most upvoted comments

same issue.

E1025 09:02:22.614850   11254 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
W1025 09:02:27.617176   11254 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d
E1025 09:02:27.617475   11254 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
W1025 09:02:32.619714   11254 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d

and this is my solution.

mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-flannel.conflist <<EOF
{
    "name": "cbr0",
    "plugins": [
        {
            "type": "flannel",
            "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
            }
        },
        {
            "type": "portmap",
            "capabilities": {
                "portMappings": true
            }
        }
    ]
}
EOF
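Before restarting the kubelet, it is worth sanity-checking that the file is valid JSON, since a stray comma or unterminated brace would leave the kubelet stuck in NetworkPluginNotReady again. The sketch below (an addition, assuming python3 is on the host) writes the same conflist to a scratch file so it runs without root; on the real node you would validate /etc/cni/net.d/10-flannel.conflist directly:

```shell
# Write the conflist to a temp stand-in for /etc/cni/net.d/10-flannel.conflist
# and validate it with python3's bundled JSON parser.
conf=$(mktemp)
cat > "$conf" <<'EOF'
{
    "name": "cbr0",
    "plugins": [
        {
            "type": "flannel",
            "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
            }
        },
        {
            "type": "portmap",
            "capabilities": {
                "portMappings": true
            }
        }
    ]
}
EOF
# json.tool exits non-zero on malformed JSON, so the echo only fires on success.
python3 -m json.tool "$conf" > /dev/null && echo "10-flannel.conflist: valid JSON"
```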

Hello,

This problem also occurs when deleting a node from a cluster and then restarting it. Running the cleanup script and rebooting the host does not fix the problem. Does anyone have a similar problem?

To anyone out there finding this issue for whom the solution of creating the 10-flannel.conflist file doesn’t work: I was able to fix it right away by installing the Calico CNI, just by running

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

After a few seconds, all nodes were in the Ready state.

Usually this is a network problem. How did you deploy your Rancher cluster? Instructions for setting it up are here: https://rancher.com/docs/rancher/v2.x/en/installation/single-node-install/#choose-how-you-want-to-use-ssl

Be sure to clean up nodes when re-using them in a setup: https://gist.github.com/superseb/2cf186726807a012af59a027cb41270d

If you have done all of that, usually the pod status shows what’s going on:

kubectl get pods --all-namespaces
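As an illustration of what to look for in that listing, you can filter out healthy pods so only the failing ones remain. The sample output below is fabricated for the demo (so the snippet runs without a cluster); on a real node you would pipe `kubectl get pods --all-namespaces` straight into the grep:

```shell
# Keep only pods whose STATUS is not Running/Completed. The heredoc stands in
# for real `kubectl get pods --all-namespaces` output; tail drops the header.
grep -Ev 'Running|Completed' <<'EOF' | tail -n +2
NAMESPACE     NAME              READY   STATUS             RESTARTS   AGE
kube-system   canal-x2k9p       2/3     CrashLoopBackOff   7          10m
kube-system   coredns-abc123    1/1     Running            0          10m
EOF
```

A CNI pod stuck in CrashLoopBackOff or Pending here usually points at the network plugin rather than the kubelet itself.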

Wow, @laoshancun’s solution fixed the issue instantly. Is this something that should be PR’d into a release?

@spstratis Only on the nodes that have the same trouble.

Removing the $KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf works for me.
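For reference, that edit can be scripted. The ExecStart line below is a typical kubeadm drop-in (an assumption; check your own file first), and the change is shown against a scratch copy so the sketch runs without root. On the real host you would edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, then run `systemctl daemon-reload && systemctl restart kubelet`:

```shell
# Scratch copy of the kubeadm drop-in; the ExecStart line is illustrative
# and may differ from what your kubeadm version generated.
conf=$(mktemp)
cat > "$conf" <<'EOF'
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_NETWORK_ARGS $KUBELET_EXTRA_ARGS
EOF
# Strip the variable so the kubelet no longer insists on a CNI config at startup.
sed -i 's/ \$KUBELET_NETWORK_ARGS//' "$conf"
grep -q 'KUBELET_NETWORK_ARGS' "$conf" || echo "KUBELET_NETWORK_ARGS removed"
```

Note this sidesteps the CNI requirement rather than fixing it, so pod networking still needs a working plugin afterwards.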

To anyone out there running rancher 2.4.8 on CentOS 7 in combination with calico. I had to create a /etc/resolv-kubernetes/resolv.conf with my nameservers in it.
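For anyone trying that same workaround, a minimal sketch follows. It writes to a scratch path so it runs without root (the comment above uses /etc/resolv-kubernetes/resolv.conf), and the 8.8.8.8/8.8.4.4 resolvers are placeholders for your site's own nameservers:

```shell
# Minimal resolv.conf; replace the Google resolvers with your own DNS servers.
# Written to a temp file here; on the real host use /etc/resolv-kubernetes/resolv.conf.
resolv=$(mktemp)
cat > "$resolv" <<'EOF'
nameserver 8.8.8.8
nameserver 8.8.4.4
EOF
grep -c '^nameserver' "$resolv"
```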