kubernetes: Unable to update cni config: No networks found in /etc/cni/net.d
OS version: Debian Jessie
Docker version: 1.12.6
Kubernetes version: 1.8.2
When I run kubeadm init, rsyslog reports:
Unable to update cni config: No networks found in /etc/cni/net.d
and the process hangs:
# kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.8.2
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [uy05-13 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 122.14.206.195]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by that:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection; so the kubelet can't pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.8.2
- gcr.io/google_containers/kube-controller-manager-amd64:v1.8.2
- gcr.io/google_containers/kube-scheduler-amd64:v1.8.2
You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
# journalctl -xeu kubelet
Nov 01 17:03:44 uy05-13 kubelet[6504]: W1101 05:03:44.705009 6504 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 01 17:03:44 uy05-13 kubelet[6504]: E1101 05:03:44.705250 6504 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message
Nov 01 17:03:45 uy05-13 kubelet[6504]: E1101 05:03:45.218024 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://1
Nov 01 17:03:45 uy05-13 kubelet[6504]: E1101 05:03:45.218767 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://122.1
Nov 01 17:03:45 uy05-13 kubelet[6504]: E1101 05:03:45.219872 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://122.14.2
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.218859 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://1
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.219898 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://122.1
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.221100 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://122.14.2
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.959754 6504 event.go:209] Unable to write event: 'Patch https://122.14.206.195:6443/api/v1/namespaces/default/events/uy05
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.959791 6504 event.go:144] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.Ob
Nov 01 17:03:46 uy05-13 kubelet[6504]: E1101 05:03:46.960729 6504 event.go:209] Unable to write event: 'Patch https://122.14.206.195:6443/api/v1/namespaces/default/events/uy05
Nov 01 17:03:47 uy05-13 kubelet[6504]: E1101 05:03:47.219635 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://1
Nov 01 17:03:47 uy05-13 kubelet[6504]: E1101 05:03:47.220689 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:413: Failed to list *v1.Service: Get https://122.1
Nov 01 17:03:47 uy05-13 kubelet[6504]: E1101 05:03:47.221879 6504 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://122.14.2
I installed kubernetes-cni from the deb package:
# dpkg -i kubernetes-cni_0.5.1-00_amd64.deb
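Worth noting: the kubernetes-cni package only ships the plugin binaries into /opt/cni/bin; nothing writes a config into /etc/cni/net.d until a pod network addon (Weave, flannel, Calico, ...) is applied, so the kubelet warning above is expected at this stage. A quick check of both locations:

ls /opt/cni/bin/      # plugin binaries from the kubernetes-cni package (bridge, loopback, portmap, ...)
ls /etc/cni/net.d/    # stays empty until a network addon writes its .conf/.conflist, hence the warning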
About this issue
- State: closed
- Created 7 years ago
- Reactions: 18
- Comments: 52 (6 by maintainers)
I'm a Kubernetes novice / don't really know what I'm doing, but the node becomes Ready on the master. No idea why. The kubelet service is running as root; I would not have thought that would make a difference. Using Weave Net.
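For reference, that readiness flip is easy to watch from the master; a minimal check:

kubectl get nodes -w   # the node should go from NotReady to Ready once the CNI config is in place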
This is due to proxy issues: the kubelet cannot connect to the kube-apiserver through your configured HTTP proxy. You can fix this either by
unset http_proxy https_proxy
or
export no_proxy=<your_kube_apiserver_ip>
There is also a blog post about this issue for Chinese readers: https://zhuanlan.zhihu.com/p/31398416
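A minimal sketch of that workaround, assuming the apiserver advertise address from the log above (122.14.206.195) and the default 10.96.0.0/12 service CIDR:

# either clear the proxy for this shell entirely...
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
# ...or exempt the apiserver and cluster-internal addresses from proxying
export no_proxy=127.0.0.1,localhost,122.14.206.195,10.96.0.0/12
# then retry
kubeadm reset && kubeadm init --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.8.2

Note that not every tool honors CIDR entries in no_proxy; listing the exact apiserver IP is the safe option.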
FYI, I ran into the same issue and the following worked:
# re-deploy the weave network (in my case)
export kubever=$(kubectl version | base64 | tr -d '\n')
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
...then...
systemctl restart docker && systemctl restart kubelet
I ran into exactly the same issue. OS: Ubuntu 16.04, kubeadm: 1.8.4, Docker: 17.05

In my case only 1 of 3 nodes was working. I solved the issue by creating a directory:
mkdir -p /etc/cni/net.d
and creating a file with these contents:
cat 10-flannel.conflist
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
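A one-shot version of that fix, assuming flannel and a systemd host (file name and contents as above):

mkdir -p /etc/cni/net.d
cat > /etc/cni/net.d/10-flannel.conflist <<'EOF'
{
  "name": "cbr0",
  "plugins": [
    { "type": "flannel", "delegate": { "hairpinMode": true, "isDefaultGateway": true } },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
systemctl restart kubelet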
Same issue on CentOS 7:
Dec 03 14:49:47 devops-ucdp.novalocal kubelet[20242]: W1203 14:49:47.446270 20242 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
Dec 03 14:49:47 devops-ucdp.novalocal kubelet[20242]: E1203 14:49:47.446454 20242 kubelet.go:2095] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 03 14:49:47 devops-ucdp.novalocal kubelet[20242]: E1203 14:49:47.473822 20242 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:422: Failed to list *v1.Node: Get https://192.168.61.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddevops-ucdp.novalocal&resourceVersion=0: dial tcp 192.168.61.11:6443: i/o timeout
Dec 03 14:49:47 devops-ucdp.novalocal kubelet[20242]: E1203 14:49:47.473876 20242 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.61.11:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddevops-ucdp.novalocal&resourceVersion=0: dial tcp 192.168.61.11:6443: i/o timeout
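The dial tcp 192.168.61.11:6443: i/o timeout lines here point at apiserver reachability rather than CNI; a quick probe (IP taken from the log above):

curl -k https://192.168.61.11:6443/healthz   # should return "ok" if the apiserver is up
ss -tlnp | grep 6443                         # on the master: confirm something is listening on 6443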
In my case the kubelet could not read one file in /etc/cni/net.d/. I just added read permission and it solved my problem.

Works for me, thanks!
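A sketch of that check and fix (the conflist name below is an example; inspect your own /etc/cni/net.d):

ls -l /etc/cni/net.d/
chmod 644 /etc/cni/net.d/10-flannel.conflist   # make the config world-readable
systemctl restart kubelet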
I had this problem when issuing kubeadm init after kubeadm reset (Weave as the network plugin). I haven't figured out why, but I could fix it by deleting the Weave images from Docker. Don't forget to restart docker and kubelet.
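A sketch of that cleanup (image names and tags vary by Weave version; check docker images first):

docker images | grep weave                                          # list leftover Weave images
docker rmi weaveworks/weave-kube:2.0.5 weaveworks/weave-npc:2.0.5   # example names/tags only
systemctl restart docker && systemctl restart kubelet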
I resolved the issue like this:

The only solution that worked for me on v1.18.2 with CentOS 7. I tried both Weave and flannel; both had the same problem. Thanks a million 😃
I got the same issue and fixed it by:
1. wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz
2. sudo tar -xzvf cni-plugins-amd64-v0.7.5.tgz --directory /opt/cni/bin/
3. sudo systemctl restart kubelet
Thanks, it works
This is my situation. When I run kubeadm init --apiserver-advertise-address=0.0.0.0 --kubernetes-version=1.10.0 --pod-network-cidr 10.244.0.0/16 I get an error here:

[root@k8s-master kube]# kubeadm init --apiserver-advertise-address=0.0.0.0 --kubernetes-version=1.10.0 --pod-network-cidr 10.244.0.0/16
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- Either there is no internet connection, or imagePullPolicy is set to "Never", so the kubelet cannot pull or find the following control plane images:
  - k8s.gcr.io/kube-apiserver-amd64:v1.10.0
  - k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
  - k8s.gcr.io/kube-scheduler-amd64:v1.10.0
  - k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster

I thought I did not have the images, so I docker pulled all of them:

[root@k8s-master images]# docker images
REPOSITORY                                  TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.10.0   bfc21aadc7d3   6 days ago      97 MB
k8s.gcr.io/kube-apiserver-amd64             v1.10.0   af20925d51a3   6 days ago      225 MB
k8s.gcr.io/kube-scheduler-amd64             v1.10.0   704ba848e69a   6 days ago      50.4 MB
k8s.gcr.io/kube-controller-manager-amd64    v1.10.0   ad86dbed1555   6 days ago      148 MB
k8s.gcr.io/etcd-amd64                       3.1.12    52920ad46f5b   3 weeks ago     193 MB
k8s.gcr.io/k8s-dns-sidecar-amd64            1.14.7    db76ee297b85   5 months ago    42 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64           1.14.7    5d049a8c4eec   5 months ago    50.3 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64      1.14.7    5feec37454f4   5 months ago    41 MB
k8s.gcr.io/pause-amd64                      3.0       99e59f495ffa   23 months ago   747 kB

I still got the error, so I ran systemctl status kubelet:

[root@k8s-master images]# systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2018-04-02 03:46:10 CST; 13min ago
     Docs: http://kubernetes.io/docs/
 Main PID: 12180 (kubelet)
   Memory: 33.2M
   CGroup: /system.slice/kubelet.service
           └─12180 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=systemd --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki

Apr 02 03:59:43 k8s-master kubelet[12180]: E0402 03:59:43.623741 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.1.6:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:44 k8s-master kubelet[12180]: E0402 03:59:44.622818 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
Apr 02 03:59:44 k8s-master kubelet[12180]: E0402 03:59:44.623521 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0: dial tcp 192.168.1.6:6443: getsockopt: connection refused
(the same three "Failed to list" lines repeat every second with "connection refused"; repeats trimmed)
journalctl -xeu kubelet:

[root@k8s-master ~]# journalctl -xeu kubelet
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.643604 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.644545 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:04 k8s-master kubelet[12180]: E0402 04:00:04.645568 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
(the same three "Failed to list" lines repeat every second; repeats trimmed)
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.290824 12180 event.go:209] Unable to write event: 'Patch https://192.168.1.6:6443/api/v1/namespaces/default/events/k8s-master.152
Apr 02 04:00:08 k8s-master kubelet[12180]: W0402 04:00:08.374383 12180 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 02 04:00:08 k8s-master kubelet[12180]: E0402 04:00:08.374645 12180 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker
Apr 02 04:00:10 k8s-master kubelet[12180]: I0402 04:00:10.192038 12180 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Apr 02 04:00:10 k8s-master kubelet[12180]: I0402 04:00:10.196478 12180 kubelet_node_status.go:82] Attempting to register node k8s-master
Apr 02 04:00:10 k8s-master kubelet[12180]: E0402 04:00:10.197073 12180 kubelet_node_status.go:106] Unable to register node "k8s-master" with API server: Post https://192.168.1.6:6443/api/
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.062758 12180 eviction_manager.go:246] eviction manager: failed to get get summary stats: failed to get node info: node "k8s-maste
Apr 02 04:00:13 k8s-master kubelet[12180]: W0402 04:00:13.376514 12180 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.376782 12180 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.652653 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://192.168.1.6:
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.653439 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://192.168.1.6:644
Apr 02 04:00:13 k8s-master kubelet[12180]: E0402 04:00:13.654428 12180 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://192.168.
What is the problem?
Same issue for me on Ubuntu 16.04, kubeadm 1.8.5, Docker 1.13. But along with this error, when I run journalctl -xeu kubelet I see: "error: failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps". I did run kubeadm init --skip-preflight-checks.
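For completeness, the swap error has its own two-line fix (kubeadm expects swap to be off by default); a sketch:

swapoff -a                              # disable swap immediately
sed -i '/ swap / s/^/#/' /etc/fstab     # comment out swap entries so it stays off after reboot (check the file afterwards)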
Hi all!
I had the same problem, but in my case one of my three nodes had the wrong clock compared with the other nodes. Try syncing your clocks across the servers (with NTP, for example), then clean up your nodes like this:
rke remove
remove these directories: /etc/kubernetes/ssl /var/lib/etcd /etc/cni /opt/cni /var/run/calico
docker system prune -af
docker image prune -f
reboot your nodes
and then, once the clock is correct on all servers, you can install again with rke!
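A minimal clock check on each node, assuming systemd hosts (chrony shown as one alternative):

timedatectl status        # look for "System clock synchronized: yes"
timedatectl set-ntp true  # enable NTP sync via systemd-timesyncd
# or, if chrony is installed:
# chronyc makestep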
I have the same issue: the API server wasn't starting. The ELB health checks failed while trying to connect to port 443.

I think it is because the networking overlay (Weave) wasn't starting properly.

In the logs of the kubelet systemd service I could see

And port 443 was not bound.

I changed the permissions of /etc/cni/net.d/ to 744 and restarted the service, and you can see that the API server now binds to 443.

Exactly the same was happening with the minion nodes. As soon as I changed the permissions and restarted the service, they came up as available in the cluster.
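Spelled out, that permissions fix is roughly (744 as in the comment above; since the kubelet runs as root, owner rwx plus world-read is enough here):

chmod 744 /etc/cni/net.d
ls -ld /etc/cni/net.d     # verify: drwxr--r--
systemctl restart kubelet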
I use this command
then
restart kubelet and it works.
That helped a lot while trying to build a Raspberry Pi cluster following https://gist.github.com/alexellis/fdbc90de7691a1b9edb545c17da2d975 - Thanks!
Hi @francoran, I meant restart the kubelet via systemctl.
I figured this error was not due to "Unable to update cni config" but rather due to not being able to run with swap enabled. I updated /etc/systemd/system/kubelet.service.d/10-kubeadm.conf to have Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false", then restarted the kubelet service and ran kubeadm init again.
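Spelled out, that change looks roughly like this (drop-in path as in the comment above; newer kubeadm also needs the matching preflight override):

# add under [Service] in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
#   Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
systemctl daemon-reload
systemctl restart kubelet
kubeadm init --ignore-preflight-errors=Swap   # kubeadm >= 1.9; older versions used --skip-preflight-checks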