kubernetes v1.14.0 problem: "It seems like the kubelet isn't running or healthy."
With Kubernetes v1.14.0, running kubeadm init produces the error below:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
What can I do to resolve this problem? When I run systemctl status kubelet, here is the information:
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Thu 2019-03-28 09:44:39 CST; 2s ago
Docs: https://kubernetes.io/docs/
Process: 14476 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 14476 (code=exited, status=255)
Mar 28 09:44:39 k8s-node systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Mar 28 09:44:39 k8s-node systemd[1]: Unit kubelet.service entered failed state.
Mar 28 09:44:39 k8s-node systemd[1]: kubelet.service failed.
[root@VM_0_12_centos ~]# /sig cluster-lifecycle
-bash: /sig: No such file or directory
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 10
- Comments: 25 (5 by maintainers)
Anyone who comes across this error later, here is the solution. At least I know it worked for me:
Create a file named "daemon.json" in the "/etc/docker" directory and add the following:
{ "exec-opts": ["native.cgroupdriver=systemd"] }
Restart your docker service: systemctl restart docker
Reset any previous kubeadm initialization: kubeadm reset
Initialize your cluster: kubeadm init …
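Spelled out, the /etc/docker/daemon.json from the steps above would contain the fragment below. Note the straight ASCII quotes; the curly quotes pasted in some comments are not valid JSON and Docker will refuse to start with them:

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```

After writing the file (as root), restart Docker and re-run kubeadm reset followed by kubeadm init, as the comment describes.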
Hello guys, I got the same error. I tried this solution and it worked for me:
Create a file called "daemon.json" inside the "/etc/docker/" directory and paste the following: { "exec-opts": ["native.cgroupdriver=systemd"] }
Then reload the daemon and restart both the docker and kubelet services.
Finally I tried to init again, pinning the version, like: sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=latest
Thank you
I got the same error. Did you solve it? If you have a solution, please share it.
Please check that the configuration of docker's native.cgroupdriver and the kubelet's are consistent. You can view the kubelet's configuration with
cat /var/lib/kubelet/kubeadm-flags.env and cat /etc/default/kubelet. Check the docker configuration with docker info, if you use docker.
Saved my whole day!
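A quick way to run the checks from this comment side by side is the sketch below. It uses the paths named above; either file may be absent depending on the distro, so missing files are tolerated rather than treated as errors:

```shell
# Show Docker's active cgroup driver (needs docker installed and running).
docker info 2>/dev/null | grep -i 'cgroup driver' || echo 'docker not available'

# Show any explicit cgroup-driver flag passed to the kubelet.
# -h: no filename prefix; -s: silently skip missing files.
grep -hs 'cgroup-driver' /var/lib/kubelet/kubeadm-flags.env /etc/default/kubelet \
  || echo 'no explicit cgroup-driver flag found'
```

If the two drivers differ (e.g. Docker reports cgroupfs while the kubelet is configured for systemd), the kubelet exits with status 255 exactly as in the systemctl output above.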
According to "kubeadm init shows kubelet isn't running or healthy":
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
then reboot the machine.
I also faced this issue after I broke my setup with minikube.deb.
I sorted it out when I reinstalled kubeadm, kubelet and kubectl from the apt repo …
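To illustrate what the sed command above does, here is a sketch run against a copy instead of the real /etc/fstab (the sample swap line is hypothetical; on an actual node you would run the sed as root on /etc/fstab itself):

```shell
# Demonstrate the swap-disabling sed from the comment above on a temp file,
# so /etc/fstab itself is untouched.
f=$(mktemp)
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' > "$f"
sed -i '/ swap / s/^/#/' "$f"   # comment out any line containing " swap "
cat "$f"   # -> #/dev/mapper/centos-swap swap swap defaults 0 0
```

Commenting out the fstab entry keeps swap disabled across reboots; swapoff -a alone only disables it until the next boot, and the kubelet refuses to start with swap enabled unless told otherwise.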