minikube: none: kubelet failed to start -> apiserver process never appeared

πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
🀹  Running on localhost (CPUs=2, Memory=11001MB, Disk=51192MB) ...
ℹ️   OS release is Ubuntu 18.04.3 LTS
🐳  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
    β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
πŸ’Ύ  Downloading kubelet v1.16.0
πŸ’Ύ  Downloading kubeadm v1.16.0
🚜  Pulling images ...
πŸš€  Launching Kubernetes ... 

πŸ’£  Error starting cluster: cmd failed: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap

: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
 output: [init] Using Kubernetes version: v1.16.0
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube": lookup minikube on 127.0.0.53:53: server misbehaving
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [minikube localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
	- 'docker ps -a | grep kube | grep -v pause'
	Once you have found the failing container, you can inspect its logs with:
	- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
: running command: sudo env PATH=/var/lib/minikube/binaries/v1.16.0:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap
.: exit status 1

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
root@qlikadmin-VirtualBox:~# kubectl cluster-info dump
The connection to the server 10.0.2.15:8443 was refused - did you specify the right host or port?
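
The kubeadm output above already points at the next diagnostic step: check whether the kubelet ever started, and whether any control-plane container came up and then crashed. A minimal sketch of that check on the host, using only the commands kubeadm itself suggests (CONTAINERID is a placeholder for whatever 'docker ps -a' actually shows):

    systemctl status kubelet
    journalctl -xeu kubelet | tail -n 50
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID

With the none driver the control plane runs directly on the host's Docker, so if 'docker ps -a | grep kube-apiserver' shows nothing, that matches the "apiserver process never appeared" error seen later.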

Then I tried again, attempting to downgrade the Kubernetes version:

root@qlikadmin-VirtualBox:~# sudo minikube start --vm-driver=none --kubernetes-version=v1.15.00
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Unable to parse "v1.15.00": Patch number must not contain leading zeroes "00"
root@qlikadmin-VirtualBox:~# minikube start --vm-driver=none --kubernetes-version=v1.15.0
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Error: You have selected Kubernetes v1.15.0, but the existing cluster for your profile is running Kubernetes v1.16.0. Non-destructive downgrades are not supported, but you can proceed by performing one of the following options:

* Recreate the cluster using Kubernetes v1.15.0: Run "minikube delete ", then "minikube start  --kubernetes-version=1.15.0"
* Create a second cluster with Kubernetes v1.15.0: Run "minikube start -p <new name> --kubernetes-version=1.15.0"
* Reuse the existing cluster with Kubernetes v1.16.0 or newer: Run "minikube start  --kubernetes-version=1.16.0"
root@qlikadmin-VirtualBox:~# sudo minikube start --vm-driver=none --kubernetes-version=v1.15
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’£  Unable to parse "v1.15": No Major.Minor.Patch elements found
root@qlikadmin-VirtualBox:~# minikube start --vm-driver=none
πŸ˜„  minikube v1.4.0 on Ubuntu 18.04 (vbox/amd64)
πŸ’‘  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
πŸ”„  Starting existing none VM for "minikube" ...
βŒ›  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.0 on Docker 18.09.9 ...
    β–ͺ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
πŸ”„  Relaunching Kubernetes using kubeadm ... 

πŸ’£  Error restarting cluster: waiting for apiserver: apiserver process never appeared

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 26 (7 by maintainers)

Most upvoted comments

I ran minikube delete and then minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube v switch" and things started just fine.

Deleting the cluster and recreating it worked for me. (minikube delete)

Running the command minikube delete resolved the problem for me!
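
The first fix above was reported against the Hyper-V driver; for the none driver used in the original report, the equivalent would presumably be to wipe the local cluster state and start over with the same driver flag (a sketch based on the commands already shown in this issue, not a verified fix for this exact machine):

    sudo minikube delete
    sudo minikube start --vm-driver=none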