minikube: minikube not starting on Windows 10 with Hyper-V

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug Report

Please provide the following details:

Environment:

  • Minikube version: v0.23.0
  • OS: Windows 10 Enterprise version 1703, build 15063.483
  • VM Driver: Hyper-V
  • ISO version:

What happened: minikube will not start. Sometimes it hangs at “Starting VM…” and sometimes it fails with an error:

PS C:\WINDOWS\system32> minikube start --vm-driver=hyperv --hyperv-virtual-switch=ExternalSwitch
Starting local Kubernetes v1.8.0 cluster…
Starting VM…
E1109 20:44:16.687707 10552 start.go:150] Error starting host: Error starting stopped host: exit status 1.

Retrying. E1109 20:44:16.725211 10552 start.go:156] Error starting host: Error starting stopped host: exit status 1

What you expected to happen: minikube starts and the local cluster is available to deploy pods and services.

How to reproduce it (as minimally and precisely as possible):

  1. Open PowerShell as administrator.
  2. Run minikube start --vm-driver=hyperv --hyperv-virtual-switch=ExternalSwitch

Output of minikube logs (if applicable):

F1109 21:07:39.528007 15212 logs.go:50] Error getting cluster bootstrapper: getting localkube bootstrapper: getting ssh client: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running

Anything else we need to know: I did get minikube to start once and was able to run a ReplicationController with a set of pods, but then it suddenly stopped running for no apparent reason. I have restarted multiple times since then, but have not been able to start minikube successfully. Hyper-V is working and I can successfully run VMs. Docker is running, and I was able to create a container and use the ExternalSwitch.

About this issue

  • Original URL
  • State: closed
  • Created 7 years ago
  • Reactions: 13
  • Comments: 24 (1 by maintainers)

Most upvoted comments

Quick summary for those seeing similar issues:

First create a vSwitch in Hyper-V

  • type: external
  • name: Minikube (or anything you like)
  • make sure to associate the vSwitch with the correct NIC
  • reboot after creating the vSwitch just in case (routing tables, …)
  • go take a look at the network devices on your Windows host to make sure that you have a Virtual Ethernet Adapter configured and actually connected to your network
    • Control Panel\Network and Internet\Network Connections. In my case I have: “vEthernet (Minikube)”
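The vSwitch creation above can also be done from an elevated PowerShell prompt instead of Hyper-V Manager; a sketch using the standard Hyper-V cmdlets (the adapter name "Ethernet" is an assumption, substitute your actual NIC's name from the first command's output):

```powershell
# List physical adapters to find the NIC to bind the vSwitch to
Get-NetAdapter -Physical

# Create an external vSwitch bound to that NIC; -AllowManagementOS keeps
# the Windows host connected through the same adapter
New-VMSwitch -Name "Minikube" -NetAdapterName "Ethernet" -AllowManagementOS $true
```

Both commands require an administrator prompt, and creating the vSwitch will briefly interrupt the host's network connection on that adapter.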

Once done, make sure to start clean: remove ~/.minikube and delete any VM you tried to create before.
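Starting clean can also be scripted; a PowerShell sketch, assuming the leftover Hyper-V VM uses minikube's default name "minikube":

```powershell
# Remove minikube's local state (machine configs, certs, cached ISO)
Remove-Item -Recurse -Force "$HOME\.minikube"

# Stop and delete any leftover Hyper-V VM from earlier attempts
Stop-VM -Name "minikube" -Force -ErrorAction SilentlyContinue
Remove-VM -Name "minikube" -Force
```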

Once you’re ready, make it happen:

minikube start --vm-driver "hyperv" --hyperv-virtual-switch "Minikube" --disk-size 10g --memory 4096 --v 9999 --alsologtostderr

I had the same behavior, but due to 2 reasons:

  1. First, I did not have enough disk space on drive C: for the Hyper-V machine (<1.75GB left). Even though my Hyper-V is configured to store VMs on D:, the minikube scripts still use drive C: regardless of that setting. I didn’t dig too much into this; I just freed some space.
  2. The HyperV virtual switch was set to Internal. After I set it to external, everything worked. So, please create a virtual switch in HyperV, set it to External, name it “minikube-switch” or whatever else, and use “minikube start --vm-driver=hyperv --hyperv-virtual-switch=minikube-switch”

Now I am experiencing high CPU usage from minikube, but that’s a different topic 😃 So it’s still fun 😃 I hope this helps.

Had the same issue, fixed by selecting the correct NIC, purging all caches (basically rm ~/.minikube/) and then running minikube start again, with the correct flags. Thank you @dsebastien !

Had the same issue, fixed by not setting the virtual switch.

In my environment, the cluster will not get an external IP because the DHCP server only hands out addresses to registered devices, so the external switch is actually guaranteed to break everything here.

I have tried @dsebastien’s suggestions, but they did not work in my case. I’m still running into issues with “crio”.

Using SSH client type: native &{{{<nil> 0 [] [] []} docker [0x83feb0] 0x83fe60 [] 0s} fe80::215:5dff:fec8:6412 22 <nil> <nil>}
About to run SSH command: sudo systemctl -f restart crio
SSH cmd err, output: Process exited with status 1: Job for crio.service failed because the control process exited with error code. See “systemctl status crio.service” and “journalctl -xe” for details.

Error setting container-runtime options during provisioning ssh command error: command : sudo systemctl -f restart crio err : Process exited with status 1 output : Job for crio.service failed because the control process exited with error code. See “systemctl status crio.service” and “journalctl -xe” for details.

E1211 15:33:45.206964 6120 start.go:150] Error starting host: Error creating host: Error executing step: Provisioning VM. : ssh command error: command : sudo systemctl -f restart crio err : Process exited with status 1 output : Job for crio.service failed because the control process exited with error code. See “systemctl status crio.service” and “journalctl -xe” for details.

Retrying. E1211 15:33:45.227177 6120 start.go:156] Error starting host: Error creating host: Error executing step: Provisioning VM. : ssh command error: command : sudo systemctl -f restart crio err : Process exited with status 1 output : Job for crio.service failed because the control process exited with error code. See “systemctl status crio.service” and “journalctl -xe” for details.
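When crio fails like this, the unit’s own logs usually say why. If the VM boots far enough to accept SSH, they can be inspected from inside it; a sketch using the standard minikube and systemd commands:

```shell
# Open a shell inside the minikube VM (requires the VM to be running)
minikube ssh

# Inside the VM: see why the crio unit failed, then read its journal
sudo systemctl status crio
sudo journalctl -u crio --no-pager
```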

minikube version: v0.24.1

Any other suggestions? I have tried almost everything I could find on the net.

Thank you

@dsebastien Thanks for the detailed fix. I was able to get mine running after reading the post above. Clearing out ~/.minikube was the kicker for me.