minikube: vmwarefusion: failed to start after stop: Error configuring auth on host: Too many retries waiting for SSH to be available
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Minikube version (use minikube version): 0.18.0
Environment:
- OS (e.g. from /etc/os-release): macOS 10.12.4
- VM Driver (e.g. `cat ~/.minikube/machines/minikube/config.json | grep DriverName`): vmwarefusion
- ISO version (e.g. `cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): boot2docker.iso
- Install tools:
- Others:
What happened:
Using VMware Fusion on macOS, the first time minikube is started it works flawlessly. However, after `minikube stop`, running `minikube start --vm-driver=vmwarefusion` again fails and the cluster never comes up.
```
Starting local Kubernetes cluster...
Starting VM...
Waiting for SSH to be available...
E0419 23:27:50.099029 1781 start.go:116] Error starting host: Temporary Error: Error configuring auth on host: Too many retries waiting for SSH to be available. Last error: Maximum number of retries (60) exceeded.
```
What you expected to happen: Be able to start the cluster after stopping it.
How to reproduce it (as minimally and precisely as possible):
```
minikube start --vm-driver=vmwarefusion
minikube stop
minikube start --vm-driver=vmwarefusion
```
Anything else we need to know:
The only solution I've found so far is to `minikube delete` and start over.
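For completeness, that reset sequence is:

```
minikube delete
minikube start --vm-driver=vmwarefusion
```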
Using the latest v0.23.0 and still getting the same issue. Is the fix included in that version? Is there any nightly build to test it?
The easiest way of fixing it is just:

```
ssh-copy-id -i ~/.minikube/machines/minikube/id_rsa.pub docker@$(minikube ip)
```

while minikube is starting. The password is here:

```
cat ~/.minikube/machines/minikube/config.json | grep -i pass
```

This commit seems to be a fix for the issue (minikube itself has no code dictating when userdata is copied). Can we pull it in to minikube?
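If sshpass is available on the host, the two steps can be combined non-interactively. A sketch, assuming the driver stores the generated password under a field named SSHPassword (adjust the grep if your config.json differs):

```sh
# Extract the generated guest password and push the public key in one shot.
# The "SSHPassword" field name and the use of sshpass are assumptions.
PW=$(grep -o '"SSHPassword": *"[^"]*"' ~/.minikube/machines/minikube/config.json | cut -d'"' -f4)
sshpass -p "$PW" ssh-copy-id -o StrictHostKeyChecking=no \
  -i ~/.minikube/machines/minikube/id_rsa.pub "docker@$(minikube ip)"
```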
Thanks. After making a fresh cluster I put the tar file in by hand.
Experiencing same issue here.
Did some digging with vmrun and found that the guest's /home/docker/.ssh dir is missing.
As a workaround I found I could get the cluster running again by:

```
minikube start -v 10
```

(get it to start the VM for you, [ctrl]+[c] once you start to see the 255 errors)

Then running this script on the host to restore the missing ssh keys in the guest:
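A hedged reconstruction of such a script (not the original, which isn't preserved here), assuming password SSH auth still works since only the key material is lost:

```sh
#!/bin/sh
# Reconstruction (not the original script): restore key-based SSH access to
# the minikube guest by reusing the still-working password auth.
MACHINE="$HOME/.minikube/machines/minikube"

# Print the generated password so it can be pasted at the prompt below.
grep -i pass "$MACHINE/config.json"

# ssh-copy-id recreates ~/.ssh in the guest with correct permissions and
# appends the host-generated public key. The VM's host key changes across
# recreations, hence the relaxed known-hosts handling.
ssh-copy-id -o StrictHostKeyChecking=no \
            -o UserKnownHostsFile=/dev/null \
            -i "$MACHINE/id_rsa.pub" "docker@$(minikube ip)"
```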
Then running start again now that ssh access is restored to bring it up:

```
minikube start -v 10
```

Did some quick digging for a cause: per the minikube-automount logs, minikube-automount restores userdata.tar to populate the /home/docker/.ssh dir, so without it we get the 255 error from the ssh client.
/var/lib/boot2docker points to persistent storage, so that is good, but there is no userdata.tar contained within.
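Easy to check from the host with the command pass-through that `minikube ssh` supports (same mechanism as the `minikube ssh cat /etc/VERSION` in the template above):

```
minikube ssh -- ls -la /var/lib/boot2docker
```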
Yet to find out why userdata.tar is missing… but it looks to be handled here: https://github.com/kubernetes/minikube/blob/k8s-v1.7/deploy/iso/minikube-iso/package/automount/minikube-automount
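The restore step in that script amounts to something like this (a simplified paraphrase, not the verbatim source):

```sh
# Simplified paraphrase of the minikube-automount restore logic: after the
# persistent partition is mounted, unpack the saved home dir if it exists.
if [ -e /var/lib/boot2docker/userdata.tar ]; then
  tar xf /var/lib/boot2docker/userdata.tar -C /home/docker
  chown -R docker:docker /home/docker  # ownership details simplified
fi
```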
So I'm thinking the logs from the guest on first boot (`journalctl -t minikube-automount`) might show us the problem… will try to grab them when I can.
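Once ssh access is back, grabbing them should be as simple as (assuming passwordless sudo in the guest, as is usual for boot2docker):

```
minikube ssh -- sudo journalctl -t minikube-automount --no-pager
```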