minikube: none: reusing node: detecting provisioner: Too many retries waiting for SSH to be available

Environment:

minikube version: v1.0.0 OS: Ubuntu 16.04 LTS (Xenial Xerus) VM Driver: none

What happened:
Created a VM with the none driver, stopped it, then started it again. The VM failed to start and minikube reported that it crashed.


What I expected to happen:
The VM created by the first minikube start command should be started again.

Output from the second minikube start command:

😄  minikube v1.0.0 on linux (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing none VM for "minikube" ...
⌛  Waiting for SSH access ...

💣  Unable to start VM: detecting provisioner: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new


Output from 'sudo minikube start --alsologtostderr -v=8 --vm-driver=none':

⌛  Waiting for SSH access ...
Waiting for SSH to be available...
Getting to WaitForSSH function...
Error getting ssh command 'exit 0' : driver does not support ssh commands

To reproduce:

sudo minikube start --vm-driver=none
sudo minikube stop
sudo minikube start --vm-driver=none

Starting a stopped VM was working in minikube v0.28.

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 32 (8 by maintainers)

Most upvoted comments

Questioning 'We test this sequence': func TestStartStop in test/integration/start_stop_delete_test.go has this code:

if !strings.Contains(test.name, "docker") && usingNoneDriver(r) {
    t.Skipf("skipping %s - incompatible with none driver", test.name)
}

The test names in the func are: nocache_oldest, feature_gates_newest_cni, containerd_and_non_default_apiserver_port, crio_ignore_preflights. None of them contain 'docker', which seems to indicate that no start/stop tests are run with the none driver (see the sketch below).
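
A minimal, self-contained sketch (not the actual test harness) that evaluates the same skip condition against the four test names; `usingNoneDriver` here is a hypothetical stand-in for the real helper and is simply assumed true for a `--vm-driver=none` run:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	usingNoneDriver := true // assumption: the integration run targets --vm-driver=none
	names := []string{
		"nocache_oldest",
		"feature_gates_newest_cni",
		"containerd_and_non_default_apiserver_port",
		"crio_ignore_preflights",
	}
	for _, name := range names {
		// Same condition as the snippet above; true means the test case is skipped.
		skip := !strings.Contains(name, "docker") && usingNoneDriver
		fmt.Printf("%-45s skip=%v\n", name, skip)
	}
}
```

Running this prints skip=true for every case, matching the observation that none of the start/stop tests exercise the none driver.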

Addressing 'why is it SSHing to itself': the sequence of code execution, as indicated by the output:

  1. The func startHost in pkg/minikube/cluster/cluster.go logs “Restarting existing VM”
  2. The func startHost logs “Waiting for SSH access”
  3. The func startHost calls provision.DetectProvisioner
  4. The func DetectProvisioner in minikube/vendor/github.com/docker/machine/libmachine/provision/provisioner.go logs the line “Waiting for SSH to be available”
  5. The func DetectProvisioner invokes drivers.WaitForSSH
  6. The func WaitForSSH in minikube/vendor/github.com/docker/machine/libmachine/drivers/utils.go calls WaitFor with sshAvailableFunc
  7. The func sshAvailableFunc in minikube/vendor/github.com/docker/machine/libmachine/drivers/utils.go logs “Getting to WaitForSSH function”
  8. The func sshAvailableFunc calls RunSSHCommandFromDriver
  9. The func RunSSHCommandFromDriver in minikube/pkg/drivers/none/none.go returns fmt.Errorf("driver does not support ssh commands")
  10. The func sshAvailableFunc logs “Error getting ssh command ‘exit 0’ : %s”
  11. The func WaitFor returns “Maximum number of retries (%d) exceeded”
  12. The func WaitForSSH logs “Too many retries waiting for SSH to be available. Last error: %s”

The pull request for #3387 added the DetectProvisioner call to startHost. DetectProvisioner runs SSH commands, and the none driver does not support SSH commands, so every probe fails until the retry budget is exhausted; a sketch of that failure mode follows below.
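
For illustration only, here is a self-contained approximation of steps 5-12 above under the stated assumption that the SSH probe can never succeed with the none driver. Names such as waitFor, runSSHCommand, and maxRetries are hypothetical stand-ins, not the actual libmachine identifiers:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

const maxRetries = 60

// waitFor retries f until it succeeds or the retry budget is exhausted,
// mirroring the behaviour behind "Maximum number of retries (60) exceeded".
func waitFor(f func() bool) error {
	for i := 0; i < maxRetries; i++ {
		if f() {
			return nil
		}
		time.Sleep(10 * time.Millisecond) // shortened interval for illustration
	}
	return fmt.Errorf("Maximum number of retries (%d) exceeded", maxRetries)
}

// runSSHCommand stands in for the none driver's SSH entry point, which can
// never succeed because the driver does not support SSH commands.
func runSSHCommand(cmd string) (string, error) {
	return "", errors.New("driver does not support ssh commands")
}

func main() {
	sshAvailable := func() bool {
		if _, err := runSSHCommand("exit 0"); err != nil {
			fmt.Printf("Error getting ssh command 'exit 0' : %s\n", err)
			return false
		}
		return true
	}
	if err := waitFor(sshAvailable); err != nil {
		// With the none driver every attempt fails, so this branch is always taken.
		fmt.Println("Too many retries waiting for SSH to be available. Last error:", err)
	}
}
```

Because the probe is unconditionally rejected, the loop always runs out of retries, which is exactly the error sequence seen in the verbose output above.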