minikube: restart: waiting for k8s-app=kube-proxy: timed out waiting for the condition

When I run minikube start, I get this error:

boby@sok-01:~$ minikube start
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new
boby@sok-01:~$ 

I am using Ubuntu 18.04

Attached minikube logs:

minikube-logs.txt

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 20
  • Comments: 41 (10 by maintainers)

Most upvoted comments

I did minikube delete and started it again and it worked. It did sit on "Waiting for pods" for a few minutes but got past it (unlike before, when I left it running for a long time without any progress). HTH. I am also running 0.35.0.

So! I finally ran into this bug myself in a repeatable fashion. The good news is that this is solvable! I can verify that #4014 fixes this by running within the VM:

kubeadm init phase addon all

Even on older Kubernetes releases. I’ll make an effort to get this bug resolved this week in case that PR isn’t merged. In the meantime, most users should be able to work around this bug by running:

minikube delete
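
For anyone who would rather repair the cluster in place than delete it, here is a minimal sketch of the in-VM fix above, run from the host. It assumes kubeadm requires root inside the minikube VM, which is the usual setup:

# Re-run kubeadm's addon phase inside the VM to reinstall kube-proxy (and CoreDNS):
minikube ssh 'sudo kubeadm init phase addon all'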

Got it to work after minikube delete and upgrading to v1.0.0.

I’ve hit the same issue after my first attempt failed (because I needed to set Docker proxy settings). The second and subsequent attempts would fail.

I added the minikube IP address to my NO_PROXY environment variable, and after a delete it would work (see the sketch below).

Minikube 0.35.0, Ubuntu 18.04, amd64, virtualbox (set to 16 GB memory / 50000 MB disk).
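
A minimal sketch of that NO_PROXY workaround, assuming a POSIX shell and that minikube ip can report the VM's address:

# Exempt the minikube VM's IP from the proxy, then recreate the cluster:
export NO_PROXY=$NO_PROXY,$(minikube ip)
minikube delete
minikube start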

I have the same issue here… the only difference is that I am running Ubuntu 18.10.

nonrootuser $ minikube start
😄  minikube v0.35.0 on linux (amd64)
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Re-using the currently running virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ... 
⌛  Waiting for pods: apiserver proxy💣  Error restarting cluster: wait: waiting for k8s-app=kube-proxy: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

minikube-logs.txt

@serverok please close this issue if this works for you.

minikube delete and minikube start worked.

minikube delete
🔥  Deleting "minikube" from virtualbox ...
💔  The "minikube" cluster has been deleted.

minikube start
😄  minikube v1.0.0 on darwin (amd64)
🤹  Downloading Kubernetes v1.14.0 images in the background ...
🔥  Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.101
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.2-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.14.0 ...
🚀  Launching Kubernetes v1.14.0 using kubeadm ...
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
🔑  Configuring cluster permissions ...
🤔  Verifying component health ...
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

I have the same problem on Mac after a minikube upgrade… any idea what might be causing it?

bash-5.0$ minikube start
😄  minikube v0.35.0 on darwin (amd64)
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🚀  Launching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver💣  Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

bash-5.0$ minikube delete
🔥  Deleting "minikube" from virtualbox ...
💔  The "minikube" cluster has been deleted.

bash-5.0$ minikube start
😄  minikube v0.35.0 on darwin (amd64)
🔥  Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
📶  "minikube" IP address is 192.168.99.102
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.13.4 ...
🚀  Launching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver💣  Error starting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

The “init” command executes the following phases:

preflight                  Run master pre-flight checks
kubelet-start              Writes kubelet settings and (re)starts the kubelet
certs                      Certificate generation
  /ca                        Generates the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                 Generates the certificate for serving the Kubernetes API
  /apiserver-kubelet-client  Generates the Client certificate for the API server to connect to kubelet
  /etcd-ca                   Generates the self-signed CA to provision identities for etcd
  /etcd-server               Generates the certificate for serving etcd
  /etcd-peer                 Generates the credentials for etcd nodes to communicate with each other
  /etcd-healthcheck-client   Generates the client certificate for liveness probes to healtcheck etcd
  /apiserver-etcd-client     Generates the client apiserver uses to access etcd
  /front-proxy-ca            Generates the self-signed CA to provision identities for front proxy
  /front-proxy-client        Generates the client for the front proxy
  /sa                        Generates a private key for signing service account tokens along with its public key
kubeconfig                 Generates all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                     Generates a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                   Generates a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager        Generates a kubeconfig file for the controller manager to use
  /scheduler                 Generates a kubeconfig file for the scheduler to use
control-plane              Generates all static Pod manifest files necessary to establish the control plane
  /apiserver                 Generates the kube-apiserver static Pod manifest
  /controller-manager        Generates the kube-controller-manager static Pod manifest
  /scheduler                 Generates the kube-scheduler static Pod manifest
etcd                       Generates static Pod manifest file for local etcd.
  /local                     Generates the static Pod manifest file for a local, single-node local etcd instance.
upload-config              Uploads the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                   Uploads the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                   Uploads the kubelet component config to a ConfigMap
mark-control-plane         Mark a node as a control-plane
bootstrap-token            Generates bootstrap tokens used to join a node to a cluster
addon                      Installs required addons for passing Conformance tests
  /coredns                   Installs the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                Installs the kube-proxy addon to a Kubernetes cluster

Or it’s because kube-proxy installation was moved into kubeadm’s addon phase.
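
Since the error waits on the k8s-app=kube-proxy label, a quick way to check whether the addon phase actually brought kube-proxy back is a label-selector query (a sketch; it assumes kubectl is already pointed at the minikube cluster):

# kube-proxy pods in kube-system carry the label minikube waits on:
kubectl -n kube-system get pods -l k8s-app=kube-proxy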

@serverok - I’ve seen this too when resuming a previously set-up VM, but haven’t been able to replicate it reliably. Do you mind attaching the output of the following command:

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-proxy --format={{.ID}})'

I suspect this can be resolved by running minikube delete, but it’s almost certainly going to come back at some random point in the future.

For other folks who are also running into this in a way that does not say “Error restarting”, I suggest opening a new bug report as there are likely to be multiple causes. Feel free to reference #3843 though.

Thanks!

Update: Fixed command-line.

I encountered the same problem after moving to v0.35.0 to fix another problem.

😄  minikube v0.35.0 on darwin (amd64)
👍  minikube will upgrade the local cluster from Kubernetes 1.13.3 to 1.13.4
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Re-using the currently running virtualbox VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.99.100
🐳  Configuring Docker as the container runtime ...
✨  Preparing Kubernetes environment ...
💾  Downloading kubeadm v1.13.4
💾  Downloading kubelet v1.13.4
🚜  Pulling images required by Kubernetes v1.13.4 ...
🔄  Relaunching Kubernetes v1.13.4 using kubeadm ...
⌛  Waiting for pods: apiserver

💣  Error restarting cluster: wait: waiting for component=kube-apiserver: timed out waiting for the condition

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new

I am on macOS v10.14.2.