kubernetes: kubeadm init hangs on ubuntu 16.04

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): created API client, waiting for the control plane to become ready

Related: a similar discussion happened at https://github.com/kubernetes/kubernetes/issues/33544

BUG REPORT (choose one):

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

kubeadm version: version.Info{Major:"1", Minor:"5+", GitVersion:"v1.5.0-alpha.0.1534+cf7301f16c0363-dirty", GitCommit:"cf7301f16c036363c4fdcb5d4d0c867720214598", GitTreeState:"dirty", BuildDate:"2016-09-27T18:10:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Virtual box, vagrant 1.8.1, bento/ubuntu-16.04, 1.5GB RAM, 1 CPU
  • OS (e.g. from /etc/os-release): Distributor ID: Ubuntu Description: Ubuntu 16.04.1 LTS Release: 16.04 Codename: xenial
  • Kernel (e.g. uname -a): Linux vagrant 4.4.0-38-generic #57-Ubuntu SMP Tue Sep 6 15:42:33 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

What happened: When I run kubeadm init, it hangs:

root@vagrant:~# kubeadm init

<master/tokens> generated token: "eca953.0642ac0fa7fc6378"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

What you expected to happen: The command should have succeeded, downloading and installing the cluster database and “control plane” components

How to reproduce it (as minimally and precisely as possible): Download and install Docker on Ubuntu 16.04 by following https://docs.docker.com/engine/installation/linux/ubuntulinux/

Follow http://kubernetes.io/docs/getting-started-guides/kubeadm/ to install Kubernetes

Anything else we need to know:

About this issue

  • State: closed
  • Created 8 years ago
  • Reactions: 31
  • Comments: 72 (20 by maintainers)

Most upvoted comments

@oz123 @Miyurz I managed to fix the issue after reading this page: https://docs.docker.com/engine/admin/systemd/#/http-proxy

I added the proxy configuration to the Docker systemd service file and it works (approximately 20 seconds for # kubeadm init to start the master).
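For reference, a minimal sketch of such a systemd drop-in. The proxy URL is a placeholder, and the file is written under /tmp here so the sketch is harmless to run; the real location is /etc/systemd/system/docker.service.d/http-proxy.conf:

```shell
# Sketch: create a systemd drop-in so dockerd inherits proxy settings.
# Real location: /etc/systemd/system/docker.service.d/http-proxy.conf
# proxy.example.com:3128 is a placeholder - substitute your own proxy.
mkdir -p /tmp/docker.service.d
cat > /tmp/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
cat /tmp/docker.service.d/http-proxy.conf
# After placing the real file, as root:
#   systemctl daemon-reload && systemctl restart docker
```

Without this, dockerd never sees the shell's proxy variables, so the image pulls behind the "waiting for the control plane" message silently stall.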

@oz123 I have the same issue here. It doesn’t seem to be due to a slow connection (I’ve been waiting for 30 minutes, and I have a very fast internet connection in my office too). Nothing relevant found in the logs.

@czerwina sorry! This command (--use-kubernetes-version v1.4.1) didn’t change anything on my system; it still blocks waiting for the control plane to become ready 😦

I’ve faced the same issue here on my Ubuntu 16.04. In my case the problem was:

  • the kubelet service was not running
  • I had a previous service (a personal web server) running on TCP port 8080 (discovery uses the same port, but no error is reported!)

Fixed it “once” with:

  • sudo service kubelet stop
  • stop my personal web server on 8080
  • sudo service kubelet start
  • kubeadm init

After that, kubeadm worked properly.
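A quick way to spot such a port conflict before running kubeadm init (a sketch assuming ss from iproute2 is available, as on stock Ubuntu 16.04):

```shell
# Report whether anything is already listening on TCP 8080,
# the port the discovery component wants.
if ss -ltn 2>/dev/null | grep -q ':8080 '; then
  echo "port 8080 in use - stop that service before kubeadm init"
else
  echo "port 8080 free"
fi
```

Running this up front avoids the silent hang, since kubeadm itself prints no error about the occupied port.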

Note: I found the process above may still fail sometimes. It looks like a race between kubeadm init starting and kubelet doing its work. kubeadm fails with:

error: <master/discovery> failed to create "kube-discovery" deployment [deployments.extensions "kube-discovery" already exists]

If anyone has been able to overcome this, could you please share what the problem was? I’m starting with a shiny-new single Ubuntu 16.04 KVM and am following the official kubeadm tutorial, but am stuck just like the others at: [apiclient] Created API client, waiting for the control plane to become ready

Hi folks, the problem was solved for me just by stopping AppArmor:

# /etc/init.d/apparmor stop

After that, you should reset kubeadm:

# kubeadm reset

and finally rerun the initialization of your master:

# kubeadm init

@luxas:

If nslookup "localhost.$(hostname -d)" resolves a different IP than nslookup "$(hostname)", you will reproduce the issue.

You will have this scenario, for example, on VPS environments.
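That check can be scripted; a sketch of the same comparison, using getent instead of nslookup so that /etc/hosts entries are also consulted:

```shell
# Compare what "localhost.<domain>" and the short hostname resolve to.
# A mismatch reproduces the hang described above (common on VPS images).
domain=$(hostname -d)
short=$(hostname)
ip_fqdn=$(getent hosts "localhost.${domain}" | awk '{print $1; exit}')
ip_host=$(getent hosts "${short}" | awk '{print $1; exit}')
if [ -n "$ip_fqdn" ] && [ "$ip_fqdn" != "$ip_host" ]; then
  echo "mismatch: localhost.${domain} -> ${ip_fqdn}, ${short} -> ${ip_host}"
else
  echo "resolution looks consistent"
fi
```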

I’m also facing the same problem on CentOS 7.3:

~ # kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.6.0
[init] Using Authorization mode: RBAC
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [ip-172-23-12-94.ap-south-1.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.23.12.94]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

The installation just doesn’t move beyond this!

Same problem with:

CentOS 7, Docker 1.12.6, Kubernetes 1.5.3

For me it failed the first time, but then I booted a fresh Ubuntu 16.04 and did:

kubeadm init --use-kubernetes-version v1.4.1 --api-advertise-addresses x.x.x.x

The second time it succeeded.

@eparhei is Docker working correctly on your system? I had to add the following to the boot cmdline for Docker to start, after which all the images downloaded and started:

cgroup_enable=memory cgroup_enable=cpustats
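Adding such flags is typically done via GRUB. A sketch against a throwaway copy of the config (on a real system you would edit /etc/default/grub as root, run update-grub, and reboot):

```shell
# Demonstrate appending a cgroup flag to GRUB_CMDLINE_LINUX.
# Done on a throwaway file; the real one is /etc/default/grub.
printf 'GRUB_CMDLINE_LINUX="quiet"\n' > /tmp/grub-example
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 cgroup_enable=memory"/' /tmp/grub-example
cat /tmp/grub-example
# Prints: GRUB_CMDLINE_LINUX="quiet cgroup_enable=memory"
# Then, on the real file (as root): update-grub && reboot
```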

@errordeveloper, what kind of information can help you? I have Ubuntu 16.04 installed. It’s behind a proxy, but the proxy is set as mentioned in https://docs.docker.com/engine/admin/systemd/#/http-proxy. Then kubeadm init blocks while “waiting for the control plane to become ready”. However, if in a second terminal I run:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu-199                      1/1       Running   0          1m
kube-system   kube-apiserver-ubuntu-199            1/1       Running   0          2m
kube-system   kube-controller-manager-ubuntu-199   1/1       Running   0          2m
kube-system   kube-scheduler-ubuntu-199            1/1       Running   0          1m

Also, I can create the network pod, and after a while I have:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-ubuntu-199                      1/1       Running   0          8m
kube-system   kube-apiserver-ubuntu-199            1/1       Running   0          8m
kube-system   kube-controller-manager-ubuntu-199   1/1       Running   0          8m
kube-system   kube-scheduler-ubuntu-199            1/1       Running   0          8m
kube-system   weave-net-gornp                      2/2       Running   0          1m

Plus:

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}

Is there a way to bypass this step and move on to adding nodes, etc.? If you need any other information, please indicate exactly what you need to know so that I can help you with the output. Thanks!

v1.4.1 fixes all the startup issues we have seen on Ubuntu since we started experimenting with kubeadm.

I cured my hung kubeadm init on Debian stretch, Docker 1.10.3, by enabling cgroup_enable=memory on the kernel boot cmdline.

The “Following Cgroup subsystem not mounted: [memory]” log lines were hard to notice:

Oct 02 18:55:44 kpkvmk8s01laptop kubelet[3213]: I1002 18:55:44.258823    3213 kubelet.go:2240] skipping pod synchronization - [Failed to start ContainerManager system validation failed - Following Cgroup subsystem not mounted: [memory] network state unknown container runtime is down]
Oct 02 18:55:44 kpkvmk8s01laptop kubelet[3213]: I1002 18:55:44.259016    3213 server.go:608] Event(api.ObjectReference{Kind:"Node", Namespace:"", Name:"kpkvmk8s01laptop", UID:"kpkvmk8s01laptop", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'KubeletSetupFailed' Failed to start ContainerManager system validation failed - Following Cgroup subsystem not mounted: [memory]
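Since those kubelet lines are easy to miss, a sketch that checks up front whether the memory cgroup controller is present and enabled (reading /proc/cgroups, whose fourth column is the controller's "enabled" flag):

```shell
# /proc/cgroups lists each controller as: subsys_name hierarchy num_cgroups enabled
# If the memory line is absent or disabled, kubelet's ContainerManager fails
# to start and kubeadm init hangs; boot with cgroup_enable=memory to fix it.
if awk '$1 == "memory" && $4 == 1 {found=1} END {exit !found}' /proc/cgroups 2>/dev/null; then
  echo "memory cgroup enabled"
else
  echo "memory cgroup missing or disabled - add cgroup_enable=memory to the boot cmdline"
fi
```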