kubernetes: Kubeadm init blocks at "Created API client" forever

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

yes

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Kubernetes version (use kubectl version):

# kubeadm version
kubeadm version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.0.2074+a092d8e0f95f52", GitCommit:"a092d8e0f95f5200f7ae2cba45c75ab42da36537", GitTreeState:"clean", BuildDate:"2016-12-13T17:03:18Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: DigitalOcean

  • OS (e.g. from /etc/os-release): centos7.3.16

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a): Linux centos-2gb-nyc3-01 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: yum install -y docker kubelet kubeadm kubectl kubernetes-cni

  • Others:

What happened: ran kubeadm init:

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.6.0
[tokens] Generated token: "a93dbd.b03fc0668e8874f8"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready

It blocks here forever.

What you expected to happen: kubeadm init finishes successfully.

How to reproduce it (as minimally and precisely as possible):

Start a new droplet on DigitalOcean, then run:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
        https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet

Then run kubeadm init.

Anything else we need to know:

No http_proxy is set:

[root@centos-2gb-nyc3-01 ~]# env
XDG_SESSION_ID=1
HOSTNAME=centos-2gb-nyc3-01
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=124.65.241.146 59390 22
SSH_TTY=/dev/pts/0
USER=root
MAIL=/var/spool/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/.local/bin:/root/bin
PWD=/root
LANG=en_US.UTF-8
HISTCONTROL=ignoredups
SHLVL=1
HOME=/root
LOGNAME=root
SSH_CONNECTION=124.65.241.146 59390 138.197.70.237 22
LC_CTYPE=UTF-8
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/0
_=/usr/bin/env

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 6
  • Comments: 38 (5 by maintainers)

Most upvoted comments

I am seeing the same thing on 1.7 as well.

[root@ip-172-31-41-99 centos]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [ip-172-31-41-99 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.31.41.99]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

@luxas I know this is not Stack Overflow or a forum page. This is GitHub, where developers try to improve the product. In that respect, you are right. But I can see that people are still living with the effects of this issue and still commenting on this closed issue. That said, I am not the only one commenting on a closed issue. It still isn't right, but I am seeking a solution and trying to share my experiences. Since the community's interest is on these pages rather than on Stack Overflow and the troubleshooting page, I have had no luck on those platforms. You can call this mob mentality if you want.

Is this issue really closed? Am I the only one who is having issues when installing Kubernetes on CentOS 7.3?

Finally, can you point me to where I should seek a solution? Maybe I don't know the right place.

PS: However big the issue I am living with, it doesn't mean that I don't respect the work you're doing.

Seeing the same with 1.7.1

same problem on 1.7.3

Restarting the kubelet works for me:

systemctl restart kubelet

I had the same issue; I resolved it by completely stopping the kubelet service:

systemctl stop kubelet && systemctl status kubelet

For some reason the kubelet directory is not removed, so remove it:

rm -rf /var/lib/kubelet

Open another tab to watch the logs:

tail -f /var/log/messages

Also check the kubelet logs. Then re-run:

kubeadm init --apiserver-advertise-address=<ip address> --pod-network-cidr 10.244.0.0/16

I added --cgroup-driver=system to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, and ran systemctl daemon-reload && systemctl restart kubelet. It still does not work.
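
Note: --cgroup-driver only accepts cgroupfs or systemd, and it has to match what Docker actually reports, so the value "system" above would itself likely make the kubelet exit. A minimal sketch, assuming the stock kubeadm drop-in file:

# Check which cgroup driver Docker is using:
docker info 2>/dev/null | grep -i cgroup
# e.g. "Cgroup Driver: systemd"

# Make the kubelet match, in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# (depending on the version, the flag lives in KUBELET_CGROUP_ARGS or KUBELET_EXTRA_ARGS):
#   Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
systemctl daemon-reload && systemctl restart kubelet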

@sedatkestepe If you open a new issue in kubernetes/kubeadm, there is a chance one of the kubeadm devs will look at it. Basically, most devs don’t watch kubernetes/kubernetes so it’s extremely hard to notice any comment here due to the massive notification load.

So please open a new issue in that repo instead. Also, a lot of this will improve in v1.8. The underlying problem when this happens is pretty much always that the kubelet is hanging in some way. There are a lot of ways the kubelet can hang, due to the very large matrix of different OS/kernel/etc. versions, enabled cgroups, swap, and so on.
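
Since the kubelet is nearly always the culprit, here is a quick triage sketch to run in a second terminal while kubeadm init is waiting (assuming systemd and journald; adjust for your distro):

# Is the kubelet running, or crash-looping?
systemctl status kubelet

# Why did it exit? The last log lines usually name the cause
# (cgroup driver mismatch, swap enabled, bad flags, ...):
journalctl -u kubelet -n 50 --no-pager

# Two common offenders:
swapon -s                     # any output means swap is still on
docker info | grep -i cgroup  # must match the kubelet's --cgroup-driver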

Thanks

Getting the same issue:

kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Using the existing CA certificate and key.
[certificates] Using the existing API Server certificate and key.
[certificates] Using the existing API Server kubelet client certificate and key.
[certificates] Using the existing service account token signing key.
[certificates] Using the existing front-proxy CA certificate and key.
[certificates] Using the existing front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready

I don’t want to repeat. I already gave a thumbs up to @praving55 but I’ll add one detail for the status of the service:

[root@bigdev1 ~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2017-08-17 18:53:44 +03; 6s ago
     Docs: http://kubernetes.io/docs/
  Process: 8795 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=1/FAILURE)
 Main PID: 8795 (code=exited, status=1/FAILURE)

Aug 17 18:53:44 bigdev1 systemd[1]: Unit kubelet.service entered failed state.
Aug 17 18:53:44 bigdev1 systemd[1]: kubelet.service failed.

Any solution or workaround is appreciated. @ybbaigo, the restart did not work for me. 1.7.3-1

I am having this problem on Ubuntu/Debian/Hypriot. Never gets past “Created API client” and kube-apiserver container restarts about every other minute.

kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.1", GitCommit:"1dc5c66f5dd61da08412a74221ecc79208c2165b", GitTreeState:"clean", BuildDate:"2017-07-14T01:48:01Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/arm"}
$ docker version
Client:
 Version:      17.05.0-ce
 API version:  1.29
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:30:54 2017
 OS/Arch:      linux/arm

Server:
 Version:      17.05.0-ce
 API version:  1.29 (minimum version 1.12)
 Go version:   go1.7.5
 Git commit:   89658be
 Built:        Thu May  4 22:30:54 2017
 OS/Arch:      linux/arm
 Experimental: false
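
When the kube-apiserver container is crash-looping like this, its own logs usually name the cause. A sketch for a Docker runtime (the container ID is a placeholder):

# Find the apiserver container, including exited instances:
docker ps -a | grep kube-apiserver

# Read its logs; substitute the ID from the previous command:
docker logs <container-id> 2>&1 | tail -n 50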

This was solved by setting --fail-swap-on=false on the kubelet. Just make the modification in the file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
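
For reference, a sketch of the two usual ways to handle the swap check (the drop-in path is the same file mentioned above):

# Option 1: turn swap off entirely, which is what the kubelet expects by default:
swapoff -a    # also comment out the swap line in /etc/fstab to persist across reboots

# Option 2: let the kubelet tolerate swap via the systemd drop-in, e.g.:
#   Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
systemctl daemon-reload && systemctl restart kubelet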

Using the latest kubeadm v1.8.0, what’s wrong with it? Has anybody encountered this issue?

[root@swarm ~]# kubeadm init --apiserver-advertise-address=192.168.1.150 --pod-network-cidr=192.168.0.0/16 --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [swarm.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.150]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp [::1]:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp [::1]:10255: getsockopt: connection refused.
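
The repeated "connection refused" on port 10255 means nothing is listening there, i.e. the kubelet never came up, so the answer will not be in the kubeadm output. A quick check (a sketch, assuming systemd and iproute2):

# Is the kubelet service up, and if not, why did it exit?
systemctl is-active kubelet
journalctl -u kubelet -n 30 --no-pager

# Is the read-only healthz port bound at all?
ss -tlnp | grep 10255 || echo "kubelet read-only port not listening"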

@sree2005 I was getting the pre-flight warning that my machine IP was getting accessed through the proxy server:

[preflight] WARNING: Connection to “https://<machine IP>” uses proxy “http://<Proxy IP>”. If that is not intended, adjust your proxy settings

I solved it by adding <machine IP> to the no_proxy environment variable so that it is not accessed through the proxy, which you can generally do with the following command in your bash shell:

export no_proxy=<machine IP>,$no_proxy
kubeadm init

After changing the env variable I stopped getting the pre-flight warning and kubeadm started successfully. Some people might be facing the same issue. Hope this helps.
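
If this works, it may be worth persisting the variable beyond the current shell. A sketch (the machine IP is a placeholder; not every tool accepts CIDRs in no_proxy, so explicit IPs are the safe choice):

# Persist no_proxy so cluster traffic skips the proxy in future sessions too:
echo 'no_proxy=localhost,127.0.0.1,<machine IP>' >> /etc/environment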

@kubernetes/sig-cluster-lifecycle-misc

Seeing the same issue:

Vagrantfile:

Vagrant.configure('2') do |config|

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
  end

  config.vm.box = "bento/centos-7.3"

  config.vm.network "private_network", type: "dhcp"
  config.vm.network "forwarded_port", guest: 10250, host: 10250, auto_correct: true
  config.vm.network "forwarded_port", guest: 10255, host: 10255, auto_correct: true

  repo_config = "kubernetes-el7.repo"

  config.vm.provision "Create Repository Configuration File", type: "file", source: Dir.getwd + "/#{repo_config}", destination: "/tmp/#{repo_config}"
  config.vm.provision "Move Repository Configuration File", type: "shell", inline: "mv /tmp/#{repo_config} /etc/yum.repos.d/#{repo_config}"
  config.vm.provision "Set Repo Configuration Permissions", type: "shell", inline: "chown root:root  /etc/yum.repos.d/#{repo_config}"
  config.vm.provision "Install Dependencies", type: "shell", inline: "yum install -y yum-utils device-mapper-persistent-data lvm2"
  config.vm.provision "Install Docker Repository", type: "shell", inline: "yum-config-manager -y --add-repo https://download.docker.com/linux/centos/docker-ce.repo && yum makecache fast"
  config.vm.provision "Set selinux to permissive", type: "shell", inline: "setenforce 0"
  config.vm.provision "Install Needed Packages", type: "shell", inline: "yum install -y kubelet kubeadm docker-ce && systemctl enable docker && systemctl restart docker && systemctl enable kubelet && systemctl start kubelet"

  config.vm.define 'manager1', primary: true do |m|
    m.vm.network "forwarded_port", guest: 6443, host: 6443, auto_correct: true
    m.vm.network "forwarded_port", guest: 2379, host: 2379, auto_correct: true
    m.vm.network "forwarded_port", guest: 2380, host: 2380, auto_correct: true
    m.vm.network "forwarded_port", guest: 10251, host: 10251, auto_correct: true
    m.vm.network "forwarded_port", guest: 10252, host: 10252, auto_correct: true
    m.vm.provision "Kubeadm preflight fix", type: "shell", inline: "echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables"
  end

  config.vm.define 'worker1' do |w1|
    for i in 30000..32767
        w1.vm.network :forwarded_port, guest: i, host: i, auto_correct: true
    end
  end

end

Referenced file kubernetes-el7.repo:

[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

$ vagrant ssh manager1
[vagrant@localhost ~]$ sudo kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 1.12
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.15]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
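
A side note on the "Kubeadm preflight fix" provisioner in the Vagrantfile above: echoing into /proc does not survive a reboot. A persistent variant (a sketch, assuming sysctl.d support):

# Persist the bridge netfilter settings instead of writing to /proc directly
# (the br_netfilter module must be loaded for these keys to exist):
cat > /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload all sysctl configuration files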

I am facing the same issue on Ubuntu 16.04.2; please let me know if there are any workarounds.

cat /etc/issue

Ubuntu 16.04.2 LTS \n \l

kubeadm version

kubeadm version: &version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:08:00Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

docker version

Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   78d1802
 Built:        Tue Jan 31 23:35:14 2017
 OS/Arch:      linux/amd64

kubeadm init

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.100.552.186]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

Seeing the same with:

  • kubeadm 1.7.1
  • Docker 12.6
  • Ubuntu 16.10 on VirtualBox with bridged NAT and static IP
  • VirtualBox running on a bare metal 16.04 machine, no cloud
  • No Weave … should I even need Weave given that this is the very first node in what will (hopefully) eventually be my cluster?
  • All apt packages upgraded to latest before init

What does this error even mean? Anybody have any suggestions on how to proceed?

It seems to be the same problem as in v1.6.0: #43815 and https://github.com/kubernetes/kubeadm/issues/212