kubernetes: Kubeadm init stuck on [init] This might take a minute or longer if the control plane images have to be pulled.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

/sig bug

What happened: Kubeadm init hangs at:

[init] Using Kubernetes version: v1.9.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.60]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
…

What you expected to happen: Cluster initialized --> Kubernetes master initialized.

How to reproduce it (as minimally and precisely as possible):

1. Install docker
2. Install kubeadm, kubelet, kubectl
3. Run kubeadm init

Anything else we need to know?: Since yesterday, initializing a cluster automatically uses Kubernetes version v1.9.4. I tried forcing kubeadm to use --kubernetes-version=v1.9.3, but I still have the same issue. Last week it was fine when I reset my Kubernetes cluster and reinitialized it. I found the issue yesterday when I wanted to reset my cluster again and reinitialize it, and it got stuck.

I tried yum update to update all my software, but I still have the same issue. I was using Kubernetes v1.9.3, and after updating today I'm using Kubernetes v1.9.4.

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:29:47Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

  • OS (e.g. from /etc/os-release):

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

  • Kernel (e.g. uname -a): Linux master 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: Kubeadm

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 14
  • Comments: 71 (11 by maintainers)

Most upvoted comments

This issue tends to come back with each new Kubernetes release. I still don't know what causes it, but since I opened this ticket, kubeadm init gets stuck on each version release and then, after 3 or 4 days, works again as if an angel fixed the problem.

I followed the advice from this link. First make sure you have switched off swap with sudo swapoff -a. Then add the following line, if it does not already exist, to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

Restart the docker and kubelet services with systemctl restart docker && systemctl restart kubelet.service.

Now run kubeadm init
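
Putting the steps above together, a minimal sketch (assuming the drop-in path from the comment; editing the file by hand works just as well as appending):

sudo swapoff -a
# Keep the kubelet happy even if swap comes back after a reboot
# (appends to the drop-in mentioned above; path assumed from the comment)
echo 'Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"' | \
  sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
sudo systemctl daemon-reload
sudo systemctl restart docker && sudo systemctl restart kubelet
sudo kubeadm init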

Today I was trying to set up a Kubernetes cluster on a Raspberry Pi and encountered the same issue. I think the problem is that the apiserver keeps failing to finish its own startup within the two minutes it is given, and as a result the kubelet keeps killing it. After 4-5 minutes kubeadm also times out.

To work around this I used the following strategy. As soon as kubeadm enters the init stage ("[init] This might take a minute or longer if the control plane images have to be pulled" is printed), I immediately update the kube-apiserver manifest file by running:

sed -i 's/failureThreshold: 8/failureThreshold: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml

Then I killed the current kube-apiserver container (docker kill).

After that it took about 3 minutes for the apiserver to actually start up, and kubeadm managed to continue its work.
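
As a rough sketch of that workaround (the docker name filter is an assumption based on the kubelet's k8s_kube-apiserver_... container naming):

# Run as soon as kubeadm prints the "[init] This might take a minute or longer..." line
sudo sed -i 's/failureThreshold: 8/failureThreshold: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml
# Kill the currently running apiserver so the kubelet restarts it with the patched probe
docker kill $(docker ps -q --filter name=k8s_kube-apiserver)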

I’m getting the same issue, running on a Raspberry Pi 3 Model B+

Here are the logs:

pi@master01:~/k8s $ sudo kubeadm init --config kubeadm_conf.yaml
[init] Using Kubernetes version: v1.10.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.227]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master01] and IPs [192.168.0.227]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

It hangs here and it is frustrating!

@jmreicha yeah, that's what I found with 1.10.2 as well. The switch of etcd from HTTP to HTTPS seems to be the main source of this issue. I tried starting an etcd instance using docker with the same options as the manifest, and it all ran fine. I was also able to docker exec into that container and run the health check command without issue.

Unfortunately, doing the same docker exec into the kubelet-managed container is more hit and miss: sometimes it would work, usually right after it had started, and sometimes it would error out with a grpc timeout. When the grpc timeouts were happening, lsof would show a large number of connections between etcd and the apiserver, though the logs wouldn't suggest they were actually talking to each other. After a short period of time the etcd liveness-check failures seem to cause the kubelet to shut the etcd instance down; the etcd logs appear to suggest the instance is being instructed to shut down rather than, say, crashing. I've never been able to make enough sense of the apiserver logs to work out what's actually going on with it.
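
For reference, the liveness check in question can be replayed by hand against the kubelet-managed container; a sketch (the name filter is an assumption based on the k8s_etcd_... container naming):

docker exec $(docker ps -q --filter name=k8s_etcd) sh -ec \
  'ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
    --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo'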

I know the crypto on ARM64 is a bit slow (lacking ASM implementations). I believe Golang is working on that at the moment, but that probably won't land until Go 1.11 at the earliest, and it looks like etcd is still on Go 1.8.5. I've been wondering whether the reduced crypto speed is therefore exceeding some hardwired TLS timeouts in etcd and k8s.

I'm not quite sure what is killing etcd though. I've got the exited etcd container's log below; I just need to work out the source of its "exit (0)" status.

2018-04-11 22:22:59.911070 W | etcdmain: running etcd on unsupported architecture "arm64" since ETCD_UNSUPPORTED_ARCH is set
2018-04-11 22:23:00.195791 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=arm64
2018-04-11 22:23:00.196870 I | etcdmain: etcd Version: 3.1.12
2018-04-11 22:23:00.197647 I | etcdmain: Git SHA: 918698a
2018-04-11 22:23:00.198133 I | etcdmain: Go Version: go1.8.5
2018-04-11 22:23:00.199211 I | etcdmain: Go OS/Arch: linux/arm64
2018-04-11 22:23:00.200036 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-04-11 22:23:00.201240 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2018-04-11 22:23:00.206591 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2018-04-11 22:23:00.207050 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2018-04-11 22:23:00.243879 I | embed: listening for peers on http://localhost:2380
2018-04-11 22:23:00.245035 I | embed: listening for client requests on 127.0.0.1:2379
2018-04-11 22:23:01.248754 W | etcdserver: another etcd process is running with the same data dir and holding the file lock.
2018-04-11 22:23:01.248945 W | etcdserver: waiting for it to exit before starting...
2018-04-11 22:23:11.452299 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-04-11 22:23:11.497000 I | etcdserver: name = default
2018-04-11 22:23:11.497173 I | etcdserver: data dir = /var/lib/etcd
2018-04-11 22:23:11.497234 I | etcdserver: member dir = /var/lib/etcd/member
2018-04-11 22:23:11.497273 I | etcdserver: heartbeat = 100ms
2018-04-11 22:23:11.497310 I | etcdserver: election = 1000ms
2018-04-11 22:23:11.497352 I | etcdserver: snapshot count = 10000
2018-04-11 22:23:11.497571 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2018-04-11 22:23:11.497823 I | etcdserver: initial advertise peer URLs = http://localhost:2380
2018-04-11 22:23:11.498199 I | etcdserver: initial cluster = default=http://localhost:2380
2018-04-11 22:23:20.995857 W | wal: sync duration of 1.320885532s, expected less than 1s
2018-04-11 22:23:21.217165 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2018-04-11 22:23:21.217423 I | raft: 8e9e05c52164694d became follower at term 0
2018-04-11 22:23:21.217575 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-04-11 22:23:21.217657 I | raft: 8e9e05c52164694d became follower at term 1
2018-04-11 22:23:21.873572 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-04-11 22:23:21.880938 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2018-04-11 22:23:21.881174 I | etcdserver: starting server... [version: 3.1.12, cluster version: to_be_decided]
2018-04-11 22:23:21.881395 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2018-04-11 22:23:21.890640 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2018-04-11 22:23:21.919740 I | raft: 8e9e05c52164694d is starting a new election at term 1
2018-04-11 22:23:21.920156 I | raft: 8e9e05c52164694d became candidate at term 2
2018-04-11 22:23:21.920293 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
2018-04-11 22:23:21.920430 I | raft: 8e9e05c52164694d became leader at term 2
2018-04-11 22:23:21.920576 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2018-04-11 22:23:21.922068 I | etcdserver: setting up the initial cluster version to 3.1
2018-04-11 22:23:22.367497 N | etcdserver/membership: set the initial cluster version to 3.1
2018-04-11 22:23:22.367751 I | embed: ready to serve client requests
2018-04-11 22:23:22.368025 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2018-04-11 22:23:22.368094 I | etcdserver/api: enabled capabilities for version 3.1
2018-04-11 22:23:22.368196 W | etcdserver: apply entries took too long [283.993281ms for 2 entries]
2018-04-11 22:23:22.368237 W | etcdserver: avoid queries with large range/delete range!
2018-04-11 22:23:22.391108 I | embed: serving client requests on 127.0.0.1:2379
2018-04-11 22:25:31.629821 N | pkg/osutil: received terminated signal, shutting down...
2018-04-11 22:25:31.630957 I | etcdserver: skipped leadership transfer for single member cluster
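
One way to confirm how the kubelet-managed etcd container actually finished (a sketch; replace the placeholder with the exited container's ID):

docker ps -a --filter name=k8s_etcd --format '{{.ID}}  {{.Status}}'
docker inspect --format '{{.State.ExitCode}} {{.State.OOMKilled}} {{.State.FinishedAt}}' <container-id>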

1.9.5 has the same issue. Is it possible to investigate? What steps do we have to take in the future when an update comes out?

Out of curiosity, I just hooked up a pi 3B+ I forgot I had and tried installing the master on it. Interestingly, the master came up using k8s version 1.10.2 and Docker 18.04 but k8s 1.10.3 seems to still be broken using the RPi.

Then I was able to join the remaining Pine64s to the cluster as workers. This isn’t an ideal setup but at least gets me a 1.10 cluster for now. Still don’t know what’s different between Pine64 and RPi packages/hardware and why it decided to work on RPi but thought it might be helpful for others.

I dug a little bit deeper into this but am still stuck. It looks like the manifests/etcd.yaml and manifests/kube-apiserver.yaml configs were changed between 1.9 and 1.10.

I ran a --dry-run using both the 1.9 and 1.10 versions of kubeadm, and it seems like the etcd health check was changed and certificate auth was turned on. At this point I'm thinking that this change is what is causing the issue. For example,

1.9 etcd.yaml

spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=http://127.0.0.1:2379
    - --data-dir=/var/lib/etcd
    - --listen-client-urls=http://127.0.0.1:2379
    image: gcr.io/google_containers/etcd-arm64:3.1.11
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd

1.10 etcd.yaml

spec:
  containers:
  - command:
    - etcd
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --advertise-client-urls=https://127.0.0.1:2379
    - --client-cert-auth=true
    - --peer-client-cert-auth=true
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --listen-client-urls=https://127.0.0.1:2379
    - --data-dir=/var/lib/etcd
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    image: k8s.gcr.io/etcd-arm64:3.1.12
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd

The kube-apiserver manifest has also been updated to use this HTTPS etcd, instead of the HTTP version used in 1.9.

I can get the kubeadm init to finish bootstrapping by creating a config file and overriding all of the etcd urls with http endpoints but the etcd and apiserver containers still crashloop.

Unfortunately I’m not sure how to fix this, but would love to get it figured out.
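
For anyone wanting to try the same experiment, a sketch of the kind of override meant above (field names assumed from the kubeadm v1alpha1 MasterConfiguration; note this falls back to insecure etcd, so it is a diagnostic step rather than a fix):

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  endpoints:
  - http://127.0.0.1:2379

passed via kubeadm init --config <file>.yaml.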

Downgrading docker-ce to 17.x.x version solves the problem.

sudo aptitude install docker-ce=17.12.1~ce-0~raspbian

For me the problem was that etcd tried to bind to the IP address resolved via localhost.somedomain. Commenting out the search line in /etc/resolv.conf worked:

sudo sed -e 's/^search/#search/' -i /etc/resolv.conf

I am also having the same issues, for a couple of weeks now 😦 I've been trying many versions (Raspbian/Docker/K8s) and have had no luck on a Raspberry Pi 2, with all the obvious things checked 10 times.

Below are my logs for the apiserver and etcd within docker (other things crash as 6443 is no longer listening after the api-server stops).

Just to confirm I have:

  • Swap disabled
  • /boot/cmdline.txt appended with ’ cgroup_enable=cpuset cgroup_enable=memory’ and system rebooted
  • Name changed to ‘pi2’ (and )

If anyone has any insight (into either further troubleshooting steps or the issue) it would be much appreciated…
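
A quick self-check of the items in that list (a sketch):

# swap total should be 0
free -m | awk '/^Swap/ {print $2}'
# memory and cpuset cgroups should show enabled=1 in the last column
grep -E 'memory|cpuset' /proc/cgroups
# the running kernel should actually have picked up the cmdline flags
cat /proc/cmdline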

CONTAINER ID        IMAGE                                                  COMMAND                  CREATED              STATUS              PORTS               NAMES
589d1a850640        gcr.io/google_containers/kube-apiserver-arm            "kube-apiserver --al…"   About a minute ago   Up About a minute                       k8s_kube-apiserver_kube-apiserver-pi2_kube-system_e349ee41b4a0090c3ef9d5e3dc1f3f6b_19
673a4530b036        gcr.io/google_containers/etcd-arm                      "etcd --listen-clien…"   About an hour ago    Up About an hour                        k8s_etcd_etcd-pi2_kube-system_dd6e9824213479006d8027193266e3b2_0
eef10aa0cdce        gcr.io/google_containers/kube-controller-manager-arm   "kube-controller-man…"   About an hour ago    Up About an hour                        k8s_kube-controller-manager_kube-controller-manager-pi2_kube-system_454c01891158c25146f7f14a62aadeaa_0
c176c07e9294        gcr.io/google_containers/kube-scheduler-arm            "kube-scheduler --ad…"   About an hour ago    Up About an hour                        k8s_kube-scheduler_kube-scheduler-pi2_kube-system_eaf7b4fa4d244f571f4429f476ecb6a2_0
03d3f19b0500        gcr.io/google_containers/pause-arm:3.0                 "/pause"                 About an hour ago    Up About an hour                        k8s_POD_kube-controller-manager-pi2_kube-system_454c01891158c25146f7f14a62aadeaa_0
94d9afa04fc3        gcr.io/google_containers/pause-arm:3.0                 "/pause"                 About an hour ago    Up About an hour                        k8s_POD_kube-apiserver-pi2_kube-system_e349ee41b4a0090c3ef9d5e3dc1f3f6b_0
a56cbef4c32d        gcr.io/google_containers/pause-arm:3.0                 "/pause"                 About an hour ago    Up About an hour                        k8s_POD_etcd-pi2_kube-system_dd6e9824213479006d8027193266e3b2_0
4b9099c9aa0a        gcr.io/google_containers/pause-arm:3.0                 "/pause"                 About an hour ago    Up About an hour                        k8s_POD_kube-scheduler-pi2_kube-system_eaf7b4fa4d244f571f4429f476ecb6a2_0
pi@pi2:~ $ sudo docker logs 673a4530b036
2018-04-29 12:50:40.551154 W | etcdmain: running etcd on unsupported architecture "arm" since ETCD_UNSUPPORTED_ARCH is set
2018-04-29 12:50:40.563613 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=arm
2018-04-29 12:50:40.565348 I | etcdmain: etcd Version: 3.1.11
2018-04-29 12:50:40.565568 I | etcdmain: Git SHA: 960f460
2018-04-29 12:50:40.565697 I | etcdmain: Go Version: go1.7.6
2018-04-29 12:50:40.565804 I | etcdmain: Go OS/Arch: linux/arm
2018-04-29 12:50:40.565949 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
2018-04-29 12:50:40.577529 I | embed: listening for peers on http://localhost:2380
2018-04-29 12:50:40.579619 I | embed: listening for client requests on 127.0.0.1:2379
2018-04-29 12:50:40.613671 I | etcdserver: name = default
2018-04-29 12:50:40.613924 I | etcdserver: data dir = /var/lib/etcd
2018-04-29 12:50:40.614090 I | etcdserver: member dir = /var/lib/etcd/member
2018-04-29 12:50:40.614219 I | etcdserver: heartbeat = 100ms
2018-04-29 12:50:40.614345 I | etcdserver: election = 1000ms
2018-04-29 12:50:40.614456 I | etcdserver: snapshot count = 10000
2018-04-29 12:50:40.614655 I | etcdserver: advertise client URLs = http://127.0.0.1:2379
2018-04-29 12:50:40.614810 I | etcdserver: initial advertise peer URLs = http://localhost:2380
2018-04-29 12:50:40.615171 I | etcdserver: initial cluster = default=http://localhost:2380
2018-04-29 12:50:40.654587 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2018-04-29 12:50:40.656507 I | raft: 8e9e05c52164694d became follower at term 0
2018-04-29 12:50:40.657937 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2018-04-29 12:50:40.659182 I | raft: 8e9e05c52164694d became follower at term 1
2018-04-29 12:50:40.903630 I | etcdserver: starting server... [version: 3.1.11, cluster version: to_be_decided]
2018-04-29 12:50:40.909018 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2018-04-29 12:50:41.062445 I | raft: 8e9e05c52164694d is starting a new election at term 1
2018-04-29 12:50:41.063372 I | raft: 8e9e05c52164694d became candidate at term 2
2018-04-29 12:50:41.064553 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
2018-04-29 12:50:41.065363 I | raft: 8e9e05c52164694d became leader at term 2
2018-04-29 12:50:41.066349 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2018-04-29 12:50:41.068918 I | embed: ready to serve client requests
2018-04-29 12:50:41.072339 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2018-04-29 12:50:41.075729 I | etcdserver: published {Name:default ClientURLs:[http://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2018-04-29 12:50:41.078595 I | etcdserver: setting up the initial cluster version to 3.1
2018-04-29 12:50:41.085539 N | etcdserver/membership: set the initial cluster version to 3.1
2018-04-29 12:50:41.086306 I | etcdserver/api: enabled capabilities for version 3.1
pi@pi2:~ $ date
Sun 29 Apr 13:56:35 UTC 2018
pi@pi2:~ $ sudo docker logs a56cbef4c32d
pi@pi2:~ $ sudo docker logs 589d1a850640
I0429 13:51:56.129947       1 server.go:121] Version: v1.9.7
I0429 13:52:14.166056       1 feature_gate.go:190] feature gates: map[Initializers:true]
I0429 13:52:14.166369       1 initialization.go:90] enabled Initializers feature as part of admission plugin setup
I0429 13:52:14.239682       1 master.go:225] Using reconciler: master-count
W0429 13:52:15.035595       1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources.
W0429 13:52:15.176396       1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0429 13:52:15.189460       1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0429 13:52:15.337137       1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/04/29 13:52:15 log.go:33: [restful/swagger] listing is available at https://192.168.171.200:6443/swaggerapi
[restful] 2018/04/29 13:52:15 log.go:33: [restful/swagger] https://192.168.171.200:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/04/29 13:52:28 log.go:33: [restful/swagger] listing is available at https://192.168.171.200:6443/swaggerapi
[restful] 2018/04/29 13:52:28 log.go:33: [restful/swagger] https://192.168.171.200:6443/swaggerui/ is mapped to folder /swagger-ui/
I0429 13:53:15.447570       1 serve.go:96] Serving securely on [::]:6443
I0429 13:53:15.448207       1 available_controller.go:262] Starting AvailableConditionController
I0429 13:53:15.448430       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0429 13:53:15.448904       1 crd_finalizer.go:242] Starting CRDFinalizer
I0429 13:53:15.449688       1 apiservice_controller.go:112] Starting APIServiceRegistrationController
I0429 13:53:15.450412       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0429 13:53:15.458265       1 naming_controller.go:274] Starting NamingConditionController
I0429 13:53:15.459551       1 crdregistration_controller.go:110] Starting crd-autoregister controller
I0429 13:53:15.459701       1 controller_utils.go:1019] Waiting for caches to sync for crd-autoregister controller
I0429 13:53:15.464817       1 controller.go:84] Starting OpenAPI AggregationController
I0429 13:53:15.449757       1 customresource_discovery_controller.go:152] Starting DiscoveryController
I0429 13:53:17.837683       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52962: EOF
I0429 13:53:17.983653       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52790: EOF
I0429 13:53:18.103428       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53054: EOF
I0429 13:53:18.115865       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53096: EOF
I0429 13:53:18.148235       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53130: EOF
I0429 13:53:18.471384       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52796: EOF
I0429 13:53:18.795277       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52970: EOF
I0429 13:53:18.830245       1 logs.go:41] http: TLS handshake error from 127.0.0.1:52918: EOF
I0429 13:53:19.020850       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53152: EOF
I0429 13:53:19.359088       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53052: EOF
I0429 13:53:19.405050       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53044: EOF
I0429 13:53:19.590783       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52966: EOF
I0429 13:53:19.659343       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53016: EOF
I0429 13:53:19.887481       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52734: EOF
I0429 13:53:20.005243       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52788: EOF
I0429 13:53:20.055663       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52750: EOF
I0429 13:53:20.306967       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53004: EOF
I0429 13:53:20.737716       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53084: EOF
I0429 13:53:20.769785       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53148: EOF
I0429 13:53:21.024955       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52798: EOF
I0429 13:53:21.115743       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53058: EOF
I0429 13:53:21.118478       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53006: EOF
I0429 13:53:21.159008       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53126: EOF
I0429 13:53:21.160531       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53008: EOF
I0429 13:53:21.289519       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53012: EOF
I0429 13:53:21.368318       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52746: EOF
I0429 13:53:21.414948       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53018: EOF
I0429 13:53:21.455582       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53036: EOF
I0429 13:53:21.467716       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53086: EOF
I0429 13:53:21.637202       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53024: EOF
I0429 13:53:21.685486       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52786: EOF
I0429 13:53:21.730288       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53130: EOF
I0429 13:53:21.974141       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53064: EOF
I0429 13:53:21.980781       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53144: EOF
I0429 13:53:22.108692       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52932: EOF
I0429 13:53:22.122174       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52732: EOF
I0429 13:53:22.156981       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52742: EOF
I0429 13:53:22.176540       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52752: EOF
I0429 13:53:22.304145       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52950: EOF
I0429 13:53:22.374592       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53014: EOF
I0429 13:53:22.468366       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52794: EOF
I0429 13:53:22.541673       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52748: EOF
I0429 13:53:22.586888       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52964: EOF
I0429 13:53:22.588463       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52940: EOF
I0429 13:53:22.676689       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52730: EOF
I0429 13:53:22.740637       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53104: EOF
I0429 13:53:22.824036       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53060: EOF
I0429 13:53:22.881591       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52758: EOF
I0429 13:53:22.967194       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53108: EOF
I0429 13:53:23.044465       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53094: EOF
I0429 13:53:23.071576       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52760: EOF
I0429 13:53:23.151402       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52766: EOF
I0429 13:53:23.239746       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52938: EOF
I0429 13:53:23.265224       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53088: EOF
I0429 13:53:23.274635       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52774: EOF
I0429 13:53:23.298669       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53176: EOF
I0429 13:53:23.314835       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52784: EOF
I0429 13:53:23.346149       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52972: EOF
I0429 13:53:23.347803       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53082: EOF
I0429 13:53:23.405529       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52768: EOF
I0429 13:53:23.440187       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52946: EOF
I0429 13:53:23.560409       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52944: EOF
I0429 13:53:23.565675       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52792: EOF
I0429 13:53:23.598682       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53150: EOF
I0429 13:53:23.660086       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52956: EOF
I0429 13:53:23.675836       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52996: EOF
I0429 13:53:23.684114       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52802: EOF
I0429 13:53:23.775871       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53048: EOF
I0429 13:53:23.811232       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52992: EOF
I0429 13:53:23.840390       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53128: EOF
I0429 13:53:23.901550       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52780: EOF
I0429 13:53:23.922143       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52952: EOF
I0429 13:53:23.961493       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52968: EOF
I0429 13:53:23.982601       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53046: EOF
I0429 13:53:24.013616       1 logs.go:41] http: TLS handshake error from 127.0.0.1:52916: EOF
I0429 13:53:24.040222       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52756: EOF
I0429 13:53:24.078830       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0429 13:53:24.152779       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52782: EOF
I0429 13:53:24.156273       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53030: EOF
I0429 13:53:24.157178       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53128: EOF
I0429 13:53:24.161485       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53092: EOF
I0429 13:53:24.203300       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52978: EOF
I0429 13:53:24.219462       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52948: EOF
I0429 13:53:24.243813       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52738: EOF
I0429 13:53:24.256769       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52960: EOF
I0429 13:53:24.296070       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52990: EOF
I0429 13:53:24.302921       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52744: EOF
I0429 13:53:24.304393       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53038: EOF
I0429 13:53:24.336324       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52770: EOF
I0429 13:53:24.491170       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53178: EOF
I0429 13:53:24.509865       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53082: EOF
I0429 13:53:24.517630       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53062: EOF
I0429 13:53:24.535353       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52800: EOF
I0429 13:53:24.555623       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53078: EOF
I0429 13:53:24.555947       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53066: EOF
I0429 13:53:24.594670       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53100: EOF
I0429 13:53:24.627459       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52754: EOF
I0429 13:53:24.647998       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53042: EOF
I0429 13:53:24.658392       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53106: EOF
I0429 13:53:24.699336       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53188: EOF
I0429 13:53:24.738234       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53224: EOF
I0429 13:53:24.798999       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53050: EOF
I0429 13:53:24.817274       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53000: EOF
I0429 13:53:24.834344       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52848: EOF
I0429 13:53:24.847782       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53230: EOF
I0429 13:53:24.870484       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53002: EOF
I0429 13:53:24.874201       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52998: EOF
I0429 13:53:24.888555       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53222: EOF
I0429 13:53:24.903425       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53056: EOF
I0429 13:53:24.908759       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53132: EOF
I0429 13:53:24.935374       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53134: EOF
I0429 13:53:24.959919       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53124: EOF
I0429 13:53:24.991112       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53138: EOF
I0429 13:53:24.994239       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52954: EOF
I0429 13:53:24.994784       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52764: EOF
I0429 13:53:25.028079       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53040: EOF
I0429 13:53:25.028369       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53146: EOF
I0429 13:53:25.040762       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53140: EOF
I0429 13:53:25.046147       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53142: EOF
I0429 13:53:25.049716       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52958: EOF
I0429 13:53:25.080764       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53102: EOF
I0429 13:53:25.089897       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53090: EOF
I0429 13:53:25.110103       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53084: EOF
I0429 13:53:25.138663       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53098: EOF
I0429 13:53:25.158844       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53010: EOF
I0429 13:53:25.166592       1 logs.go:41] http: TLS handshake error from 192.168.171.200:52988: EOF
I0429 13:53:25.195803       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53136: EOF
I0429 13:53:25.341608       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53092: EOF
I0429 13:53:25.350343       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53110: EOF
I0429 13:53:25.486358       1 logs.go:41] http: TLS handshake error from 192.168.171.200:53140: EOF
E0429 13:53:25.492740       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Namespace: Get https://127.0.0.1:6443/api/v1/namespaces?limit=500&resourceVersion=0: net/http: TLS handshake timeout
E0429 13:53:25.522177       1 controller_utils.go:1022] Unable to sync caches for crd-autoregister controller
I0429 13:53:25.522369       1 crdregistration_controller.go:115] Shutting down crd-autoregister controller
I0429 13:53:25.522572       1 naming_controller.go:278] Shutting down NamingConditionController
E0429 13:53:25.522844       1 cache.go:35] Unable to sync caches for AvailableConditionController controller
I0429 13:53:25.523083       1 crd_finalizer.go:246] Shutting down CRDFinalizer
I0429 13:53:25.524607       1 apiservice_controller.go:124] Shutting down APIServiceRegistrationController
E0429 13:53:25.530106       1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: Get https://127.0.0.1:6443/api/v1/endpoints?limit=500&resourceVersion=0: net/http: TLS handshake timeout
I0429 13:53:25.552406       1 controller.go:90] Shutting down OpenAPI AggregationController
I0429 13:53:25.554171       1 serve.go:136] Stopped listening on [::]:6443
I0429 13:53:25.556089       1 available_controller.go:266] Shutting down AvailableConditionController
I0429 13:53:25.561504       1 logs.go:41] http: TLS handshake error from 127.0.0.1:53288: EOF
E0429 13:53:25.491752       1 customresource_discovery_controller.go:155] timed out waiting for caches to sync
I0429 13:53:25.575971       1 customresource_discovery_controller.go:156] Shutting down DiscoveryController
E0429 13:53:25.624774       1 storage_rbac.go:179] unable to initialize clusterrolebindings: Get https://127.0.0.1:6443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings: dial tcp 127.0.0.1:6443: getsockopt: connection refused
pi@pi2:~ $ 

Docker info:

pi@pi2:~ $ sudo docker info
Containers: 9
 Running: 8
 Paused: 0
 Stopped: 1
Images: 5
Server Version: 18.04.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.34-v7+
Operating System: Raspbian GNU/Linux 9 (stretch)
OSType: linux
Architecture: armv7l
CPUs: 4
Total Memory: 927.2MiB
Name: pi2
ID: 7S4P:WHCY:KMFL:FXA7:HTPQ:HKCH:522O:HNFM:2H4V:2FKB:LYKC:RYYT
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
pi@pi2:~ $ 

Kubernetes version (I have tried various versions, incl. 1.10.2, 1.10.1, 1.10.0, 1.9.x, …):

pi@pi2:~ $ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/arm"}
pi@pi2:~ $ 

Example of exact output of kubeadm init (leading up to hang + after it eventually times out)

pi@pi2:~ $ sudo kubeadm init --token-ttl=0
[init] Using Kubernetes version: v1.10.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.04.0-ce. Max validated version: 17.03
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [pi2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.171.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [pi2] and IPs [192.168.171.200]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	- Either there is no internet connection, or imagePullPolicy is set to "Never",
	  so the kubelet cannot pull or find the following control plane images:
		- k8s.gcr.io/kube-apiserver-arm:v1.10.2
		- k8s.gcr.io/kube-controller-manager-arm:v1.10.2
		- k8s.gcr.io/kube-scheduler-arm:v1.10.2
		- k8s.gcr.io/etcd-arm:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
pi@pi2:~ $

Effective install process (/boot/cmdline.txt is already updated in my disk images, as is adding the ssh authorized_key, etc.):

curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod pi -aG docker

sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

#sudo apt-get install -y kubeadm=1.9.6-00 kubectl=1.9.6-00 kubelet=1.9.6-00  # or similar, to pin a specific version; otherwise:
sudo apt-get install -y kubeadm

sudo kubeadm init --token-ttl=0

I've experienced the same, but only after rebooting the box. It was all OK when I created the CentOS instance in OpenStack and installed k8s on it, but when I tried to install k8s after rebooting the instance, or rebooted after the k8s install, k8s no longer worked / the install hung as described above. The apiserver was trying to come up and then timed out and stopped. It turned out that the problem was SELinux, as described here: https://github.com/kubernetes/kubeadm/issues/417. It all works fine after setting SELINUX=permissive in /etc/selinux/config.
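
A minimal sketch of that fix (permissive for the current boot, and persisted across reboots):

sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config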

I've turned up the logging verbosity on the kubelet and am now seeing these probe messages:

Apr 16 23:03:40 rpi3-01 kubelet[7270]: I0416 23:03:40.663747    7270 prober.go:111] Liveness probe for "etcd-rpi3-01_kube-system(6e46efd4679b5b164d314731383c326d):etcd" failed (failure): 2018-04-16 23:03:38.529485 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
Apr 16 23:03:40 rpi3-01 kubelet[7270]: Error:  grpc: timed out when dialing

It looks like this might be related to the healthcheck-client.crt not having a SAN entry for IP 127.0.0.1, but I'm not sure. I need to create a certificate similar to the existing one and try that; I'll check in the morning.
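
A quick way to check that suspicion before regenerating anything (just inspecting the SANs on the existing certs):

openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/healthcheck-client.crt | grep -A1 'Subject Alternative Name'
openssl x509 -noout -text -in /etc/kubernetes/pki/etcd/server.crt | grep -A1 'Subject Alternative Name'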

kubeadm init --apiserver-advertise-address 192.168.33.11 --pod-network-cidr=192.168.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

I am getting this error when running kubeadm init on the master or any other machine. Previously it worked fine.
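
That one is a kubeadm preflight check rather than the hang above: the control-plane node needs at least 2 CPUs. Giving the machine 2 vCPUs is the real fix; on a throwaway test box the check can also be skipped (a sketch, use with care):

sudo kubeadm init --apiserver-advertise-address 192.168.33.11 \
  --pod-network-cidr=192.168.0.0/16 \
  --ignore-preflight-errors=NumCPU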

OS Ubuntu 16.04 / 18.04, kubeadm version 1.10.5. Adding no_proxy to /etc/environment solved my problem! no_proxy="localhost,127.0.0.1,…"

@FFFEGO - Thanks Vlad, I tried a fresh install from that guide, still the same issue.

Maybe my internet connection is too slow (~8 Mbps)?

I saw a few under-voltage messages in journalctl, but after switching to a better power supply and repeating, I still have the same issue (the under-voltage messages are gone, though).

What changes did you make to switch the cgroup driver? I’ve tried the following docker override

# /etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --exec-opt native.cgroupdriver=systemd

and passing --cgroup-driver=systemd (with and without --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice) to kubelet, and still ended up with the stalled kubeadm init.
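
For what it's worth, the usual sanity check here is that docker and the kubelet end up on the same driver; a sketch of how to compare them and apply a change:

# what docker is actually using
docker info 2>/dev/null | grep -i 'cgroup driver'
# what the running kubelet was started with (the flag shows up in the process list)
ps -ef | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--cgroup-driver'
# after changing either side, reload and restart both
sudo systemctl daemon-reload
sudo systemctl restart docker kubelet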

On a side note, does anyone know how we can make the kubeadm logs more verbose, to understand what/why it's actually hosed? Many thanks.

Same for v1.10.0 and v1.11.0 too.

# ./kubeadm init --kubernetes-version 1.10.0 
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing etcd/ca certificate and key.
[certificates] Using the existing etcd/server certificate and key.
[certificates] Using the existing etcd/peer certificate and key.
[certificates] Using the existing etcd/healthcheck-client certificate and key.
[certificates] Using the existing apiserver-etcd-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.