kind: kind create cluster fails on a default install

Running kind create cluster immediately after installing kind with go get sigs.k8s.io/kind produces this error:

$ kind create cluster
Creating cluster "kind" ...
 βœ“ Ensuring node image (kindest/node:v1.13.3) πŸ–Ό 
 βœ“ [control-plane] Creating node container πŸ“¦ 
 βœ“ [control-plane] Fixing mounts πŸ—» 
 βœ“ [control-plane] Starting systemd πŸ–₯
 βœ“ [control-plane] Waiting for docker to be ready πŸ‹ 
 βœ“ [control-plane] Pre-loading images πŸ‹ 
 βœ“ [control-plane] Creating the kubeadm config file β›΅
ERRO[14:34:06] failed to init node with kubeadm: exit status 1  ☸ 
 βœ— [control-plane] Starting Kubernetes (this may take a minute) ☸
ERRO[14:34:06] failed to init node with kubeadm: exit status 1 
Error: failed to create cluster: failed to init node with kubeadm: exit status 1

System info:

  • openSUSE 42.3 (x86_64)
  • go1.11.2 linux/amd64
  • Docker 17.09.1-ce
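
(For anyone comparing environments, these details can be gathered with something like the following; output format varies by distribution.)

$ cat /etc/os-release | head -n 2   # OS name and version
$ go version                        # Go toolchain used to install kind
$ docker --version                  # Docker engine version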

Debug output:

$ kind create cluster --loglevel debug
DEBU[14:38:01] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}] 
Creating cluster "kind" ...
DEBU[14:38:01] Running: /usr/bin/docker [docker inspect --type=image kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c] 
INFO[14:38:01] Image: kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c present locally 
 βœ“ Ensuring node image (kindest/node:v1.13.3) πŸ–Ό
DEBU[14:38:01] Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[14:38:01] Running: /usr/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --entrypoint=/usr/local/bin/entrypoint --expose 45760 -p 45760:6443 kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c /sbin/init] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane rm -f /etc/machine-id] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane systemd-machine-id-setup] 
 βœ“ [control-plane] Creating node container πŸ“¦
DEBU[14:38:01] Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}'] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount -o remount,ro /sys] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /run] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /var/lib/docker] 
 βœ“ [control-plane] Fixing mounts πŸ—»
DEBU[14:38:01] Running: /usr/bin/docker [docker kill -s SIGUSR1 kind-control-plane] 
 βœ“ [control-plane] Starting systemd πŸ–₯ 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
DEBU[14:38:02] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker] 
 βœ“ [control-plane] Waiting for docker to be ready πŸ‹
DEBU[14:38:02] Running: /usr/bin/docker [docker exec --privileged kind-control-plane find /kind/images -name *.tar -exec docker load -i {} ;] 
DEBU[14:38:05] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version] 
 βœ“ [control-plane] Pre-loading images πŸ‹
INFO[14:38:05] Using KubeadmConfig:

apiServer:
  certSANs:
  - localhost
apiVersion: kubeadm.k8s.io/v1beta1
clusterName: kind
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
---
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration

DEBU[14:38:05] Running: /usr/bin/docker [docker cp /tmp/043838321 kind-control-plane:/kind/kubeadm.conf] 
 βœ“ [control-plane] Creating the kubeadm config file β›΅
DEBU[14:38:05] Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf] 
ERRO[14:42:30] failed to init node with kubeadm: exit status 1  ☸ 
 βœ— [control-plane] Starting Kubernetes (this may take a minute) ☸
ERRO[14:42:30] failed to init node with kubeadm: exit status 1 
DEBU[14:42:30] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind] 
DEBU[14:42:30] Running: /usr/bin/docker [docker rm -f -v kind-control-plane] 
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
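
To dig further: kind removes the failed node container by default, so here is a sketch of how to keep it around and inspect it (the container name kind-control-plane matches the log above):

$ kind create cluster --retain --loglevel debug
# then poke at the node directly:
$ docker exec -it kind-control-plane systemctl status docker   # did the inner docker come up?
$ docker exec -it kind-control-plane journalctl -u kubelet --no-pager | tail -n 50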

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 46 (43 by maintainers)

Most upvoted comments

I’m seeing this same error with an older node image kindest/node:v1.11.3. From exec’ing into the node I can see that Docker never started up (but I have not established why). The same kind code is working great with an up-to-date node image kindest/node:v1.14.1.

I know kind is a work in progress, but it would be marvellous if it could aim to support the previous three minor releases, the same support window as Kubernetes itself. (We’re using kind to test kube-bench, and it is super-useful!)

Just downloaded the latest version as suggested, and now it works. Thanks a lot!

[root@test-1nodo-custom ~]# kind create cluster --retain
Creating cluster "kind" ...
 βœ“ Ensuring node image (kindest/node:v1.13.3) πŸ–Ό
 βœ“ [control-plane] Creating node container πŸ“¦
 βœ“ [control-plane] Fixing mounts πŸ—»
 βœ“ [control-plane] Starting systemd πŸ–₯
 βœ“ [control-plane] Waiting for docker to be ready πŸ‹
 βœ“ [control-plane] Pre-loading images πŸ‹
 βœ“ [control-plane] Creating the kubeadm config file β›΅
 βœ“ [control-plane] Starting Kubernetes (this may take a minute) ☸
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info

I’ve changed the cgroup driver to systemd. It didn’t help. I still see this in the kubelet log:

Feb 15 16:38:10 kind-control-plane kubelet[774]: E0215 16:38:10.367505     774 certificate_manager.go:348] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://172.17.0.2:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 172.17.0.2:6443: connect: connection refused

I’ll try to add NO_PROXY to pkg/cluster/nodes/create.go the same way as HTTP_PROXY and HTTPS_PROXY.
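
A host-side sketch of that workaround (assuming kind forwards NO_PROXY into the nodes the way the patch intends; the node IP 172.17.0.2 comes from the kubelet log above):

# keep API-server traffic off the proxy, then recreate the cluster
$ export NO_PROXY=localhost,127.0.0.1,172.17.0.0/16
$ kind create cluster
# or drop the proxy entirely for one invocation:
$ env -u HTTP_PROXY -u HTTPS_PROXY -u http_proxy -u https_proxy kind create cluster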

When 0.3 is out (soon!) I’ll make sure we have up-to-date node images listed somewhere for the latest Kubernetes releases (1.11.10, 1.12.8, 1.13.6, 1.14.2), none of which are currently published as kind images, along with some better docs about these πŸ˜…

Just trying to finish landing podSubnet cleanup and NO_PROXY fixes.
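
(In the meantime, a specific node image can already be pinned with the --image flag; a sketch, using the v1.14.1 image reported working above:)

$ kind create cluster --image kindest/node:v1.14.1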

Just checked it, thanks. Neither of those issues seems to apply in my case: ext4 partitions, plenty of disk space and memory, no β€œfailed to apply overlay network” message on the console, and no β€œfailed to build images” message either. See the debug output above.

The latest master branch contains a patch by @pablochacin that passes the HTTP*_PROXY environment variables through to the nodes: https://github.com/kubernetes-sigs/kind/pull/275/files