kind: kind create cluster fails on a default install
Running kind create cluster immediately after installing with go get sigs.k8s.io/kind produces this error:
$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.13.3)
 ✓ [control-plane] Creating node container
 ✓ [control-plane] Fixing mounts
 ✓ [control-plane] Starting systemd
 ✓ [control-plane] Waiting for docker to be ready
 ✓ [control-plane] Pre-loading images
 ✓ [control-plane] Creating the kubeadm config file
ERRO[14:34:06] failed to init node with kubeadm: exit status 1
 ✗ [control-plane] Starting Kubernetes (this may take a minute)
ERRO[14:34:06] failed to init node with kubeadm: exit status 1
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
System info:
- openSUSE 42.3 (x86_64)
- go1.11.2 linux/amd64
- Docker 17.09.1-ce
Debug output:
$ kind create cluster --loglevel debug
DEBU[14:38:01] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}}]
Creating cluster "kind" ...
DEBU[14:38:01] Running: /usr/bin/docker [docker inspect --type=image kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c]
INFO[14:38:01] Image: kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c present locally
 ✓ Ensuring node image (kindest/node:v1.13.3)
DEBU[14:38:01] Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}']
DEBU[14:38:01] Running: /usr/bin/docker [docker run -d --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kind-control-plane --name kind-control-plane --label io.k8s.sigs.kind.cluster=kind --label io.k8s.sigs.kind.role=control-plane --entrypoint=/usr/local/bin/entrypoint --expose 45760 -p 45760:6443 kindest/node:v1.13.3@sha256:d1af504f20f3450ccb7aed63b67ec61c156f9ed3e8b0d973b3dee3c95991753c /sbin/init]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane rm -f /etc/machine-id]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane systemd-machine-id-setup]
 ✓ [control-plane] Creating node container
DEBU[14:38:01] Running: /usr/bin/docker [docker info --format '{{json .SecurityOptions}}']
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount -o remount,ro /sys]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /run]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged kind-control-plane mount --make-shared /var/lib/docker]
 ✓ [control-plane] Fixing mounts
DEBU[14:38:01] Running: /usr/bin/docker [docker kill -s SIGUSR1 kind-control-plane]
 ✓ [control-plane] Starting systemd
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
DEBU[14:38:01] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
DEBU[14:38:02] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane systemctl is-active docker]
 ✓ [control-plane] Waiting for docker to be ready
DEBU[14:38:02] Running: /usr/bin/docker [docker exec --privileged kind-control-plane find /kind/images -name *.tar -exec docker load -i {} ;]
DEBU[14:38:05] Running: /usr/bin/docker [docker exec --privileged -t kind-control-plane cat /kind/version]
 ✓ [control-plane] Pre-loading images
INFO[14:38:05] Using KubeadmConfig:
apiServer:
  certSANs:
  - localhost
apiVersion: kubeadm.k8s.io/v1beta1
clusterName: kind
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
---
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
DEBU[14:38:05] Running: /usr/bin/docker [docker cp /tmp/043838321 kind-control-plane:/kind/kubeadm.conf]
 ✓ [control-plane] Creating the kubeadm config file
DEBU[14:38:05] Running: /usr/bin/docker [docker exec --privileged kind-control-plane kubeadm init --ignore-preflight-errors=all --config=/kind/kubeadm.conf]
ERRO[14:42:30] failed to init node with kubeadm: exit status 1
 ✗ [control-plane] Starting Kubernetes (this may take a minute)
ERRO[14:42:30] failed to init node with kubeadm: exit status 1
DEBU[14:42:30] Running: /usr/bin/docker [docker ps -q -a --no-trunc --filter label=io.k8s.sigs.kind.cluster --format {{.Names}}\t{{.Label "io.k8s.sigs.kind.cluster"}} --filter label=io.k8s.sigs.kind.cluster=kind]
DEBU[14:42:30] Running: /usr/bin/docker [docker rm -f -v kind-control-plane]
Error: failed to create cluster: failed to init node with kubeadm: exit status 1
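Note that kind removes the failed node container at the end of the run (the docker rm -f -v step above), so the docker and kubelet logs disappear with it. Passing --retain to kind create cluster keeps the node container around for inspection. As a small illustrative sketch (not part of kind itself), the inspection commands can be built like this; the node name kind-control-plane matches the log above, and journalctl is available because the node runs systemd:

```python
def journal_cmd(node, unit):
    """Build a `docker exec` command that dumps a systemd unit's journal
    from inside a retained kind node container."""
    return ["docker", "exec", node, "journalctl", "-u", unit, "--no-pager"]

# Units worth checking when `kubeadm init` fails inside the node:
for unit in ("docker", "kubelet"):
    print(" ".join(journal_cmd("kind-control-plane", unit)))
```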
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Comments: 46 (43 by maintainers)
Commits related to this issue
- Fix workers name validation (#306) * Fix workers name validation * Fix regex * Print version * Revert changes * Fix message * Fix version — committed to stg-0/kind by kahun 9 months ago
I'm seeing this same error with an older node image, kindest/node:v1.11.3. From exec'ing into the node I can see that Docker never started up (but I have not established why). The same kind code works great with an up-to-date node image, kindest/node:v1.14.1.
I know kind is a work in progress, but it would be marvellous if it could aim to support the same previous three minor releases as Kubernetes itself. (We're using kind to test kube-bench, and it is super useful!)
Just downloaded the latest version as suggested, and now it works. Thanks a lot!
I've changed the cgroup driver to systemd. It didn't help; I still see this in the kubelet log:
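For reference, the cgroup driver change mentioned here is normally made in Docker's /etc/docker/daemon.json on the host (standard Docker configuration; dockerd must be restarted afterwards):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
```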
I'll try to add NO_PROXY to pkg/cluster/nodes/create.go the same way as HTTP_PROXY and HTTPS_PROXY.
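The idea can be sketched generically: read the proxy variables from the host environment and turn them into docker run -e arguments for the node container. This is an illustrative sketch under that assumption, not kind's actual code:

```python
import os

PROXY_VARS = ("HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY")

def proxy_env_args(env=os.environ):
    """Turn host proxy settings into `docker run -e` arguments,
    checking both upper- and lower-case spellings."""
    args = []
    for name in PROXY_VARS:
        value = env.get(name) or env.get(name.lower())
        if value:
            args += ["-e", f"{name}={value}"]
    return args
```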
When 0.3 is out (soon!) I'll make sure we have up-to-date node images listed somewhere for the latest Kubernetes 1.11.10, 1.12.8, 1.13.6, and 1.14.2, none of which are currently published kind images, along with some better docs about these.
Just trying to finish landing the podSubnet cleanup and the NO_PROXY fixes.
Just checked it, thanks. Neither of those issues seems to be my case: ext4 partitions, plenty of disk space and memory, no "failed to apply overlay network" message on the console, and no "failed to build images" message either. See the debug output above.
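The disk-space condition ruled out here is easy to verify programmatically. A minimal sketch using the standard library, assuming Docker's default data root of /var/lib/docker:

```python
import shutil

def free_gib(path="/var/lib/docker"):
    """Return free space at `path` in GiB (Docker's default data root assumed)."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)
```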
The latest master branch contains a patch by @pablochacin that passes the HTTP*_PROXY environment variables to the nodes: https://github.com/kubernetes-sigs/kind/pull/275/files