kind: While starting a local cluster with the extraMounts option I got [ FailedScheduling, NodeNotReady, FailedMount ]

Firstly, please accept my apologies for the title; feel free to rename it to make it more understandable.

What happened: When I start the cluster with the extra mount, the node does not start properly.

What you expected to happen: Everything works as it is supposed to.

How to reproduce it (as minimally and precisely as possible): I start the cluster with the following script:

start-cluster.sh
#!/bin/sh
set -o errexit

# create registry container unless it already exists
reg_name='kind-registry'
reg_port='5001'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi

# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster \
--image kindest/node:v1.19.11 \
--config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
nodes:
  - role: control-plane
    extraMounts:
      - hostPath: /Users/itodorenko/programming/work/projects/airflow_dags/
        containerPath: /data_dags
EOF

# connect the registry to the cluster network if not already connected
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
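
For reference, a quick sanity check that the cluster and the registry actually came up (only the container and cluster names from the script above are assumed):

# should list the "kind" cluster
kind get clusters
# both the node and the registry container should be Up
docker ps --filter name=kind-control-plane --filter name=kind-registry --format '{{.Names}}: {{.Status}}'
# the node should eventually report Ready
kubectl get nodes -o wide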

It’s pretty much the same as https://kind.sigs.k8s.io/docs/user/local-registry/ except that I:

  • added the image kindest/node:v1.19.11
  • added the mount:
    nodes:
      - role: control-plane
        extraMounts:
          - hostPath: /Users/itodorenko/programming/work/projects/airflow_dags/
            containerPath: /data_dags

Then I run a command to get all the events from the cluster as it starts:

>>> kubectl get events --sort-by='.metadata.creationTimestamp' -A
NAMESPACE            LAST SEEN   TYPE      REASON                    OBJECT                                         MESSAGE
default              3m23s       Normal    NodeHasSufficientMemory   node/kind-control-plane                        Node kind-control-plane status is now: NodeHasSufficientMemory
default              3m23s       Normal    NodeHasNoDiskPressure     node/kind-control-plane                        Node kind-control-plane status is now: NodeHasNoDiskPressure
default              3m23s       Normal    NodeHasSufficientPID      node/kind-control-plane                        Node kind-control-plane status is now: NodeHasSufficientPID
kube-system          3m11s       Normal    LeaderElection            lease/kube-controller-manager                  kind-control-plane_0fa39692-52c9-48f8-9202-a84252ec87a3 became leader
kube-system          3m11s       Normal    LeaderElection            endpoints/kube-controller-manager              kind-control-plane_0fa39692-52c9-48f8-9202-a84252ec87a3 became leader
kube-system          3m11s       Normal    LeaderElection            endpoints/kube-scheduler                       kind-control-plane_6812dd86-9351-4f8f-ad1e-bba5f7c6737b became leader
kube-system          3m11s       Normal    LeaderElection            lease/kube-scheduler                           kind-control-plane_6812dd86-9351-4f8f-ad1e-bba5f7c6737b became leader
default              3m7s        Normal    NodeAllocatableEnforced   node/kind-control-plane                        Updated Node Allocatable limit across pods
default              3m7s        Normal    NodeHasSufficientPID      node/kind-control-plane                        Node kind-control-plane status is now: NodeHasSufficientPID
default              3m7s        Normal    Starting                  node/kind-control-plane                        Starting kubelet.
default              3m7s        Normal    NodeHasSufficientMemory   node/kind-control-plane                        Node kind-control-plane status is now: NodeHasSufficientMemory
default              3m7s        Normal    NodeHasNoDiskPressure     node/kind-control-plane                        Node kind-control-plane status is now: NodeHasNoDiskPressure
kube-system          2m55s       Normal    SuccessfulCreate          daemonset/kindnet                              Created pod: kindnet-kthxr
local-path-storage   2m52s       Warning   FailedScheduling          pod/local-path-provisioner-547f784dff-slfdq    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m55s       Normal    SuccessfulCreate          daemonset/kube-proxy                           Created pod: kube-proxy-46mlh
kube-system          2m47s       Warning   FailedScheduling          pod/coredns-f9fd979d6-9mw9x                    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m47s       Warning   FailedScheduling          pod/coredns-f9fd979d6-drvv6                    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m55s       Normal    Pulled                    pod/kube-proxy-46mlh                           Container image "k8s.gcr.io/kube-proxy:v1.19.11" already present on machine
local-path-storage   2m55s       Normal    SuccessfulCreate          replicaset/local-path-provisioner-547f784dff   Created pod: local-path-provisioner-547f784dff-slfdq
kube-system          2m55s       Normal    Scheduled                 pod/kube-proxy-46mlh                           Successfully assigned kube-system/kube-proxy-46mlh to kind-control-plane
local-path-storage   2m55s       Normal    ScalingReplicaSet         deployment/local-path-provisioner              Scaled up replica set local-path-provisioner-547f784dff to 1
kube-system          2m55s       Normal    SuccessfulCreate          replicaset/coredns-f9fd979d6                   Created pod: coredns-f9fd979d6-drvv6
kube-system          2m55s       Normal    SuccessfulCreate          replicaset/coredns-f9fd979d6                   Created pod: coredns-f9fd979d6-9mw9x
kube-system          2m55s       Normal    ScalingReplicaSet         deployment/coredns                             Scaled up replica set coredns-f9fd979d6 to 2
default              2m55s       Normal    RegisteredNode            node/kind-control-plane                        Node kind-control-plane event: Registered Node kind-control-plane in Controller
kube-system          2m55s       Warning   NodeNotReady              pod/kube-apiserver-kind-control-plane          Node is not ready
kube-system          2m55s       Normal    Scheduled                 pod/kindnet-kthxr                              Successfully assigned kube-system/kindnet-kthxr to kind-control-plane
kube-system          2m54s       Normal    Pulled                    pod/kindnet-kthxr                              Container image "docker.io/kindest/kindnetd:v20210326-1e038dc5" already present on machine
kube-system          2m53s       Normal    Started                   pod/kindnet-kthxr                              Started container kindnet-cni
kube-system          2m53s       Normal    Created                   pod/kindnet-kthxr                              Created container kindnet-cni
default              2m53s       Normal    Starting                  node/kind-control-plane                        Starting kube-proxy.
kube-system          2m53s       Normal    Created                   pod/kube-proxy-46mlh                           Created container kube-proxy
kube-system          2m53s       Normal    Started                   pod/kube-proxy-46mlh                           Started container kube-proxy
default              2m47s       Normal    NodeReady                 node/kind-control-plane                        Node kind-control-plane status is now: NodeReady
local-path-storage   2m47s       Normal    Scheduled                 pod/local-path-provisioner-547f784dff-slfdq    Successfully assigned local-path-storage/local-path-provisioner-547f784dff-slfdq to kind-control-plane
local-path-storage   2m45s       Normal    Pulled                    pod/local-path-provisioner-547f784dff-slfdq    Container image "docker.io/rancher/local-path-provisioner:v0.0.14" already present on machine
local-path-storage   2m45s       Warning   FailedMount               pod/local-path-provisioner-547f784dff-slfdq    MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
local-path-storage   2m45s       Warning   FailedMount               pod/local-path-provisioner-547f784dff-slfdq    MountVolume.SetUp failed for volume "local-path-provisioner-service-account-token-25fx4" : failed to sync secret cache: timed out waiting for the condition
local-path-storage   2m44s       Normal    Created                   pod/local-path-provisioner-547f784dff-slfdq    Created container local-path-provisioner
local-path-storage   2m44s       Normal    Started                   pod/local-path-provisioner-547f784dff-slfdq    Started container local-path-provisioner
local-path-storage   2m44s       Normal    LeaderElection            endpoints/rancher.io-local-path                local-path-provisioner-547f784dff-slfdq_1e7f54c7-559f-44de-a427-da4b7d52f826 became leader
kube-system          2m42s       Normal    Scheduled                 pod/coredns-f9fd979d6-drvv6                    Successfully assigned kube-system/coredns-f9fd979d6-drvv6 to kind-control-plane
kube-system          2m42s       Normal    Scheduled                 pod/coredns-f9fd979d6-9mw9x                    Successfully assigned kube-system/coredns-f9fd979d6-9mw9x to kind-control-plane
kube-system          2m41s       Normal    Created                   pod/coredns-f9fd979d6-drvv6                    Created container coredns
kube-system          2m41s       Normal    Started                   pod/coredns-f9fd979d6-9mw9x                    Started container coredns
kube-system          2m41s       Normal    Pulled                    pod/coredns-f9fd979d6-9mw9x                    Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
kube-system          2m41s       Normal    Started                   pod/coredns-f9fd979d6-drvv6                    Started container coredns
kube-system          2m41s       Normal    Pulled                    pod/coredns-f9fd979d6-drvv6                    Container image "k8s.gcr.io/coredns:1.7.0" already present on machine
kube-system          2m41s       Normal    Created                   pod/coredns-f9fd979d6-9mw9x                    Created container coredns

Here I noticed a few warnings:

local-path-storage   2m52s       Warning   FailedScheduling          pod/local-path-provisioner-547f784dff-slfdq    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m47s       Warning   FailedScheduling          pod/coredns-f9fd979d6-9mw9x                    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m47s       Warning   FailedScheduling          pod/coredns-f9fd979d6-drvv6                    0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
kube-system          2m55s       Warning   NodeNotReady              pod/kube-apiserver-kind-control-plane          Node is not ready
local-path-storage   2m45s       Warning   FailedMount               pod/local-path-provisioner-547f784dff-slfdq    MountVolume.SetUp failed for volume "config-volume" : failed to sync configmap cache: timed out waiting for the condition
local-path-storage   2m45s       Warning   FailedMount               pod/local-path-provisioner-547f784dff-slfdq    MountVolume.SetUp failed for volume "local-path-provisioner-service-account-token-25fx4" : failed to sync secret cache: timed out waiting for the condition
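
For what it's worth, the current state of the node and of the affected pod can be inspected like this (node and pod names are the ones from the events above):

# is the node still tainted / NotReady, or did it recover?
kubectl get nodes
kubectl describe node kind-control-plane
# do the pods end up Running despite the warnings?
kubectl get pods -A
kubectl -n local-path-storage describe pod local-path-provisioner-547f784dff-slfdq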

I don’t understand what might be the problem with the mount:

>>> ll /Users/itodorenko/programming/work/projects/airflow_dags/
total 24
-rw-r--r--   1 itodorenko  staff   692B Mar  3 18:09 README.md
-rw-r--r--   1 itodorenko  staff     0B Feb 10 12:09 __init__.py
drwxr-xr-x  27 itodorenko  staff   864B Mar  2 20:20 data_dags
-rw-r--r--   1 itodorenko  staff   521B Mar  7 16:18 main.py
-rw-r--r--   1 itodorenko  staff   202B Mar  7 16:27 requirements.txt
drwxr-xr-x   7 itodorenko  staff   224B Feb 14 12:39 venv
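
To rule out the mount itself, the directory can also be checked from inside the node container; the container name and containerPath below are the ones from the config above:

# the extraMount should be visible inside the kind node container
docker exec kind-control-plane ls -la /data_dags
# and it should appear in the node container's mount table
docker exec kind-control-plane mount | grep data_dags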

Environment:

>>> kind version
kind v0.11.1 go1.16.4 darwin/amd64

>>> kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.7", GitCommit:"1dd5338295409edcfff11505e7bb246f0d325d15", GitTreeState:"clean", BuildDate:"2021-01-13T13:23:52Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.11", GitCommit:"c6a2f08fc4378c5381dd948d9ad9d1080e3e6b33", GitTreeState:"clean", BuildDate:"2021-05-27T23:47:11Z", GoVersion:"go1.15.12", Compiler:"gc", Platform:"linux/amd64"}

>>> docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.7.1)
  compose: Docker Compose (Docker Inc., v2.2.3)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 2
  Running: 2
  Paused: 0
  Stopped: 0
 Images: 4
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.10.76-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 7.774GiB
 Name: docker-desktop
 ID: MY35:G6TD:KDL6:GDDY:AUUK:UW2O:N3SG:HRJ7:PDZR:5YWP:L3LB:4CAR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 HTTP Proxy: http.docker.internal:3128
 HTTPS Proxy: http.docker.internal:3128
 No Proxy: hubproxy.docker.internal
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  hubproxy.docker.internal:5000
  127.0.0.0/8
 Live Restore Enabled: false

>>> sw_vers
ProductName:	macOS
ProductVersion:	12.2.1
BuildVersion:	21D62

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 20 (20 by maintainers)

Most upvoted comments

I’m referring to the kind binary v0.12.0, which is newer than the version you are using, not the node image, which you are using unpinned at v1.19.11 and which will now be the image we published for v0.12.0.
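
(For context, kind's documentation recommends pinning the node image by digest rather than by tag alone when creating a cluster, roughly like the command below; the sha256 value is only a placeholder, and the real digest for each node image is listed in the release notes of the kind version in use.)

# same idea as in start-cluster.sh, but with the node image pinned by digest
kind create cluster \
  --image "kindest/node:v1.19.11@sha256:<digest-from-the-kind-release-notes>"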

I will check these logs when I get a chance (on mobile atm)