kind does not work on OrbStack < 0.7.0
# kind create cluster --config cluster-v1.25-2nodes.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster-v1.25
networking:
  ipFamily: ipv4 # macOS only supports v4
  kubeProxyMode: "ipvs"
  # the default CNI will not be installed
  #disableDefaultCNI: true
nodes:
- role: control-plane
  image: kindest/node:v1.25.8@sha256:00d3f5314cc35327706776e95b2f8e504198ce59ac545d0200a89e69fce10b7f
  labels:
    role: master
  # kubeadmConfigPatches:
  # - |
  #   kind: InitConfiguration
  #   nodeRegistration:
  #     kubeletExtraArgs:
  #       # system-reserved: cpu=4
  #       system-reserved: memory=8Gi
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: udp # Optional, defaults to tcp
- role: worker
  image: kindest/node:v1.25.8@sha256:00d3f5314cc35327706776e95b2f8e504198ce59ac545d0200a89e69fce10b7f
  labels:
    role: worker
  # kubeadmConfigPatches:
  # - |
  #   kind: JoinConfiguration
  #   nodeRegistration:
  #     kubeletExtraArgs:
  #       # system-reserved: cpu=4
  #       system-reserved: memory=8Gi
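For reference, a minimal sketch of how this config is applied and verified once creation succeeds (the kubectl context name follows kind's kind-<cluster-name> convention, and the kube-proxy ConfigMap name assumes the standard kubeadm layout):

  # create the cluster from the config above
  kind create cluster --config cluster-v1.25-2nodes.yaml

  # check that both nodes registered and are Ready
  kubectl get nodes --context kind-cluster-v1.25 -o wide

  # confirm kube-proxy actually runs in ipvs mode
  kubectl --context kind-cluster-v1.25 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'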
What happened:
I0416 09:44:09.773479 132 round_trippers.go:553] GET https://cluster-v1.25-control-plane:6443/healthz?timeout=10s in 0 milliseconds
I0416 09:44:10.278733 132 round_trippers.go:553] GET https://cluster-v1.25-control-plane:6443/healthz?timeout=10s in 1 milliseconds
I0416 09:44:10.774627 132 round_trippers.go:553] GET https://cluster-v1.25-control-plane:6443/healthz?timeout=10s in 1 milliseconds
I0416 09:44:11.272848 132 round_trippers.go:553] GET https://cluster-v1.25-control-plane:6443/healthz?timeout=10s in 0 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
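Because each kind node is itself a Docker container, the kubeadm hints above have to be run inside the node container. A hedged sketch, using the control-plane container name visible in the log output; exact output and log contents will vary:

  docker exec cluster-v1.25-control-plane systemctl status kubelet
  docker exec cluster-v1.25-control-plane journalctl -xeu kubelet | tail -n 50
  docker exec cluster-v1.25-control-plane crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a

  # kind can also bundle all node logs into a directory for inspection
  kind export logs --name cluster-v1.25 ./kind-logs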
What you expected to happen:
Creating cluster "cluster-v1.25" ...
 ✓ Ensuring node image (kindest/node:v1.25.8) 🖼
 ✓ Preparing nodes 📦 📦
 ✓ Writing configuration 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
 ✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster-v1.25"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster-v1.25
Thanks for using kind! 😊
How to reproduce it (as minimally and precisely as possible):
Use the YAML above and uncomment the kubeadmConfigPatches blocks, then create the cluster with kind create cluster --config cluster-v1.25-2nodes.yaml. Cluster creation fails.
# kind create cluster --config cluster-v1.25-2nodes.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster-v1.25
networking:
  ipFamily: ipv4 # macOS only supports v4
  kubeProxyMode: "ipvs"
  # the default CNI will not be installed
  #disableDefaultCNI: true
nodes:
- role: control-plane
  image: kindest/node:v1.25.8@sha256:00d3f5314cc35327706776e95b2f8e504198ce59ac545d0200a89e69fce10b7f
  labels:
    role: master
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        # system-reserved: cpu=4
        system-reserved: memory=8Gi
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: udp # Optional, defaults to tcp
- role: worker
  image: kindest/node:v1.25.8@sha256:00d3f5314cc35327706776e95b2f8e504198ce59ac545d0200a89e69fce10b7f
  labels:
    role: worker
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        # system-reserved: cpu=4
        system-reserved: memory=8Gi
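Once the cluster does come up (for example on a fixed OrbStack version), a hedged way to confirm that the system-reserved patch took effect, assuming kubeadm's default file layout inside the node container:

  # kubeletExtraArgs end up as kubelet flags in kubeadm-flags.env on the node
  docker exec cluster-v1.25-control-plane cat /var/lib/kubelet/kubeadm-flags.env

  # the reservation should also be reflected in the node's Allocatable figures
  kubectl --context kind-cluster-v1.25 describe node cluster-v1.25-control-plane | grep -A 6 Allocatable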
Anything else we need to know?:
Environment:
- kind version: (use kind version): kind v0.18.0 go1.20.2 darwin/arm64
- Runtime info: (use docker info or podman info): orbstack
Client:
 Context:    orbstack
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
   Version: v0.10.4
   Path:    /Users/xxx/.docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
   Version: v2.17.2
   Path:    /Users/xxx/.docker/cli-plugins/docker-compose

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 7
 Server Version: 23.0.3
 Storage Driver: overlay2
  Backing Filesystem: btrfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1fbd70374134b891f97ce19c70b6e50c7b9f4e0d
 runc version: f19387a6bec4944c770f7668ab51c4348d9c2f38
 init version:
 Security Options:
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.23-orbstack-00113-g6614aaccb205-dirty
 Operating System: Alpine Linux edge (containerized)
 OSType: linux
 Architecture: aarch64
 CPUs: 8
 Total Memory: 5.135GiB
 Name: docker
 ID: 4d5e9c90-0b41-4c15-a9ca-af8c41fdd855
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
- OS (e.g. from /etc/os-release): Darwin 192.168.0.102 21.6.0 Darwin Kernel Version 21.6.0: Wed Aug 10 14:28:25 PDT 2022; root:xnu-8020.141.5~2/RELEASE_ARM64_T8110 arm64
- Kubernetes version: (use kubectl version): Client Version: v1.27.1, Server Version: v1.25.8, GitCommit: "0ce7342c984110dfc93657d64df5dc3b2c0d1fe9"
- Any proxies or other special environment settings?:
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 31 (14 by maintainers)
We've added IPVS support in OrbStack v0.8.0.
OrbStack developer here. Really sorry for the trouble. This issue is caused by missing support for CONFIG_NETFILTER_XT_MATCH_STATISTIC in our kernel config, which is more minimal than usual because we're still relatively new and still building up a baseline set of options to cover all common use cases. We've already fixed it and enabled the necessary modules in v0.7.0 (released just over a day ago), so updating OrbStack should fix the issue. I've tested the cluster-v1.25-2nodes.yaml config with OrbStack v0.7.1 and it seems to work fine.
Let me know if you have any other concerns. Hope this helps!