kind: Unable to create a cluster inside LXD container
What happened:
I followed the instructions at https://kind.sigs.k8s.io/docs/user/quick-start/ but was unable to get a cluster created with kind. The kind create cluster process fails at the "Starting control-plane" step.
What you expected to happen:
I expected the cluster to be created successfully, with the same outcome as described at https://kind.sigs.k8s.io/docs/user/quick-start/
How to reproduce it (as minimally and precisely as possible):
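For context, the failure happens with kind running inside an LXD container. A minimal sketch of setting up such an environment (the container name and Docker install method are illustrative, not taken from the report; kind itself must also be installed inside the container):

```shell
# Launch an Ubuntu 18.04 LXD container and install Docker inside it,
# then attempt the cluster creation that fails below.
lxc launch ubuntu:18.04 tmp
lxc exec tmp -- sh -c 'curl -fsSL https://get.docker.com | sh'
lxc exec tmp -- kind create cluster --loglevel debug
```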
For the environment given below, I ran the command kind create cluster --loglevel debug
and observed the following issues:
*** Preflight verification error: ***
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.0.0-13-generic
DOCKER_VERSION: 18.06.3-ce
DOCKER_GRAPH_DRIVER: overlay2
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: [unsupported kernel release: 5.0.0-13-generic, failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/5.0.0-13-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/5.0.0-13-generic\n", err: exit status 1]
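Not claiming this is the official diagnosis, but the two modprobe errors above typically mean the container cannot see the host's kernel module tree or config. A small sketch to report which of the paths the preflight check relies on are actually visible from inside the container (it only inspects, changes nothing):

```shell
# Preflight's SystemVerification needs the kernel config and module metadata.
# Inside an LXD container these host paths are often not visible, which would
# explain the "modules.dep.bin" and "Module configs not found" errors above.
release=$(uname -r)
echo "kernel release: $release"
for p in "/lib/modules/$release/modules.dep.bin" "/boot/config-$release" "/proc/config.gz"; do
  if [ -e "$p" ]; then
    echo "present: $p"
  else
    echo "missing: $p"
  fi
done
```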
However, given that preflight errors are ignored at that stage, I assume the kernel version above was not deemed a blocker. The subsequent problems worth noting are:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
I0426 10:24:30.849958 752 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s in 20000 milliseconds
I0426 10:24:50.847667 752 round_trippers.go:438] GET https://172.17.0.2:6443/healthz?timeout=32s in 19497 milliseconds
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
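A hedged note on digging into this class of failure: the health endpoint refusing connections usually means the kubelet never came up inside the node container, and its journal says why. The node container name below is kind's default for an unnamed cluster and may differ; these commands assume the node container is still running:

```shell
# List kind node containers, then read the kubelet logs inside the
# control-plane node to see why localhost:10248/healthz is refusing
# connections. "kind-control-plane" is kind's default node name.
docker ps --filter name=kind
docker exec kind-control-plane journalctl -u kubelet --no-pager | tail -n 50
```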
Anything else we need to know?:
Environment:
- kind version:
0.3.0-alpha
- Docker version:
docker info
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 18.09.5
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.0.0-13-generic
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 15.54GiB
Name: tmp
ID: BZ7P:2PUL:2OIV:MWUZ:6UVN:TUWZ:I57Z:77NR:PXHV:JHD3:P5FO:6PBA
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: http://10.144.1.10:8080
HTTPS Proxy: http://10.144.1.10:8080
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: No swap limit support
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
- OS:
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
About this issue
- State: closed
- Created 5 years ago
- Comments: 17 (10 by maintainers)
@GrosLalo it would be nice if you could explain what changes were needed, so other users can benefit from your experience
@BenTheElder: The links helped. I realized that AppArmor was the problem; configuring the container accordingly resolved it. This issue can be closed.
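For other readers: the thread does not spell the change out, but the LXD configuration commonly reported to make kind work inside a container is to allow nesting and relax the container's AppArmor confinement. A sketch under those assumptions (the container name tmp is taken from the docker info output above; treat the exact keys as a commonly cited workaround, not the author's confirmed fix):

```shell
# Allow running Docker inside the LXD container (nesting) and run the
# container unconfined by AppArmor, then restart for the settings to apply.
lxc config set tmp security.nesting true
lxc config set tmp security.privileged true
lxc config set tmp raw.lxc 'lxc.apparmor.profile=unconfined'
lxc restart tmp
```

Note that security.privileged and an unconfined AppArmor profile significantly weaken the container's isolation, so this is best limited to disposable development hosts.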