k3d: [BUG] Cluster fails to start on cgroup v2

What did you do

Start a minimal cluster on Kali Linux 2020.4

    • How was the cluster created?

      • k3d cluster create

    • What did you do afterwards?

      • I inspected the error, saw it had something to do with cgroups, and noticed that the latest kernel update to Kali switched the cgroup file hierarchy from v1 to v2 (see the quick check below).
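
For reference, a quick way to confirm which cgroup hierarchy a host is using:

# prints "cgroup2fs" on a unified (v2) hierarchy, "tmpfs" on v1
stat -fc %T /sys/fs/cgroup
# the cgroup.controllers file only exists at the root of a v2 hierarchy
ls /sys/fs/cgroup/cgroup.controllers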

What did you expect to happen

That a minimal cluster would start

Screenshots or terminal output

{"log":"time=\"2021-02-10T15:54:15.154488575Z\" level=info msg=\"Containerd is now running\"\n","stream":"stderr","time":"2021-02-10T15:54:15.154604054Z"}
{"log":"time=\"2021-02-10T15:54:15.276436029Z\" level=info msg=\"Connecting to proxy\" url=\"wss://127.0.0.1:6443/v1-k3s/connect\"\n","stream":"stderr","time":"2021-02-10T15:54:15.276584849Z"}
{"log":"time=\"2021-02-10T15:54:15.344809810Z\" level=info msg=\"Handling backend connection request [k3d-minimal-default-server-0]\"\n","stream":"stderr","time":"2021-02-10T15:54:15.344941507Z"}
{"log":"time=\"2021-02-10T15:54:15.383483103Z\" level=warning msg=\"**Disabling CPU quotas due to missing cpu.cfs_period_us**\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383600244Z"}
{"log":"time=\"2021-02-10T15:54:15.383649950Z\" level=warning msg=\"**Disabling pod PIDs limit feature due to missing cgroup pids support**\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383683752Z"}
{"log":"time=\"2021-02-10T15:54:15.383773636Z\" level=info msg=\"Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --cgroups-per-qos=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --enforce-node-allocatable= --eviction-hard=imagefs.available\u003c5%,nodefs.available\u003c5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=SupportPodPidsLimit=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-minimal-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key\"\n","stream":"stderr","time":"2021-02-10T15:54:15.383842163Z"}
{"log":"time=\"2021-02-10T15:54:15.384645964Z\" level=info msg=\"Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-minimal-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables\"\n","stream":"stderr","time":"2021-02-10T15:54:15.38471723Z"}
{"log":"Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.\n","stream":"stderr","time":"2021-02-10T15:54:15.387483943Z"}
{"log":"Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.\n","stream":"stderr","time":"2021-02-10T15:54:15.387594058Z"}
{"log":"F0210 15:54:15.387923       7 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true\n","stream":"stderr","time":"2021-02-10T15:54:15.387966646Z"}
{"log":"goroutine 3978 [running]:\n","stream":"stderr","time":"2021-02-10T15:54:15.549704084Z"}

Which OS & Architecture

 * Linux Kali 2020.4, amd64 (x86_64)

Which version of k3d

 * output of `k3d version`
   $ k3d version
   k3d version v4.2.0
   k3s version v1.20.0-k3s1 (default)

Which version of docker

 * output of `docker version` and `docker info`
   $ docker version
   Client: 
   Version:           20.10.2+dfsg1
   API version:       1.41
   Go version:        go1.15.6
   Git commit:        2291f61
   Built:            Fri Jan 8 07:08:51 2021
   OS/Arch:           linux/amd64
   Experimental:      true

   Server:
    Engine:
     Version:           20.10.2+dfsg1
     API version:       1.41 (minimum version 1.12)
     Go version:        go1.15.6
     Git commit:        8891c58
     Built:             Fri Jan 8 07:08:51 2021
     OS/Arch:           linux/amd64
     Experimental:      false
    containerd:
     Version:           1.4.3~ds1
     GitCommit:         1.4.3~ds1-1+b1
    runc:
     Version:           1.0.0-rc92+dfsg1
     GitCommit:         1.0.0-rc92+dfsg1-5+b1
    docker-init:
     Version:           0.19.0
     GitCommit:

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 10
  • Comments: 41 (14 by maintainers)

Commits related to this issue

Most upvoted comments

I get exactly the same errors with cgroup v2. Any hint to fix it?

Using Debian Sid, in the meantime, I personally switched back to cgroup v1. I added systemd.unified_cgroup_hierarchy=0 to my GRUB_CMDLINE_LINUX_DEFAULT (/etc/default/grub) and then ran update-grub.

Works for me on Arch by executing grub-mkconfig -o /boot/grub/grub.cfg after adding the same to my /etc/default/grub file.
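
A condensed sketch of the GRUB-based workaround from the two comments above (keep whatever options are already in GRUB_CMDLINE_LINUX_DEFAULT and just append the new parameter; "quiet" below is only an example):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"

# regenerate the GRUB config, then reboot
sudo update-grub                              # Debian/Ubuntu
sudo grub-mkconfig -o /boot/grub/grub.cfg     # Arch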

For anyone running into this on NixOS, setting systemd.enableUnifiedCgroupHierarchy = false; in your configuration.nix ought to help. (See https://github.com/NixOS/nixpkgs/issues/111835)

export K3D_FIX_CGROUPV2=1 ; k3d cluster create default -v /dev/mapper:/dev/mapper

works on Fedora 33 with Docker and cgroup v2. Great work. Thank you @iwilltry42, @AkihiroSuda and others.

~/devel/k3d [k3d-issues-493-mj7 L|✔]$ uname -a ; k3d version ; kubectl config use-context k3d-default ; kubectl cluster-info ; kubectl get all -A
Linux mjlaptop 5.11.15-200.fc33.x86_64 #1 SMP Fri Apr 16 13:41:20 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
k3d version v4.4.3
k3s version v1.20.6-k3s1 (default)
Switched to context "k3d-default".
Kubernetes control plane is running at https://0.0.0.0:38781
CoreDNS is running at https://0.0.0.0:38781/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:38781/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
NAMESPACE     NAME                                          READY   STATUS      RESTARTS   AGE
kube-system   pod/coredns-854c77959c-8x6q9                  1/1     Running     0          9m49s
kube-system   pod/metrics-server-86cbb8457f-st8z7           1/1     Running     0          9m49s
kube-system   pod/local-path-provisioner-5ff76fc89d-qr64c   1/1     Running     0          9m49s
kube-system   pod/helm-install-traefik-2vnjc                0/1     Completed   0          9m49s
kube-system   pod/svclb-traefik-j56c2                       2/2     Running     0          9m19s
kube-system   pod/traefik-6f9cbd9bd4-hrxj2                  1/1     Running     0          9m19s

NAMESPACE     NAME                         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       service/kubernetes           ClusterIP      10.43.0.1       <none>        443/TCP                      10m
kube-system   service/kube-dns             ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       10m
kube-system   service/metrics-server       ClusterIP      10.43.80.104    <none>        443/TCP                      10m
kube-system   service/traefik-prometheus   ClusterIP      10.43.239.35    <none>        9100/TCP                     9m20s
kube-system   service/traefik              LoadBalancer   10.43.214.123   172.27.0.2    80:30827/TCP,443:30933/TCP   9m20s

NAMESPACE     NAME                           DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
kube-system   daemonset.apps/svclb-traefik   1         1         1       1            1           <none>          9m19s

NAMESPACE     NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                  1/1     1            1           10m
kube-system   deployment.apps/metrics-server           1/1     1            1           10m
kube-system   deployment.apps/local-path-provisioner   1/1     1            1           10m
kube-system   deployment.apps/traefik                  1/1     1            1           9m20s

NAMESPACE     NAME                                                DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/coredns-854c77959c                  1         1         1       9m49s
kube-system   replicaset.apps/metrics-server-86cbb8457f           1         1         1       9m49s
kube-system   replicaset.apps/local-path-provisioner-5ff76fc89d   1         1         1       9m49s
kube-system   replicaset.apps/traefik-6f9cbd9bd4                  1         1         1       9m19s

NAMESPACE     NAME                             COMPLETIONS   DURATION   AGE
kube-system   job.batch/helm-install-traefik   1/1           30s        10m
~/devel/k3d [k3d-issues-493-mj7 L|✔]$ 

Detailed logs https://github.com/mj41-gdc/k3d-debug/tree/k3d-issues-493-mj7

Hello,

I have the same issue on Arch Linux; I'm also on cgroup v2.

あ→ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc.)

Server:
 Containers: 7
  Running: 3
  Paused: 0
  Stopped: 4
 Images: 7
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runtime.v1.linux runc io.containerd.runc.v2
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 05f951a3781f4f2c1911b05e61c160e9c30eaa8e.m
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.11.11-arch1-1
 Operating System: Arch Linux
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 23.35GiB

You can give this a try now on cgroup v2: `k3d cluster create test --image iwilltry42/k3s:dev-20210427.2 --verbose`. The image is custom but only contains the new entrypoint from https://github.com/k3s-io/k3s/pull/3237. There's a discussion about moving this entrypoint script's functionality into the k3s agent, so we'll have to wait for that. iwilltry42/k3s:dev-20210427.2 is built from the current rancher/k3s:latest (sha256-17d1cc189d289649d309169f25cee5e2c2e6e25ecf5b84026c3063c6590af9c8), which is v1.21.0+k3s1.

I tested it without issues on Ubuntu 20.10 with cgroupv1 and cgroupv2 (systemd).

That’s a warning that you can ignore for now 👍 This is the line giving a hint on the restart cause:

2021-03-09T11:38:31.070373571Z F0309 11:38:31.070274 7 kubelet.go:1368] Failed to start ContainerManager cannot enter cgroupv2 "/sys/fs/cgroup/kubepods" with domain controllers -- it is in an invalid state

… so cgroups again.

For cgroup v2, k3s/k3d needs logic to evacuate the init process from the top-level cgroup to somewhere else, like this: https://github.com/moby/moby/blob/e0170da0dc6e660594f98bc66e7a98ce9c2abb46/hack/dind#L28-L37
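
The linked dind lines boil down to roughly the following sketch of that evacuation idea (run as the container's entrypoint on a cgroup v2 host):

if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    # move all processes out of the root cgroup into a child group,
    # otherwise writing cgroup.subtree_control fails with EBUSY
    mkdir -p /sys/fs/cgroup/init
    xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
    # enable all available controllers for child cgroups
    sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
        > /sys/fs/cgroup/cgroup.subtree_control
fi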

Issue still persists on k3d v4.4.2 with k3s v1.20.6-k3s1.

It would be good if the docs noted that k3d is not yet compatible with cgroup v2, so users would know in advance whether they need to adjust kernel options.

@iwilltry42 I’m able to confirm that this adds v2 support on my system. Thank you!

I just created a (temporary) fix/workaround using the entrypoint script that we can use until it is fixed upstream (in k3s). See PR #579. There’s a dev release out already: https://github.com/rancher/k3d/releases/tag/v4.4.3-dev.0. Please test it with the environment variable K3D_FIX_CGROUPV2=1 set to enable the workaround. Feedback welcome 😃
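
For example, with the dev release installed (the cluster name below is just a placeholder):

K3D_FIX_CGROUPV2=1 k3d cluster create cgv2-test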

@iwilltry42 I confirm that the image works correctly with cgroup v2 on Arch Linux.

EDIT: I also confirm it works correctly with https://github.com/rancher/k3d/releases/tag/v4.4.3-dev.0 using the environment variable.

@iwilltry42 Yes, thanks

For Arch Linux users who now run systemd v248+ and use systemd-boot, here’s how I fixed it on my system: vim /boot/loader/entries/arch.conf

...
-options	root=/dev/mapper/root
+options	root=/dev/mapper/root systemd.unified_cgroup_hierarchy=0

Then I verified with ls /sys/fs/cgroup that a blkio/ folder (among others) is present again, as described at https://wiki.archlinux.org/index.php/cgroups#Switching_to_cgroups_v2

Seems like

k3d cluster create --verbose --trace --timestamps -v /dev/mapper:/dev/mapper --image rancher/k3s:v1.20.4-k3s1 wso

started but keeps restarting for some reason.

~ $ docker ps && echo && kubectl get all -A
CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS                            PORTS                             NAMES
cef7e29af72d   rancher/k3d-proxy:v4.2.0   "/bin/sh -c nginx-pr…"   9 minutes ago   Up 9 minutes                      80/tcp, 0.0.0.0:37461->6443/tcp   k3d-wsop-serverlb
5c0ee211ec2e   rancher/k3s:v1.20.4-k3s1   "/bin/k3s server --t…"   9 minutes ago   Restarting (255) 57 seconds ago                                     k3d-wsop-server-0

NAMESPACE     NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes       ClusterIP   10.43.0.1      <none>        443/TCP                  9m54s
kube-system   service/kube-dns         ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   9m52s
kube-system   service/metrics-server   ClusterIP   10.43.217.47   <none>        443/TCP                  9m52s
/sys/fs/cgroup $ docker logs --follow k3d-wsop-server-0 2>&1 | grep -i error -B 4 | tail -n 25
--
time="2021-03-03T16:19:44.510502112Z" level=info msg="Module overlay was already loaded"
time="2021-03-03T16:19:44.510562036Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-03-03T16:19:44.510584246Z" level=info msg="Module br_netfilter was already loaded"
time="2021-03-03T16:19:44.510603312Z" level=info msg="Module iptable_nat was already loaded"
time="2021-03-03T16:19:44.532234242Z" level=info msg="Cluster-Http-Server 2021/03/03 16:19:44 http: TLS handshake error from 127.0.0.1:55982: remote error: tls: bad certificate"
time="2021-03-03T16:19:44.539567218Z" level=info msg="Cluster-Http-Server 2021/03/03 16:19:44 http: TLS handshake error from 127.0.0.1:55988: remote error: tls: bad certificate"
--
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
W0303 16:19:45.632310       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
I0303 16:19:45.632948       7 server.go:412] Version: v1.20.4+k3s1
W0303 16:19:45.633098       7 proxier.go:651] Failed to read file /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin with error open /lib/modules/5.10.19-200.fc33.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
--
I0303 16:19:47.130403       7 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0303 16:19:47.137979       7 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
time="2021-03-03T16:19:47.651998207Z" level=info msg="Waiting for node k3d-wsop-server-0 CIDR not assigned yet"
W0303 16:19:47.760260       7 handler_proxy.go:102] no RequestInfo found in the context
E0303 16:19:47.760396       7 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
--
I0303 16:19:49.914319       7 request.go:655] Throttling request took 1.047944521s, request: GET:https://127.0.0.1:6444/apis/k3s.cattle.io/v1?timeout=32s
time="2021-03-03T16:19:50.629767889Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2021-03-03T16:19:50.629843807Z" level=info msg="Connecting to proxy" url="wss://172.28.0.2:6443/v1-k3s/connect"
time="2021-03-03T16:19:50.629865458Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-03-03T16:19:50.630026513Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"

Same issue on Fedora 33:

❯ k3d version
k3d version v4.2.0
k3s version v1.20.2-k3s1 (default)

❯ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 4
 Server Version: 20.10.3
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.10.16-200.fc33.x86_64
 Operating System: Fedora 33 (Cloud Edition)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 15.34GiB
 Name: ip-172-31-15-82.us-west-2.compute.internal
 ID: J7KD:DU6M:ESY2:7Z7C:JQF4:4DDA:PN4V:YAH3:RGYS:YDRC:LFCG:SHGR
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No kernel memory TCP limit support
WARNING: No oom kill disable support
WARNING: Support for cgroup v2 is experimental

Logs:

time="2021-02-24T01:09:17.112114425Z" level=info msg="Starting k3s v1.20.2+k3s1 (1d4adb03)"
time="2021-02-24T01:09:17.119709683Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2021-02-24T01:09:17.119758494Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2021-02-24T01:09:17.123875742Z" level=info msg="Database tables and indexes are up to date"
time="2021-02-24T01:09:17.125089386Z" level=info msg="Kine listening on unix://kine.sock"
time="2021-02-24T01:09:17.140581290Z" level=info msg="certificate CN=system:admin,O=system:masters signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.141329140Z" level=info msg="certificate CN=system:kube-controller-manager signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.142008240Z" level=info msg="certificate CN=system:kube-scheduler signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.142748155Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.143466897Z" level=info msg="certificate CN=system:kube-proxy signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.144108543Z" level=info msg="certificate CN=system:k3s-controller signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.144808998Z" level=info msg="certificate CN=cloud-controller-manager signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.145932675Z" level=info msg="certificate CN=kube-apiserver signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.147053204Z" level=info msg="certificate CN=system:auth-proxy signed by CN=k3s-request-header-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.148192915Z" level=info msg="certificate CN=etcd-server signed by CN=etcd-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.148832224Z" level=info msg="certificate CN=etcd-client signed by CN=etcd-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.149941869Z" level=info msg="certificate CN=etcd-peer signed by CN=etcd-peer-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.504108606Z" level=info msg="certificate CN=k3s,O=k3s signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:17 +0000 UTC"
time="2021-02-24T01:09:17.504625136Z" level=info msg="Active TLS secret  (ver=) (count 8): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-172.18.0.2:172.18.0.2 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=63817433C7020D7097F94041647F7EF794694F36]"
time="2021-02-24T01:09:17.508648371Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --feature-gates=ServiceAccountIssuerDiscovery=false --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0224 01:09:17.509687       7 server.go:659] external host was not specified, using 172.18.0.2
I0224 01:09:17.509892       7 server.go:196] Version: v1.20.2+k3s1
I0224 01:09:17.925297       7 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0224 01:09:17.926455       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:17.926469       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0224 01:09:17.927652       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:17.927668       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0224 01:09:17.955930       7 instance.go:289] Using reconciler: lease
I0224 01:09:17.994318       7 rest.go:131] the default service ipfamily for this cluster is: IPv4
W0224 01:09:18.301788       7 genericapiserver.go:419] Skipping API batch/v2alpha1 because it has no resources.
W0224 01:09:18.310771       7 genericapiserver.go:419] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.320194       7 genericapiserver.go:419] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.328529       7 genericapiserver.go:419] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.332445       7 genericapiserver.go:419] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.338364       7 genericapiserver.go:419] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.341107       7 genericapiserver.go:419] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0224 01:09:18.346281       7 genericapiserver.go:419] Skipping API apps/v1beta2 because it has no resources.
W0224 01:09:18.346301       7 genericapiserver.go:419] Skipping API apps/v1beta1 because it has no resources.
I0224 01:09:18.355798       7 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0224 01:09:18.355818       7 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
time="2021-02-24T01:09:18.365852269Z" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
time="2021-02-24T01:09:18.365885385Z" level=info msg="Waiting for API server to become available"
time="2021-02-24T01:09:18.366717179Z" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2021-02-24T01:09:18.377087749Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2021-02-24T01:09:18.377374699Z" level=info msg="To join node to cluster: k3s agent -s https://172.18.0.2:6443 -t ${NODE_TOKEN}"
time="2021-02-24T01:09:18.378913604Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2021-02-24T01:09:18.379385488Z" level=info msg="Run: k3s kubectl"
time="2021-02-24T01:09:18.379733584Z" level=info msg="Module overlay was already loaded"
time="2021-02-24T01:09:18.379831478Z" level=info msg="Module nf_conntrack was already loaded"
time="2021-02-24T01:09:18.379908677Z" level=info msg="Module br_netfilter was already loaded"
time="2021-02-24T01:09:18.380027128Z" level=info msg="Module iptable_nat was already loaded"
time="2021-02-24T01:09:18.407443193Z" level=info msg="Cluster-Http-Server 2021/02/24 01:09:18 http: TLS handshake error from 127.0.0.1:34152: remote error: tls: bad certificate"
time="2021-02-24T01:09:18.412452420Z" level=info msg="Cluster-Http-Server 2021/02/24 01:09:18 http: TLS handshake error from 127.0.0.1:34158: remote error: tls: bad certificate"
time="2021-02-24T01:09:18.431100320Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:18 +0000 UTC"
time="2021-02-24T01:09:18.450385357Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1614128957: notBefore=2021-02-24 01:09:17 +0000 UTC notAfter=2022-02-24 01:09:18 +0000 UTC"
time="2021-02-24T01:09:18.506694294Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2021-02-24T01:09:18.506916025Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
time="2021-02-24T01:09:19.509429187Z" level=info msg="Containerd is now running"
time="2021-02-24T01:09:19.521239706Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2021-02-24T01:09:19.523780251Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
time="2021-02-24T01:09:19.524892860Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
time="2021-02-24T01:09:19.524916381Z" level=warning msg="Disabling pod PIDs limit feature due to missing cgroup pids support"
time="2021-02-24T01:09:19.524972024Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --cgroups-per-qos=false --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --container-runtime=remote --containerd=unix:///run/k3s/containerd/containerd.sock --cpu-cfs-quota=false --enforce-node-allocatable= --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=SupportPodPidsLimit=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/tmp/k3s-resolv.conf --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
time="2021-02-24T01:09:19.525664896Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
W0224 01:09:19.528187       7 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0224 01:09:19.528712       7 proxier.go:651] Failed to read file /lib/modules/5.10.16-200.fc33.x86_64/modules.builtin with error open /lib/modules/5.10.16-200.fc33.x86_64/modules.builtin: no such file or directory. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.529343       7 proxier.go:661] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.529826       7 proxier.go:661] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.530268       7 proxier.go:661] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.530671       7 proxier.go:661] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0224 01:09:19.531084       7 proxier.go:661] Failed to load kernel module nf_conntrack with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
F0224 01:09:19.532466       7 server.go:181] cannot set feature gate SupportPodPidsLimit to false, feature is locked to true