k3s: Cannot start K3s with docker container runtime on CentOS 7.8

Environmental Info: K3s Version: v1.19.3+k3s3 (0e4fbfef)

Node(s) CPU architecture, OS, and Version: amd64, CentOS 7.8 (using the same steps, it works fine on CentOS 7.6)

Linux iZ6weix7w7e0sy67ak2vt0Z 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration: 1 master

Describe the bug:

K3s fails to start when using the docker container runtime on CentOS 7.8.

Steps To Reproduce:

Expected behavior:

K3s should start successfully.

Actual behavior:

K3s fails to start and kubectl cannot reach the API server:

[root@iZ6weix7w7e0sy67ak2vt0Z ~]# kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
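
When kubectl is refused like this, the cause is usually visible in the k3s service logs. A minimal sketch of pulling them from journald (presumably how the excerpt below was captured):

    sudo systemctl status k3s --no-pager      # shows the restart loop and last exit status
    sudo journalctl -u k3s --no-pager -n 200  # last 200 lines of k3s server/agent output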

Additional context / logs:

Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z systemd: k3s.service holdoff time over, scheduling restart.
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.950638268+08:00" level=info msg="Starting k3s v1.19.3+k3s3 (0e4fbfef)"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.951202912+08:00" level=info msg="Cluster bootstrap already complete"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.962970507+08:00" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.963001178+08:00" level=info msg="Configuring database table schema and indexes, this may take a moment..."
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.963089752+08:00" level=info msg="Database tables and indexes are up to date"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.963862502+08:00" level=info msg="Kine listening on unix://kine.sock"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:04.963954470+08:00" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=unknown --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=unix://kine.sock --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.964869    5531 server.go:652] external host was not specified, using 172.16.0.85
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.965089    5531 server.go:177] Version: v1.19.3+k3s3
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.972739    5531 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.972757    5531 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.975600    5531 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.975614    5531 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Nov 18 20:31:04 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:04.992685    5531 master.go:271] Using reconciler: lease
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.298234    5531 genericapiserver.go:412] Skipping API batch/v2alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.311365    5531 genericapiserver.go:412] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.325280    5531 genericapiserver.go:412] Skipping API node.k8s.io/v1alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.341164    5531 genericapiserver.go:412] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.344341    5531 genericapiserver.go:412] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.358712    5531 genericapiserver.go:412] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.399779    5531 genericapiserver.go:412] Skipping API apps/v1beta2 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.399798    5531 genericapiserver.go:412] Skipping API apps/v1beta1 because it has no resources.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.411350    5531 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.411368    5531 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.419323096+08:00" level=info msg="Running kube-scheduler --address=127.0.0.1 --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --port=10251 --profiling=false --secure-port=0"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.419627    5531 registry.go:173] Registering SelectorSpread plugin
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.419641    5531 registry.go:173] Registering SelectorSpread plugin
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.419963515+08:00" level=info msg="Running kube-controller-manager --address=127.0.0.1 --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --port=10252 --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.421563967+08:00" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.421590659+08:00" level=info msg="To join node to cluster: k3s agent -s https://172.16.0.85:6443 -t ${NODE_TOKEN}"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.422972956+08:00" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.422997161+08:00" level=info msg="Run: k3s kubectl"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.423075240+08:00" level=info msg="Module overlay was already loaded"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.423089943+08:00" level=info msg="Module nf_conntrack was already loaded"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.423103849+08:00" level=info msg="Module br_netfilter was already loaded"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.423116579+08:00" level=info msg="Module iptable_nat was already loaded"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.423504108+08:00" level=info msg="Waiting for API server to become available"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.450236582+08:00" level=info msg="Cluster-Http-Server 2020/11/18 20:31:05 http: TLS handshake error from 127.0.0.1:52502: remote error: tls: bad certificate"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.453852175+08:00" level=info msg="Cluster-Http-Server 2020/11/18 20:31:05 http: TLS handshake error from 127.0.0.1:52512: remote error: tls: bad certificate"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.460236221+08:00" level=info msg="certificate CN=iz6weix7w7e0sy67ak2vt0z signed by CN=k3s-server-ca@1605702556: notBefore=2020-11-18 12:29:16 +0000 UTC notAfter=2021-11-18 12:31:05 +0000 UTC"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.462631766+08:00" level=info msg="certificate CN=system:node:iz6weix7w7e0sy67ak2vt0z,O=system:nodes signed by CN=k3s-client-ca@1605702556: notBefore=2020-11-18 12:29:16 +0000 UTC notAfter=2021-11-18 12:31:05 +0000 UTC"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.470206249+08:00" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.471715591+08:00" level=info msg="Handling backend connection request [iz6weix7w7e0sy67ak2vt0z]"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.472080499+08:00" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.472135278+08:00" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/var/lib/rancher/k3s/data/223e6420f8db0d8828a8f5ed3c44489bb8eb47aa71485404f8af8c462a29bea3/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --cpu-cfs-quota=false --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --healthz-bind-address=127.0.0.1 --hostname-override=iz6weix7w7e0sy67ak2vt0z --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/systemd/system.slice --network-plugin=cni --node-labels= --pod-infra-container-image=docker.io/rancher/pause:3.1 --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/systemd/system.slice --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.473352064+08:00" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --healthz-bind-address=127.0.0.1 --hostname-override=iz6weix7w7e0sy67ak2vt0z --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: Flag --cloud-provider has been deprecated, will be removed in 1.23, in favor of removing cloud provider code from Kubelet.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.482389    5531 server.go:407] Version: v1.19.3+k3s3
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.485429    5531 server.go:226] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.488553    5531 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.495183391+08:00" level=info msg="Waiting for node iz6weix7w7e0sy67ak2vt0z: nodes \"iz6weix7w7e0sy67ak2vt0z\" not found"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.495275    5531 node.go:125] Failed to retrieve node info: nodes "iz6weix7w7e0sy67ak2vt0z" is forbidden: User "system:kube-proxy" cannot get resource "nodes" in API group "" at the cluster scope
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.585590    5531 nvidia.go:61] NVIDIA GPU metrics will not be available: no NVIDIA devices found
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622626    5531 server.go:640] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622876    5531 container_manager_linux.go:289] container manager verified user specified cgroup-root exists: []
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622888    5531 container_manager_linux.go:294] Creating Container Manager object based on Node Config: {RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: KubeletCgroupsName:/systemd/system.slice ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none Rootless:false}
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622973    5531 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622980    5531 container_manager_linux.go:324] [topologymanager] Initializing Topology Manager with none policy
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.622984    5531 container_manager_linux.go:329] Creating device plugin manager: true
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.623053    5531 client.go:77] Connecting to docker on unix:///var/run/docker.sock
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.623063    5531 client.go:94] Start docker client with request timeout=2m0s
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.634650    5531 docker_service.go:565] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.634672    5531 docker_service.go:241] Hairpin mode set to "hairpin-veth"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.658371    5531 docker_service.go:256] Docker cri networking managed by cni
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.669527    5531 docker_service.go:261] Docker Info: &{ID:O34P:FQNT:EZBC:VKNS:GUIJ:U4J6:HHMB:NAI3:4AC5:AVJ4:GDLM:F3DI Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:35 SystemTime:2020-11-18T20:31:05.659464651+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1127.19.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc009b2b960 NCPU:2 MemTotal:1818824704 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:iZ6weix7w7e0sy67ak2vt0Z Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fba4e9a7d01810a393d5d25a3621dc101981175 Expected:8fba4e9a7d01810a393d5d25a3621dc101981175} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default]}
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.669606    5531 docker_service.go:274] Setting cgroupDriver to cgroupfs
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.685004    5531 kubelet.go:261] Adding pod path: /var/lib/rancher/k3s/agent/pod-manifests
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.685046    5531 kubelet.go:273] Watching apiserver
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.721585    5531 kuberuntime_manager.go:214] Container runtime docker initialized, version: 19.03.13, apiVersion: 1.40.0
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.721868    5531 server.go:1148] Started kubelet
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.724479    5531 kubelet.go:1218] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.724981    5531 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.728914    5531 server.go:152] Starting to listen on 0.0.0.0:10250
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.729462    5531 server.go:424] Adding debug handlers to kubelet server.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.731997    5531 volume_manager.go:265] Starting Kubelet Volume Manager
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.734936    5531 desired_state_of_world_populator.go:139] Desired state populator starts to run
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.767085    5531 status_manager.go:158] Starting to sync pod status with apiserver
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.767109    5531 kubelet.go:1741] Starting kubelet main sync loop.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.767138    5531 kubelet.go:1765] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.789444    5531 controller.go:228] failed to get node "iz6weix7w7e0sy67ak2vt0z" when trying to set owner ref to the node lease: nodes "iz6weix7w7e0sy67ak2vt0z" not found
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.868422    5531 kubelet.go:1765] skipping pod synchronization - container runtime status check may not have completed yet
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.896006    5531 kubelet.go:2183] node "iz6weix7w7e0sy67ak2vt0z" not found
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.960211    5531 kubelet_node_status.go:70] Attempting to register node iz6weix7w7e0sy67ak2vt0z
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972573    5531 cpu_manager.go:184] [cpumanager] starting with none policy
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972585    5531 cpu_manager.go:185] [cpumanager] reconciling every 10s
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972609    5531 state_mem.go:36] [cpumanager] initializing new in-memory state store
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972753    5531 state_mem.go:88] [cpumanager] updated default cpuset: ""
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972761    5531 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.972775    5531 policy_none.go:43] [cpumanager] none policy: Start
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: W1118 20:31:05.973305    5531 manager.go:596] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.973470    5531 plugin_manager.go:114] Starting Kubelet Plugin Manager
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: E1118 20:31:05.974601    5531 eviction_manager.go:260] eviction manager: failed to get summary stats: failed to get node info: node "iz6weix7w7e0sy67ak2vt0z" not found
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.988515514+08:00" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.988746208+08:00" level=info msg="Kube API server is now running"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.988764034+08:00" level=info msg="k3s is up and running"
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: Flag --address has been deprecated, see --bind-address instead.
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.991655    5531 controllermanager.go:175] Version: v1.19.3+k3s3
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: I1118 20:31:05.991921    5531 deprecated_insecure_serving.go:53] Serving insecurely on 127.0.0.1:10252
Nov 18 20:31:05 iZ6weix7w7e0sy67ak2vt0Z k3s: time="2020-11-18T20:31:05.992604473+08:00" level=fatal msg="server stopped: http: Server closed"
Nov 18 20:31:06 iZ6weix7w7e0sy67ak2vt0Z systemd: k3s.service: main process exited, code=exited, status=1/FAILURE
Nov 18 20:31:06 iZ6weix7w7e0sy67ak2vt0Z systemd: Unit k3s.service entered failed state.
Nov 18 20:31:06 iZ6weix7w7e0sy67ak2vt0Z systemd: k3s.service failed.

Most upvoted comments

This problem still seems to exist when I use the newest install script downloaded from https://get.k3s.io/

Thanks @janeczku. Looking through the prior changes related to this, I don't think we should be setting kubelet-cgroups and runtime-cgroups the way we are.

https://github.com/rancher/k3s/pull/133/commits/602f0d70b446abe3161cbe603dbda0089a8f877e

Setting --kubelet-arg runtime-cgroups=/system.slice/docker.service --kubelet-arg kubelet-cgroups=/system.slice/k3s.service accordingly allows K3s to start without a problem on CentOS 7.8 and above.
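
For reference, a minimal sketch of carrying the same kubelet args in the k3s config file rather than on the install command line (assumes a k3s release that reads /etc/rancher/k3s/config.yaml; the cgroup paths are the ones quoted above):

    # Write a config file equivalent to "--docker --kubelet-arg ... --kubelet-arg ..."
    sudo mkdir -p /etc/rancher/k3s
    sudo tee /etc/rancher/k3s/config.yaml >/dev/null <<'EOF'
    docker: true
    kubelet-arg:
      - "runtime-cgroups=/system.slice/docker.service"
      - "kubelet-cgroups=/system.slice/k3s.service"
    EOF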

Workaround:

1. curl https://releases.rancher.com/install-docker/19.03.sh | sh
2. systemctl enable docker (and disable firewalld if that's enabled)
3. sudo setenforce 0
4. sudo curl -sfL https://get.k3s.io | INSTALL_K3S_SKIP_START=true INSTALL_K3S_EXEC="server --disable traefik --docker --kubelet-arg runtime-cgroups=/system.slice/docker.service --kubelet-arg kubelet-cgroups=/system.slice/k3s.service" sh
5. Add "Wants=docker.service" and "After=docker.service" to the [Unit] section of /etc/systemd/system/k3s.service (a drop-in sketch is shown after this list)
6. sudo systemctl daemon-reload
7. sudo systemctl start k3s
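
For step 5, a sketch using a systemd drop-in instead of editing /etc/systemd/system/k3s.service in place (unit name and ordering taken from the steps above); once started, the node should register and kubectl should respond:

    # Order k3s after docker via a drop-in, then reload, start, and verify
    sudo mkdir -p /etc/systemd/system/k3s.service.d
    sudo tee /etc/systemd/system/k3s.service.d/10-docker.conf >/dev/null <<'EOF'
    [Unit]
    Wants=docker.service
    After=docker.service
    EOF
    sudo systemctl daemon-reload
    sudo systemctl start k3s
    sudo k3s kubectl get nodes   # should list the node once the kubelet is up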