kubernetes: Kubelet : Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property.

Issue

While starting kubelet v1.14.1 on CentOS 7, the service exits and the journal reports the following error:

[centos@n114-test ~]$ sudo journalctl -xeu kubelet
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.664311   20002 volume_manager.go:248] Starting Kubelet Volume Manager
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.664431   20002 controller.go:115] failed to ensure node lease exists, will retry in 200ms, error: Get https://172.16.195.5:6443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-l
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.665176   20002 desired_state_of_world_populator.go:130] Desired state populator starts to run
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.667120   20002 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://172.16.195.5:6443/apis/node.k8s.io/v1beta1/runtimeclasses
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.667475   20002 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitiali
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.731525   20002 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.734039   20002 cpu_manager.go:155] [cpumanager] starting with none policy
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.734053   20002 cpu_manager.go:156] [cpumanager] reconciling every 10s
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.734061   20002 policy_none.go:42] [cpumanager] none policy: Start
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.764283   20002 kubelet.go:1823] skipping pod synchronization - container runtime status check may not have completed yet.
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.764416   20002 kubelet_node_status.go:283] Setting node annotation to enable volume controller attach/detach
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.764496   20002 kubelet.go:2244] node "n114-test.localdomain" not found
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: I0419 11:20:09.767100   20002 kubelet_node_status.go:72] Attempting to register node n114-test.localdomain
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.767486   20002 kubelet_node_status.go:94] Unable to register node "n114-test.localdomain" with API server: Post https://172.16.195.5:6443/api/v1/nodes: dial tcp 172.16.195.5:6443: con
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.770078   20002 node_container_manager_linux.go:50] Failed to create ["kubepods"] cgroup
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: F0419 11:20:09.770102   20002 kubelet.go:1359] Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property.
Apr 19 11:20:09 n114-test.localdomain systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Apr 19 11:20:09 n114-test.localdomain systemd[1]: Unit kubelet.service entered failed state.
Apr 19 11:20:09 n114-test.localdomain systemd[1]: kubelet.service failed.

The root cause appears to be indicated by these messages:

Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: E0419 11:20:09.770078   20002 node_container_manager_linux.go:50] Failed to create ["kubepods"] cgroup
Apr 19 11:20:09 n114-test.localdomain kubelet[20002]: F0419 11:20:09.770102   20002 kubelet.go:1359] Failed to start ContainerManager Cannot set property TasksAccounting, or unknown property.

How can it be resolved?
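A quick way to confirm the diagnosis is to check the installed systemd build and try setting the property by hand (a diagnostic sketch; kubepods.slice is the slice kubelet creates when using the systemd cgroup driver):

```shell
# Show the installed systemd build; 219-30.el7_3.9 is reported in this
# thread to exhibit the bug, 219-62.el7_6.x to be fixed.
rpm -q systemd

# Try setting the same property the kubelet sets. On an affected build this
# fails with "Cannot set property TasksAccounting, or unknown property." --
# the exact message from the kubelet log above.
sudo systemctl set-property kubepods.slice TasksAccounting=yes
```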

Infos

Docker

sudo docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 7
Server Version: 18.06.2-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: false
Logging Driver: json-file
Cgroup Driver: systemd
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-514.2.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 16
Total Memory: 31.26GiB
Name: n114-test.localdomain
ID: TTCA:DPHF:H2LV:4KRS:D74C:6EUZ:HZJW:KT3B:P55M:JYXP:FZL6:GYJ2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

OS

Linux n114-test.localdomain 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Kubernetes

kubelet --version
Kubernetes v1.14.1

kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

kubeadm version: &version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 17 (5 by maintainers)

Most upvoted comments

I had the same issue; it was fixed after I updated the systemd libraries to the latest available on my CentOS 7.

Actually I ran a full yum update, so it updated a bunch of other libraries too. Now I can successfully start the kubelet service with:

kubeadm.x86_64          1.14.1-0         @kubernetes
kubectl.x86_64          1.14.1-0         @kubernetes
kubelet.x86_64          1.14.1-0         @kubernetes
kubernetes-cni.x86_64   0.7.5-0          @kubernetes

docker-ce.x86_64        3:18.09.5-3.el7  @docker
docker-ce-cli.x86_64    1:18.09.5-3.el7  @docker

systemd.x86_64          219-62.el7_6.6   @updates
systemd-libs.i686       219-62.el7_6.6   @updates
systemd-libs.x86_64     219-62.el7_6.6   @updates

Hope this helps.
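If a full yum update is too invasive, a narrower variant of this fix (a sketch; it assumes the CentOS 7 updates repository already carries a fixed systemd build such as 219-62.el7_6.x) is:

```shell
# Update only systemd and its libraries instead of the whole system
sudo yum update -y systemd systemd-libs

# Make the running PID 1 re-execute into the new binary, then retry kubelet
sudo systemctl daemon-reexec
sudo systemctl restart kubelet
sudo journalctl -u kubelet --no-pager | tail -n 20
```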

I encountered the same issue and fixed it with yum update. But in my case yum update ended up upgrading 600+ packages, which may introduce risk in production. Updating only systemd works as well.

Old version: 219-30.el7_3.9
New version: 219-62.el7_6.9

I guess it is a bug in 219-30.el7_3.9.
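The good/bad boundary can be expressed as a small shell helper (hypothetical: the cutoff at release 62 is inferred from the versions reported in this thread, not from an official errata, so treat it as a heuristic):

```shell
# Heuristic check: is this CentOS 7 systemd build old enough to hit the bug?
# 219-30.el7_3.9 failed; 219-62.el7_6.x worked (per the comments above).
needs_systemd_update() {
  rel="${1#219-}"     # drop the upstream "219-" prefix, keeping the release
  rel="${rel%%.*}"    # keep only the leading release number, e.g. "30" or "62"
  [ "$rel" -lt 62 ]   # releases below 62 are treated as affected
}

needs_systemd_update "219-30.el7_3.9" && echo "update systemd"   # prints "update systemd"
needs_systemd_update "219-62.el7_6.9" || echo "systemd is ok"    # prints "systemd is ok"
```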

In runc, there is a piece of code that tries to set the property TasksAccounting=true on kubepods.slice. With 219-30.el7_3.9, I tried systemctl set-property kubepods.slice TasksAccounting=yes and also creating /run/systemd/system/kubepods.slice.d/50-TasksAccounting.conf, but both approaches failed to set TasksAccounting. With 219-62.el7_6.9, it works well.
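For reference, the drop-in mentioned above would contain something like this (a sketch; it only takes effect on a systemd build that actually understands the property):

```shell
# Create a drop-in for kubepods.slice enabling task accounting.
# Rejected on systemd 219-30.el7_3.9, accepted on 219-62.el7_6.x.
sudo mkdir -p /run/systemd/system/kubepods.slice.d
sudo tee /run/systemd/system/kubepods.slice.d/50-TasksAccounting.conf <<'EOF'
[Slice]
TasksAccounting=yes
EOF
sudo systemctl daemon-reload
```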

BTW, according to the official systemd changelog, TasksAccounting was introduced in systemd 227, which is confusing (presumably Red Hat backported it into their 219 builds).

To fix this issue, update systemd from the CentOS repositories (yum install -y systemd), or run dockerd with the systemd cgroup driver, e.g. --exec-opt native.cgroupdriver=systemd.
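The --exec-opt flag mentioned above can equivalently go into Docker's daemon.json (note that docker info earlier in this issue already shows "Cgroup Driver: systemd", so on this particular host the systemd update, not the driver setting, is the actual fix):

```shell
# File-based equivalent of: dockerd --exec-opt native.cgroupdriver=systemd
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
```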