rancher: kubelet - failed to collect filesystem stats - rootDiskErr: du command failed
Rancher Versions:
- Server: 1.4.0
- healthcheck:
- ipsec:
- network-services:
- scheduler:
- kubernetes (if applicable): rancher/k8s:v1.5.2-rancher1-2

Docker Version: 1.12.6

OS and where are the hosts located? (cloud, bare metal, etc): RHEL 7.3, VMware

Setup Details (single node rancher vs. HA rancher, internal DB vs. external DB): single node Rancher, external MySQL DB

Environment Type (Cattle/Kubernetes/Swarm/Mesos): Kubernetes (k8s)
2/9/2017 11:54:51 AM E0209 17:54:51.517354 25464 fsHandler.go:121] failed to collect filesystem stats - rootDiskErr: du command failed on /mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c with output stdout: 943940 /mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c
, stderr: du: cannot access '/mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c/merged/proc/17285/task/17285/fd/4': No such file or directory
du: cannot access '/mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c/merged/proc/17285/task/17285/fdinfo/4': No such file or directory
du: cannot access '/mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c/merged/proc/17285/fd/4': No such file or directory
du: cannot access '/mnt/dockerdata/overlay/a7a4ef47300a0b3fe397070b096505a624f6b4ec292929e505ea109ffa7f801c/merged/proc/17285/fdinfo/4': No such file or directory
- exit status 1, rootInodeErr: <nil>, extraDiskErr: <nil>
About this issue
- State: closed
- Created 7 years ago
- Reactions: 1
- Comments: 17
@pulberg - I attached a new device to my machine, mounted it on /mnt/dockerdata, and started a pod with this spec.
My only log lines are
I deleted the pod using docker kill and stopped the rc, but I still could not reproduce the issue.
However, I was able to find that the error you are seeing comes from cAdvisor, and it is generally caused by Docker not cleaning up cgroups after containers are deleted.
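For context, the rough shape of what cAdvisor does here is sketched below: it shells out to `du` over the container's overlay directory and treats a non-zero exit as a failure. This is a minimal sketch rather than cAdvisor's actual fsHandler code, and the layer directory is a placeholder, but it shows why a process exiting inside the container while `du` walks a `/proc` visible under `.../merged` produces exactly the "cannot access ... No such file or directory" lines and the "exit status 1" seen in the log above.

```go
package main

import (
	"fmt"
	"os/exec"
)

// diskUsage is a minimal sketch (not cAdvisor's actual code) of the kind of
// call cAdvisor's fsHandler makes: run `du -s` over a container's overlay
// directory and return the combined output plus any exit error.
func diskUsage(dir string) (string, error) {
	out, err := exec.Command("du", "-s", dir).CombinedOutput()
	if err != nil {
		// When a file vanishes mid-walk (for example /proc/<pid>/fd/4 of a
		// process that just exited), du still prints a total but exits 1,
		// which surfaces as "du command failed ... exit status 1" in the
		// kubelet log.
		return string(out), fmt.Errorf("du command failed on %s: %v", dir, err)
	}
	return string(out), nil
}

func main() {
	// Placeholder layer ID; substitute a real directory from your host.
	out, err := diskUsage("/mnt/dockerdata/overlay/<layer-id>")
	fmt.Print(out)
	if err != nil {
		fmt.Println("error:", err)
	}
}
```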
Related upstream issues:
- https://github.com/kubernetes/kubernetes/issues/16651
- https://github.com/kubernetes/kubernetes/issues/21022
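If you want to check whether the host actually has leftover per-container cgroups (the cause discussed in those upstream issues), one way is to compare the container directories under the cgroup hierarchy with the container IDs the Docker daemon still knows about. The sketch below is a hypothetical diagnostic, not part of Rancher or cAdvisor, and it assumes Docker's default cgroupfs driver, which places container cgroups under /sys/fs/cgroup/memory/docker/<container-id>.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Container IDs Docker still knows about (running and stopped).
	out, err := exec.Command("docker", "ps", "-aq", "--no-trunc").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "docker ps failed:", err)
		os.Exit(1)
	}
	known := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		known[id] = true
	}

	// Per-container cgroup directories created by the cgroupfs driver.
	// Any directory whose name is not a known container ID is a stale
	// cgroup left behind after the container was deleted.
	entries, err := os.ReadDir("/sys/fs/cgroup/memory/docker")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read cgroup dir:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		if e.IsDir() && !known[e.Name()] {
			fmt.Println("stale cgroup:", e.Name())
		}
	}
}
```

If this reports stale entries, the upstream issues linked above cover the cleanup side; they treat these kubelet messages mainly as log noise from failed stats collection rather than a functional problem.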