k3s: invalid capacity 0 on image filesystem warning when starting k3s node
Version:
root@ip-10-100-105-140:~# k3s -v
k3s version v1.18.3+k3s1 (96653e8d)
K3s arguments: /usr/local/bin/k3s server --cluster-cidr 172.16.0.0/16 --service-cidr 192.168.0.0/16 --cluster-dns 192.168.0.10 --no-deploy traefik --kube-apiserver-arg feature-gates=ServiceTopology=true,EndpointSlice=true
Describe the bug: When starting a node, I get the following warning in the k8s events:
24m Warning InvalidDiskCapacity node/ip-10-100-105-140 invalid capacity 0 on image filesystem
To Reproduce
- Install k3s.
- Run
systemctl restart k3s
Expected behavior: The warning should not appear.
Actual behavior: FYI, here are all node events:
$ kubectl get events | grep node
24m Normal Starting node/ip-10-100-105-140 Starting kube-proxy.
24m Normal Starting node/ip-10-100-105-140 Starting kubelet.
24m Warning InvalidDiskCapacity node/ip-10-100-105-140 invalid capacity 0 on image filesystem
24m Normal NodeHasSufficientMemory node/ip-10-100-105-140 Node ip-10-100-105-140 status is now: NodeHasSufficientMemory
24m Normal NodeHasNoDiskPressure node/ip-10-100-105-140 Node ip-10-100-105-140 status is now: NodeHasNoDiskPressure
24m Normal NodeHasSufficientPID node/ip-10-100-105-140 Node ip-10-100-105-140 status is now: NodeHasSufficientPID
24m Normal NodeNotReady node/ip-10-100-105-140 Node ip-10-100-105-140 status is now: NodeNotReady
24m Normal NodeAllocatableEnforced node/ip-10-100-105-140 Updated Node Allocatable limit across pods
24m Normal NodeReady node/ip-10-100-105-140 Node ip-10-100-105-140 status is now: NodeReady
Additional context / logs: I'm seeing these messages in the logs:
root@ip-10-100-105-140:~# journalctl -u k3s | grep "invalid capacity"
May 27 21:09:46 ip-10-100-105-140 k3s[1444]: E0527 21:09:46.026431 1444 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
May 27 21:09:46 ip-10-100-105-140 k3s[1444]: E0527 21:09:46.027279 1444 kubelet.go:1301] Image garbage collection failed multiple times in a row: invalid capacity 0 on image filesystem
Jun 01 18:32:56 ip-10-100-105-140 k3s[5512]: E0601 18:32:56.177047 5512 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jun 02 05:39:29 ip-10-100-105-140 k3s[15577]: E0602 05:39:29.274658 15577 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
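Note that each journal excerpt above carries a different k3s PID in brackets (1444, 5512, 15577), i.e. each occurrence corresponds to a separate service start rather than a recurring fault during steady-state operation. A small sketch, run against the sample lines above, extracts those distinct PIDs:

```shell
#!/bin/sh
# Sample journal lines copied from the issue above.
lines='May 27 21:09:46 ip-10-100-105-140 k3s[1444]: E0527 21:09:46.026431 1444 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jun 01 18:32:56 ip-10-100-105-140 k3s[5512]: E0601 18:32:56.177047 5512 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jun 02 05:39:29 ip-10-100-105-140 k3s[15577]: E0602 05:39:29.274658 15577 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem'

# Pull the PID out of "k3s[NNNN]:" on each line; each distinct PID
# is a separate k3s process start.
printf '%s\n' "$lines" | sed -n 's/.*k3s\[\([0-9]*\)\].*/\1/p' | sort -n -u
```

The same `sed`/`sort` pipeline can be applied to live `journalctl -u k3s` output to confirm the warning only fires at startup on a given node.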
gz#10525
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 16
- Comments: 25 (13 by maintainers)
Commits related to this issue
- fix to error in https://github.com/k3s-io/k3s/issues/1857#issuecomment-950154218 — committed to aqua-ps/aqua-training-userscript by andreazorzetto 2 years ago
Hello to everyone
I was getting the same error when I installed a Kubernetes cluster via kubeadm. After reading all the comments on this issue, I suspected the problem might be caused by containerd, and the following two commands solved it for me; maybe they can help:
systemctl restart containerd
systemctl restart kubelet

I'd argue that if it's harmless, then it should not be logged as a node warning. If it's expected to take a minute to collect stats, then wait a minute before creating a warning event. We are attempting to use node warnings to alert/notify/page our ops staff when a potential problem or concerning event occurs that may require attention.
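One workaround for the alerting concern above is to drop this specific known-benign reason before paging. A minimal sketch, using a hypothetical two-line sample standing in for `kubectl get events --field-selector type=Warning` output (the second event is invented for illustration):

```shell
#!/bin/sh
# Hypothetical sample of Warning events; the first line is copied from
# the issue, the second is a made-up example of an actionable warning.
events='24m Warning InvalidDiskCapacity node/ip-10-100-105-140 invalid capacity 0 on image filesystem
24m Warning FailedScheduling pod/demo 0/1 nodes are available'

# Suppress the benign startup warning, keep everything else for alerting.
actionable=$(printf '%s\n' "$events" | grep -v 'InvalidDiskCapacity')
printf '%s\n' "$actionable"
```

This is only a stopgap on the ops side; it does not address the argument that the kubelet should not emit a Warning event for an expected startup condition in the first place.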
The invalid disk capacity warning does not get triggered when rebooting an RKE-based Kubernetes node.
Are you talking about the "invalid capacity 0 on image filesystem" warning? As described at https://github.com/k3s-io/k3s/issues/1857#issuecomment-637694918, this is just a warning that is logged once at startup, before the system collects statistics in the background. It does not indicate any sort of fault or error.

I had a 'NotReady' status for master/worker nodes.
This worked perfectly @selcukmeral. Thanks a lot!
They are both safe to ignore.
Mine got the same error and it is a real problem: all my worker nodes have status NotReady.
@brandond, can you find the issue where they're working on cadvisor stats collection and link it here?