k3s: invalid capacity 0 on image filesystem warning when starting k3s node

Version:

root@ip-10-100-105-140:~# k3s -v
k3s version v1.18.3+k3s1 (96653e8d)

K3s arguments: /usr/local/bin/k3s server --cluster-cidr 172.16.0.0/16 --service-cidr 192.168.0.0/16 --cluster-dns 192.168.0.10 --no-deploy traefik --kube-apiserver-arg feature-gates=ServiceTopology=true,EndpointSlice=true

Describe the bug: When starting a node, I get the following warning in the k8s events:

24m         Warning   InvalidDiskCapacity       node/ip-10-100-105-140             invalid capacity 0 on image filesystem

To Reproduce

  1. Install k3s.
  2. Run systemctl restart k3s (see the sketch below).
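
For reference, a minimal sequence on a fresh systemd-based host looks roughly like this (the install command is the standard script from the k3s docs; the grep is just one way of filtering for the event):

curl -sfL https://get.k3s.io | sh -                  # install k3s (server mode by default)
systemctl restart k3s                                # restart the bundled server
kubectl get events --field-selector type=Warning     # the InvalidDiskCapacity warning shows up here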

Expected behavior: This warning should not appear.

Actual behavior: FYI, all node events:

$ kubectl get events | grep node
24m         Normal    Starting                  node/ip-10-100-105-140             Starting kube-proxy.
24m         Normal    Starting                  node/ip-10-100-105-140             Starting kubelet.
24m         Warning   InvalidDiskCapacity       node/ip-10-100-105-140             invalid capacity 0 on image filesystem
24m         Normal    NodeHasSufficientMemory   node/ip-10-100-105-140             Node ip-10-100-105-140 status is now: NodeHasSufficientMemory
24m         Normal    NodeHasNoDiskPressure     node/ip-10-100-105-140             Node ip-10-100-105-140 status is now: NodeHasNoDiskPressure
24m         Normal    NodeHasSufficientPID      node/ip-10-100-105-140             Node ip-10-100-105-140 status is now: NodeHasSufficientPID
24m         Normal    NodeNotReady              node/ip-10-100-105-140             Node ip-10-100-105-140 status is now: NodeNotReady
24m         Normal    NodeAllocatableEnforced   node/ip-10-100-105-140             Updated Node Allocatable limit across pods
24m         Normal    NodeReady                 node/ip-10-100-105-140             Node ip-10-100-105-140 status is now: NodeReady

Additional context / logs: I am seeing these messages in the logs:

root@ip-10-100-105-140:~# journalctl -u k3s | grep "invalid capacity"
May 27 21:09:46 ip-10-100-105-140 k3s[1444]: E0527 21:09:46.026431    1444 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
May 27 21:09:46 ip-10-100-105-140 k3s[1444]: E0527 21:09:46.027279    1444 kubelet.go:1301] Image garbage collection failed multiple times in a row: invalid capacity 0 on image filesystem
Jun 01 18:32:56 ip-10-100-105-140 k3s[5512]: E0601 18:32:56.177047    5512 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
Jun 02 05:39:29 ip-10-100-105-140 k3s[15577]: E0602 05:39:29.274658   15577 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem

gz#10525

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 16
  • Comments: 25 (13 by maintainers)

Most upvoted comments

Hello everyone,

I was getting the same error after installing a Kubernetes cluster via kubeadm. After reading all the comments on this issue, I suspected the problem might be caused by containerd, and the following two commands solved it for me, so maybe they will help:

systemctl restart containerd

systemctl restart kubelet
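
(Note for k3s users: on k3s the kubelet and the bundled containerd both run inside the k3s process, so there are no separate containerd/kubelet units to restart. Assuming the default unit names created by the install script, the closest equivalent is:)

systemctl restart k3s           # on server nodes; use k3s-agent on agent/worker nodes
kubectl get nodes               # the node should return to Ready once the kubelet re-registers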

I’d argue that if it’s harmless, then it should not be logged as a node warning. If it’s expected to take a minute to collect stats, then wait a minute before creating a warning event. We are attempting to use node warnings to alert/notify/page our ops staff that there’s a potential problem or that a concerning event has occurred that may require attention.
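
For context on why the noise matters: a typical setup watches for Warning events on Node objects with something along these lines (the field selectors are standard kubectl event fields; the exact query is only illustrative):

kubectl get events -A --field-selector type=Warning,involvedObject.kind=Node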

The invalid disk capacity warning does not get triggered when rebooting an RKE-based Kubernetes node:

> kubectl get events -n default | grep node
12m         Normal    NodeHasSufficientMemory   node/ip-10-0-2-10   Node ip-10-0-2-10 status is now: NodeHasSufficientMemory
12m         Normal    NodeHasNoDiskPressure     node/ip-10-0-2-10   Node ip-10-0-2-10 status is now: NodeHasNoDiskPressure
12m         Normal    NodeHasSufficientPID      node/ip-10-0-2-10   Node ip-10-0-2-10 status is now: NodeHasSufficientPID
12m         Normal    NodeAllocatableEnforced   node/ip-10-0-2-10   Updated Node Allocatable limit across pods
12m         Warning   Rebooted                  node/ip-10-0-2-10   Node ip-10-0-2-10 has been rebooted, boot id: 174f565e-022c-4e1f-8fed-8919fbfa3ff8
12m         Normal    NodeNotReady              node/ip-10-0-2-10   Node ip-10-0-2-10 status is now: NodeNotReady
12m         Normal    Starting                  node/ip-10-0-2-10   Starting kube-proxy.
11m         Normal    NodeReady                 node/ip-10-0-2-10   Node ip-10-0-2-10 status is now: NodeReady
11m         Normal    RegisteredNode            node/ip-10-0-2-10   Node ip-10-0-2-10 event: Registered Node ip-10-0-2-10 in Controller

Are you talking about the invalid capacity 0 on image filesystem warning? As described at https://github.com/k3s-io/k3s/issues/1857#issuecomment-637694918 this is just a warning that is logged once at startup before the system collects statistics in the background. It does not indicate any sort of fault or error.
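
One way to confirm that the stats do get populated shortly after startup is to pull the kubelet’s stats summary through the API server once the node is Ready (<node-name> is a placeholder; imageFs capacity is reported under the runtime section):

kubectl get --raw "/api/v1/nodes/<node-name>/proxy/stats/summary" | grep -A 6 '"imageFs"'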

I had a ‘NotReady’ status on my master/worker nodes. Restarting containerd and the kubelet as @selcukmeral suggested worked perfectly. Thanks a lot!

They are both safe to ignore.

Mine got the same error and it is a real problem: all of my worker nodes have a NotReady status.

@brandond, can you find the issue where they’re working on cAdvisor stats collection and link it here?