kubernetes: 'Stat fs failed: no such file or directory' error in kubelet logs

Running Kubernetes 1.4.1 on AWS (Ubuntu 14.04 LTS). The Kubernetes cluster is spun up with the ‘contrib/ansible’ playbook, with the only change being the --cloud-provider aws argument added to all cluster-internal services (such as kubelet).

After spinning up and terminating several pods with a config like the one below

apiVersion: extensions/v1beta1 
kind: Deployment 
metadata:
  name: kafka1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka1  # labels added here so the Deployment's default selector has something to match
    spec:
      containers:
      - name: kafka1
        image: daniilyar/kafka
        ports:
        - containerPort: 9092 
          name: clientconnct
          protocol: TCP
        volumeMounts:
        - mountPath: /kafka
          name: storage
      volumes:
      - name: storage
        awsElasticBlockStore:
          volumeID: vol-56676d83
          fsType: ext4

I see the following error in the kubelet logs, logged every 40-60 seconds:

E1018 21:03:09.616581   22780 fs.go:332] Stat fs failed. Error: no such file or directory

The error points to a different line number than the one in https://github.com/kubernetes/kubernetes/issues/17725 , so I assume this one is probably not a duplicate.

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Reactions: 6
  • Comments: 36 (9 by maintainers)

Most upvoted comments

Just came across a similar issue - lots of “Stat fs failed: no such file or directory” messages after a kubelet restart on a node with pods that have persistent volumes. To investigate, I modified the logging output to get more information about the file or directory name that triggers this message. The message is issued by cAdvisor’s GetFsInfoForPath function in fs/fs.go (latest release). After changing this line to:

glog.Errorf("Stat fs failed. Error: %v, mountpoint: %s", err, partition.mountpoint)

I started to get messages like:

Stat fs failed: no such file or directory, mountpoint: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volumes/mounts/[Datastore]\040Folder/volume.vmdk

So what’s happening here:

  • After kubelet restarts, cAdvisor reads the list of mountpoints from /proc/self/mountinfo using the github.com/docker/docker/pkg/mount package (which looks like it is no longer maintained, as I could not find it in the docker GitHub repository), see here
  • mountpoints in /proc/self/mountinfo are stored in fstab format, where spaces in file paths are replaced with “\040”
  • “\040” is not a valid escape for the Linux statfs() system call, so it returns a “no such file or directory” error. The same thing happens when trying to use such a path in bash (see the sketch after this list)
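
A minimal sketch (not the kubelet or cAdvisor code) of the behaviour described above: syscall.Statfs succeeds on the real path but returns “no such file or directory” for the octal-escaped spelling that /proc/self/mountinfo uses. The directory “/tmp/with space” is just an example:

// statfs_escape_demo.go - shows that the literal fstab escape "\040" is
// treated as four ordinary characters by statfs(2), not as a space.
package main

import (
    "fmt"
    "os"
    "syscall"
)

func main() {
    realPath := "/tmp/with space"        // actual directory on disk
    escapedPath := "/tmp/with\\040space" // how /proc/self/mountinfo spells it

    if err := os.MkdirAll(realPath, 0755); err != nil {
        panic(err)
    }

    var buf syscall.Statfs_t
    fmt.Println("statfs(real):   ", syscall.Statfs(realPath, &buf))    // <nil>
    fmt.Println("statfs(escaped):", syscall.Statfs(escapedPath, &buf)) // no such file or directory
}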

When kubelet is started on an empty node and pods with persistent volumes are added later, the mountpoints array is populated directly, without reading /proc/self/mountinfo and with the correct escaping, so the errors don’t occur.

Summary: in my case the errors were triggered by the following combination of factors:

  • kubelet is restarted on a node that has pods with persistent volumes attached
  • the persistent volume names contain a space (it is standard for VMware volumes to have a space between the datastore and the folder)

A possible solution would be to replace “\040” with "\ " in each mountpoint read from /proc/self/mountinfo in fs.go:NewFsInfo().

Correction: after a few experiments I found that the space shouldn’t be escaped at all for syscall.Statfs, so “\040” must be replaced with " ". This little patch allowed me to get rid of the issue:

diff --git a/vendor/github.com/docker/docker/pkg/mount/mountinfo_linux.go b/vendor/github.com/docker/docker/pkg/mount/mountinfo_linux.go
index be69fee..7b3d0b8 100644
--- a/vendor/github.com/docker/docker/pkg/mount/mountinfo_linux.go
+++ b/vendor/github.com/docker/docker/pkg/mount/mountinfo_linux.go
@@ -73,6 +73,7 @@ func parseInfoFile(r io.Reader) ([]*Info, error) {
                        p.Optional = optionalFields
                }

+               p.Mountpoint = strings.Replace(p.Mountpoint, "\\040", " ", -1)
                p.Fstype = postSeparatorFields[0]
                p.Source = postSeparatorFields[1]
                p.VfsOpts = strings.Join(postSeparatorFields[2:], " ")
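
For reference, a sketch of a more general variant of the same idea (not the actual vendored docker/pkg/mount code; unescapeMountPath is a hypothetical helper name): decode any kernel octal escape the mountinfo format can emit (“\040” space, “\011” tab, “\012” newline, “\134” backslash), rather than only “\040”:

package mount

import "strconv"

// unescapeMountPath decodes kernel octal escapes such as "\040" back into
// the raw byte they stand for; anything that is not a backslash followed by
// exactly three octal digits is copied through unchanged.
func unescapeMountPath(path string) string {
    out := make([]byte, 0, len(path))
    for i := 0; i < len(path); i++ {
        if path[i] == '\\' && i+3 < len(path) {
            if n, err := strconv.ParseUint(path[i+1:i+4], 8, 8); err == nil {
                out = append(out, byte(n))
                i += 3
                continue
            }
        }
        out = append(out, path[i])
    }
    return string(out)
}

In parseInfoFile it would be used as p.Mountpoint = unescapeMountPath(p.Mountpoint), in place of the strings.Replace line from the patch above.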

Any updates? This is very annoying. How can we help to resolve this?

I am seeing this with 1.8.0, on CentOS with overlay2:

Oct 07 22:03:51 staging-head-1 kubelet[2612]: E1007 22:03:51.005460    2612 fs.go:418] Stat fs failed. Error: no such file or directory
[pid  2784] statfs("/var/lib/docker/containers/5a1e3d849d13a662936aae26c318ae11a8cf2e69185d545285ef9fc02be8937e/shm", 0xc421538fe0) = -1 ENOENT (No such file or directory)

The path /var/lib/docker/containers/5a1e3d849d13a662936aae26c318ae11a8cf2e69185d545285ef9fc02be8937e does not exist at all.

Using Docker 17.03.2, Kubernetes 1.8.0 on a CentOS 7.4 host.

I get the same problem on CentOS 7.3, Docker 1.17.3, Kubernetes 1.9.0. Have you found the cause of this problem yet?

Same here on v1.9.0 deployed via kubeadm. Probably have several GB of these in journal. 😦

Too bad it doesn’t say which file or directory is missing, so we could at least work around it and stop the spam. 😦
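
Until the log message includes the offending path, a quick diagnostic sketch along these lines (assuming only that the mount point is the fifth field of each /proc/self/mountinfo line) can reproduce the check cAdvisor does and print which mountpoints fail:

package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
    "syscall"
)

func main() {
    f, err := os.Open("/proc/self/mountinfo")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    scanner := bufio.NewScanner(f)
    for scanner.Scan() {
        fields := strings.Fields(scanner.Text())
        if len(fields) < 5 {
            continue
        }
        mountpoint := fields[4] // fifth field of a mountinfo line is the mount point
        var buf syscall.Statfs_t
        if err := syscall.Statfs(mountpoint, &buf); err != nil {
            fmt.Printf("statfs failed: %v, mountpoint: %s\n", err, mountpoint)
        }
    }
}

Octal escapes such as “\040” are deliberately left undecoded here, so mountpoints with spaces in their names show up as failures exactly as they do in the kubelet log.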