kubernetes: kubelet SyncLoop hangs on "os.Stat" forever if there is an unresponsive NFS volume
What happened:
We have seen this several times: after restarting kubelet, it did not create new Pods. The healthz endpoint showed that the syncloop was stuck:
# curl -k https://127.0.0.1:10250/healthz
[+]ping ok
[+]log ok
[-]syncloop failed: reason withheld
healthz check failed
From the stack trace, the goroutine was stuck in the os.Stat call made during HandlePodCleanups, so no other updates could be processed:
goroutine 251 [syscall, 939 minutes]:
syscall.Syscall6(0x106, 0xffffffffffffff9c, 0xc001357440, 0xc0015a2378, 0x0, 0x0, 0x0, 0xc0017d3320, 0xc0017d3338, 0x411b98)
/usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
syscall.fstatat(0xffffffffffffff9c, 0xc001357320, 0x5e, 0xc0015a2378, 0x0, 0xc001357129, 0x8)
/usr/local/go/src/syscall/zsyscall_linux_amd64.go:1480 +0xc8
syscall.Stat(...)
/usr/local/go/src/syscall/syscall_linux_amd64.go:65
os.statNolog(0xc001357320, 0x5e, 0x3cf2e80, 0x0, 0x6, 0x7f154449dec8)
/usr/local/go/src/os/stat_unix.go:31 +0x6e
os.Stat(0xc001357320, 0x5e, 0xc000d56b68, 0x3, 0xc000facb40, 0x48)
/usr/local/go/src/os/stat.go:13 +0x4d
k8s.io/kubernetes/vendor/k8s.io/mount-utils.(*Mounter).IsLikelyNotMountPoint(0xc000becfc0, 0xc001357320, 0x5e, 0xc001302700, 0x1, 0x1)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/mount-utils/mount_linux.go:284 +0x3c
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).getMountedVolumePathListFromDisk(0xc0000c5500, 0xc00097b798, 0x24, 0xc001449e90, 0xc0017d35b0, 0xe73afb5b, 0xd6f7addb2f85b69d, 0x3929dc9)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_getters.go:360 +0x110
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).podVolumesExist(0xc0000c5500, 0xc00097b798, 0x24, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go:64 +0xb4
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).cleanupOrphanedPodCgroups(0xc0000c5500, 0x4fe5880, 0xc0017b8e00, 0xc001603920, 0xc00117f600, 0x5, 0x8)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1955 +0x3cd
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).HandlePodCleanups(0xc0000c5500, 0x0, 0x0)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1130 +0x659
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoopIteration(0xc0000c5500, 0xc000c1e900, 0x4fd5fe0, 0xc0000c5500, 0xc000291260, 0xc0002912c0, 0xc001006a20, 0x1)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:2000 +0x18e2
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).syncLoop(0xc0000c5500, 0xc000c1e900, 0x4fd5fe0, 0xc0000c5500)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1867 +0x3f2
k8s.io/kubernetes/pkg/kubelet.(*Kubelet).Run(0xc0000c5500, 0xc000c1e900)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:1447 +0x2a5
created by k8s.io/kubernetes/cmd/kubelet/app.startKubelet
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubelet/app/server.go:1183 +0x5f
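For context, the IsLikelyNotMountPoint call in the trace roughly works by stat-ing the path and its parent directory and comparing their device numbers. The sketch below is a paraphrase of that logic (not the exact vendored source); the important point is the plain os.Stat on the volume path, which can block in the kernel indefinitely when the path sits on a hard-mounted NFS export whose server is unresponsive:

package mountcheck

import (
	"os"
	"path/filepath"
	"strings"
	"syscall"
)

// isLikelyNotMountPoint is a rough paraphrase of mount-utils' IsLikelyNotMountPoint.
// It returns true when the path is probably NOT a mount point, i.e. when the
// path and its parent directory live on the same device.
func isLikelyNotMountPoint(file string) (bool, error) {
	stat, err := os.Stat(file) // blocks indefinitely if file is on a dead NFS mount
	if err != nil {
		return true, err
	}
	rootStat, err := os.Stat(filepath.Dir(strings.TrimSuffix(file, "/")))
	if err != nil {
		return true, err
	}
	// Different device numbers mean the path is a mount point.
	return stat.Sys().(*syscall.Stat_t).Dev == rootStat.Sys().(*syscall.Stat_t).Dev, nil
}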
Listing the volumes of orphaned Pods on this Node showed an unresponsive NFS volume; stat-ing it hangs forever, so the goroutine above is presumably stuck cleaning up this Pod:
# ls -l volumes/
total 8
drwxr-x--- 3 root root 4096 Apr 28 11:18 kubernetes.io~nfs
drwxr-xr-x 2 root root 4096 Apr 28 11:21 kubernetes.io~secret
# dmesg |grep "not responding" | tail
[106629.084267] nfs: server 10.150.200.91 not responding, timed out
[106649.568224] nfs: server 10.150.200.91 not responding, timed out
[107730.908287] nfs: server 10.150.200.91 not responding, timed out
[107743.196391] nfs: server 10.150.200.91 not responding, timed out
[108832.732238] nfs: server 10.150.200.91 not responding, timed out
[108845.020241] nfs: server 10.150.200.91 not responding, timed out
[109934.556157] nfs: server 10.150.200.91 not responding, timed out
[109950.940236] nfs: server 10.150.200.91 not responding, timed out
[111032.284325] nfs: server 10.150.200.91 not responding, timed out
[111048.668198] nfs: server 10.150.200.91 not responding, timed out
The NFS server IP is a Service IP whose backend Pod happened to be scheduled to this Node. However, since syncLoop was stuck, it did not handle Pod Add/Update events from the apiserver and never created the backend Pod, hence a deadlock.
Searching the PR and issue history, there are several changes fixing NFS volume issues, but none seems to cover this case: https://github.com/kubernetes/kubernetes/pull/35038, https://github.com/kubernetes/kubernetes/pull/37255, https://github.com/kubernetes/kubernetes/pull/84206 (later reverted), and https://github.com/kubernetes/kubernetes/issues/31272 (still open). Is anyone still working on this?
What you expected to happen:
The main syncloop of kubelet should not hang because of an unresponsive NFS volume, preventing any Pods from being created.
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): v1.20.5
About this issue
- State: open
- Created 3 years ago
- Reactions: 5
- Comments: 38 (30 by maintainers)
Commits related to this issue
- Fix hung volume wedge the kubelet see #31272, #101622 Signed-off-by: j4ckstraw <j4ckstraw@foxmail.com> — committed to j4ckstraw/kubernetes by j4ckstraw a year ago
I have no idea. @MadhavJivrajani, you may ask the sig-node folks in Slack or at the weekly meetings, or send a PR and discuss it there.
The only NFS-related timeout that I could find is the 1 minute in https://github.com/kubernetes/kubernetes/blob/4ad08fa5b611826505fa2892cdce463f12d32b35/pkg/volume/nfs/nfs.go#L67
/sig storage
/sig node
Similar to #100178 and #86596.
And the problem code is in ./vendor/k8s.io/utils/mount. According to @msau42's and @brahmaroutu's comments in https://github.com/kubernetes/utils/pull/200#issuecomment-781767384 it will be removed later, but I checked the new code in mount-utils and the function is unchanged: https://github.com/kubernetes/mount-utils/blob/19c6fae41ad49f3e4026f28b043f41343aa852fd/mount_linux.go#L294-L316
A possible fix would be something like the sketch below:
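As a rough, hypothetical illustration of that direction (not the exact patch discussed in the thread), the stat could be bounded by a timeout so a dead NFS server stalls only a helper goroutine rather than the syncLoop itself; the helper name statWithTimeout and its placement are assumptions:

package main

import (
	"fmt"
	"os"
	"time"
)

// statWithTimeout runs os.Stat in a separate goroutine and gives up after
// the given timeout, so an unresponsive NFS mount can only stall the helper
// goroutine instead of the caller. Illustrative only.
func statWithTimeout(path string, timeout time.Duration) (os.FileInfo, error) {
	type result struct {
		info os.FileInfo
		err  error
	}
	// Buffered so the stat goroutine can still deliver its result and exit
	// after the caller has already timed out.
	ch := make(chan result, 1)
	go func() {
		info, err := os.Stat(path)
		ch <- result{info: info, err: err}
	}()
	select {
	case r := <-ch:
		return r.info, r.err
	case <-time.After(timeout):
		return nil, fmt.Errorf("stat %s: timed out after %v (possibly an unresponsive NFS server)", path, timeout)
	}
}

func main() {
	// Example: probe a suspect volume path without risking an indefinite hang.
	if _, err := statWithTimeout("/var/lib/kubelet/pods", 5*time.Second); err != nil {
		fmt.Println("stat failed:", err)
	}
}

A caller like getMountedVolumePathListFromDisk could then treat the timeout error as "volume still present, skip it for now" instead of blocking the whole syncLoop; the obvious trade-off is that the blocked stat goroutines pile up until the NFS server recovers.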