rook: Kubelet log spammed by "Operation for "/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock" failed."
Is this a bug report or feature request?
- Bug Report

Actually I'm not sure whether this is a bug or normal behavior. I tried Rook v1.1 and then did the cleanup as described in https://rook.io/docs/rook/v1.1/ceph-teardown.html
Now there is no more Rook stuff in my cluster, but when I check the kubelet log it is spammed by:
```
nov. 22 01:50:30 KubeNode1 kubelet[1215]: E1122 01:50:30.589128 1215 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock" failed. No retries permitted until 2019-11-22 01:51:34.589085973 +0100 CET m=+230.577305520 (durationBeforeRetry 1m4s). Error: "RegisterPlugin error -- dial failed at socket /var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock, err: failed to dial socket /var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock, err: context deadline exceeded"
nov. 22 01:50:30 KubeNode1 kubelet[1215]: W1122 01:50:30.589152 1215 asm_amd64.s:1337] Failed to dial /var/lib/kubelet/plugins/rbd.csi.ceph.com/csi.sock: context canceled; please retry.
nov. 22 01:50:30 KubeNode1 kubelet[1215]: E1122 01:50:30.589576 1215 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins/rook-ceph.rbd.csi.ceph.com/csi.sock" failed. No retries permitted until 2019-11-22 01:51:34.589552081 +0100 CET m=+230.577771628 (durationBeforeRetry 1m4s). Error: "RegisterPlugin error -- dial failed at socket /var/lib/kubelet/plugins/rook-ceph.rbd.csi.ceph.com/csi.sock, err: failed to dial socket /var/lib/kubelet/plugins/rook-ceph.rbd.csi.ceph.com/csi.sock, err: context deadline exceeded"
```
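These messages mean kubelet keeps trying to dial CSI plugin sockets whose drivers were removed during teardown. A minimal sketch (not official Rook tooling) of removing the stale socket directories on a node is below; the directory names are taken from the log lines above, and the `cleanup_stale_csi_sockets` helper name is hypothetical. The demo runs against a temporary directory so it is safe to try; on a real node you would pass `/var/lib/kubelet/plugins` instead.

```shell
# Hedged sketch: delete leftover CSI plugin socket directories so kubelet
# stops retrying plugin registration after the Rook teardown.
cleanup_stale_csi_sockets() {
  plugins_dir="$1"   # on a real node: /var/lib/kubelet/plugins
  # Driver directory names taken from the kubelet log spam above.
  for name in rbd.csi.ceph.com rook-ceph.rbd.csi.ceph.com; do
    dir="$plugins_dir/$name"
    if [ -e "$dir/csi.sock" ]; then
      echo "removing stale socket dir: $dir"
      rm -rf "$dir"
    fi
  done
}

# Demo against a throwaway directory with a fake leftover socket file.
demo=$(mktemp -d)
mkdir -p "$demo/rbd.csi.ceph.com"
touch "$demo/rbd.csi.ceph.com/csi.sock"
cleanup_stale_csi_sockets "$demo"
```

A kubelet restart (`systemctl restart kubelet`) afterwards clears any registration state it still holds in memory.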
Environment:
- OS (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
- Kernel (e.g. `uname -a`): Linux KubeNode2 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5 (2019-06-19) x86_64 GNU/Linux
- Cloud provider or hardware configuration: VM on top of a hypervisor (bare metal)
- Rook version (use `rook version` inside of a Rook Pod): v1.1
- Storage backend version (e.g. for Ceph do `ceph -v`): destroyed, so I cannot check; I followed the v1.1 documentation
- Kubernetes version (use `kubectl version`): Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:23:26Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
- Kubernetes cluster type (e.g. Tectonic, GKE, OpenShift):
- Storage backend status (e.g. for Ceph use `ceph health` in the Rook Ceph toolbox): destroyed, as I did the cleanup
Thank you
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 4
- Comments: 16 (8 by maintainers)
The registration socket removal functionality was added in https://github.com/kubernetes-csi/node-driver-registrar/pull/61. If you are using node-driver-registrar v2.0.0, you should not see this issue.