kubernetes: [static PVC] NodeUnstageVolume is not sent to the CSI driver when all pods using the PVC are deleted

What happened:

Kubelet does not send a NodeUnstageVolume request when all of the application pods using the PVCs are deleted.

What you expected to happen:

Kubelet should send a NodeUnstageVolume request once no pods are using the PVC on the node.

How to reproduce it (as minimally and precisely as possible):

  • Create two static PVCs and static PVs pointing to the same CephFS volume
    • Create a static PVC (static-pvc-1) and a static PV (static-pv-1); add the CephFS volume to the static PV so it mounts the volume
    • Create a static PVC (static-pvc-2) and a static PV (static-pv-2); add the same CephFS volume to this static PV as well
  • Create a pod that uses the first static PVC (static-pvc-1)
  • Create another pod that uses the other static PVC (static-pvc-2)
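The two static PV/PVC pairs described above can be sketched roughly as below. This is a minimal illustration only: the access mode, capacity, secret name/namespace, and reclaim policy are assumptions, not values taken from this report; only the driver name and the volumeAttributes mirror the CSI log output further down.

```yaml
# Hypothetical static PV/PVC pair #1; repeat with "-2" names (and a
# distinct volumeHandle) for the second pair sharing the same CephFS volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-1
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi            # assumed size
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    # volumeHandle must be unique per PV even though both PVs point at
    # the same underlying CephFS volume (cf. cephfs-samenode-test1/-test2
    # in the logs below)
    volumeHandle: cephfs-samenode-test1
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      rootPath: /
      staticVolume: "true"
    nodeStageSecretRef:
      name: rook-csi-cephfs-node   # assumed secret name
      namespace: rook-ceph
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-pvc-1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: ""             # empty class so the PVC binds to the static PV
  volumeName: static-pv-1
```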

At this point everything looks fine: the static PVCs are bound and the application pods are up and running. The idea here is to share a single CephFS volume across applications running in different namespaces.

  • Delete the application pods
  • A NodeUnpublishVolume call is sent to the CSI driver
  • NodeUnstageVolume is not sent to the CSI driver, and stale mounts are left on the node
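After both pods are deleted, the leftover staging mounts can be spotted on the node by grepping the mount table for the per-PV `globalmount` paths. The snippet below is a sketch: on a real node you would read `/proc/mounts` directly, so a captured sample (with entries mirroring the mount commands in the logs below) stands in for it here.

```shell
# Sample of /proc/mounts as it would look after all pods are gone but
# NodeUnstageVolume was never called (monitor address taken from the logs).
sample_mounts='192.168.121.71:6789:/ /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount ceph rw 0 0
192.168.121.71:6789:/ /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount ceph rw 0 0'

# Any surviving "globalmount" entry at this point is a stale staging mount;
# after a successful NodeUnstageVolume this count would be 0.
echo "$sample_mounts" | grep -c 'globalmount'
```

On the affected node this prints 2: both staging mounts survive even though no pod references either PVC any more.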

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): 1.22.2
  • Cloud provider or hardware configuration:
  • OS (e.g: cat /etc/os-release): minikube
  • Kernel (e.g. uname -a):
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • Others:

CSI logs

I0928 10:25:14.066729       1 utils.go:176] ID: 5 Req-ID: cephfs-samenode-test2 GRPC call: /csi.v1.Node/NodeStageVolume
I0928 10:25:14.067136       1 utils.go:180] ID: 5 Req-ID: cephfs-samenode-test2 GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","rootPath":"/","staticVolume":"true"},"volume_id":"cephfs-samenode-test2"}
I0928 10:25:14.084574       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:25:14.084621       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:25:14.084635       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:25:14.084759       1 volumemounter.go:134] requested mounter: , chosen mounter: kernel
I0928 10:25:14.084836       1 nodeserver.go:157] ID: 5 Req-ID: cephfs-samenode-test2 cephfs: mounting volume cephfs-samenode-test2 with Ceph kernel client
I0928 10:25:14.100571       1 cephcmds.go:60] ID: 5 Req-ID: cephfs-samenode-test2 command succeeded: modprobe [ceph]
I0928 10:25:14.184580       1 cephcmds.go:60] ID: 5 Req-ID: cephfs-samenode-test2 command succeeded: mount [-t ceph 192.168.121.71:6789:/ /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount -o name=csi-cephfs-node,secretfile=/tmp/csi/keys/keyfile-309756345,mds_namespace=myfs,_netdev]
I0928 10:25:14.192817       1 nodeserver.go:133] ID: 5 Req-ID: cephfs-samenode-test2 cephfs: successfully mounted volume cephfs-samenode-test2 to /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount
I0928 10:25:14.192949       1 utils.go:187] ID: 5 Req-ID: cephfs-samenode-test2 GRPC response: {}
I0928 10:25:14.201646       1 utils.go:176] ID: 6 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:14.201842       1 utils.go:180] ID: 6 GRPC request: {}
I0928 10:25:14.202325       1 utils.go:187] ID: 6 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:14.209319       1 utils.go:176] ID: 7 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:14.209443       1 utils.go:180] ID: 7 GRPC request: {}
I0928 10:25:14.209624       1 utils.go:187] ID: 7 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:14.211613       1 utils.go:176] ID: 8 Req-ID: cephfs-samenode-test2 GRPC call: /csi.v1.Node/NodePublishVolume
I0928 10:25:14.211783       1 utils.go:180] ID: 8 Req-ID: cephfs-samenode-test2 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount","target_path":"/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","rootPath":"/","staticVolume":"true"},"volume_id":"cephfs-samenode-test2"}
I0928 10:25:14.227599       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:25:14.227624       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:25:14.227635       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:25:14.239741       1 cephcmds.go:60] ID: 8 Req-ID: cephfs-samenode-test2 command succeeded: mount [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount]
I0928 10:25:14.239767       1 nodeserver.go:266] ID: 8 Req-ID: cephfs-samenode-test2 cephfs: successfully bind-mounted volume cephfs-samenode-test2 to /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount
I0928 10:25:14.239826       1 utils.go:187] ID: 8 Req-ID: cephfs-samenode-test2 GRPC response: {}



I0928 10:25:28.831021       1 utils.go:176] ID: 9 GRPC call: /csi.v1.Identity/Probe
I0928 10:25:28.833304       1 utils.go:180] ID: 9 GRPC request: {}
I0928 10:25:28.835337       1 utils.go:187] ID: 9 GRPC response: {}
I0928 10:25:50.906578       1 utils.go:176] ID: 10 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:50.907906       1 utils.go:180] ID: 10 GRPC request: {}
I0928 10:25:50.912855       1 utils.go:187] ID: 10 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:50.920005       1 utils.go:176] ID: 11 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0928 10:25:50.923981       1 utils.go:180] ID: 11 GRPC request: {"volume_id":"cephfs-samenode-test2","volume_path":"/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount"}
I0928 10:25:50.939942       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:25:50.945679       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:25:50.945699       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:25:50.947768       1 utils.go:187] ID: 11 GRPC response: {"usage":[{"available":50990153728,"total":50990153728,"unit":1},{"available":-1,"unit":2,"used":1}]}



I0928 10:25:59.690547       1 utils.go:176] ID: 12 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:59.695714       1 utils.go:180] ID: 12 GRPC request: {}
I0928 10:25:59.700525       1 utils.go:187] ID: 12 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:59.718640       1 utils.go:176] ID: 13 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:59.718720       1 utils.go:180] ID: 13 GRPC request: {}
I0928 10:25:59.725742       1 utils.go:187] ID: 13 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:59.734707       1 utils.go:176] ID: 14 Req-ID: cephfs-samenode-test1 GRPC call: /csi.v1.Node/NodeStageVolume
I0928 10:25:59.734822       1 utils.go:180] ID: 14 Req-ID: cephfs-samenode-test1 GRPC request: {"secrets":"***stripped***","staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","rootPath":"/","staticVolume":"true"},"volume_id":"cephfs-samenode-test1"}
I0928 10:25:59.746936       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:25:59.746973       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:25:59.746985       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:25:59.747063       1 volumemounter.go:134] requested mounter: , chosen mounter: kernel
I0928 10:25:59.747106       1 nodeserver.go:157] ID: 14 Req-ID: cephfs-samenode-test1 cephfs: mounting volume cephfs-samenode-test1 with Ceph kernel client
I0928 10:25:59.749918       1 cephcmds.go:60] ID: 14 Req-ID: cephfs-samenode-test1 command succeeded: modprobe [ceph]
I0928 10:25:59.785174       1 cephcmds.go:60] ID: 14 Req-ID: cephfs-samenode-test1 command succeeded: mount [-t ceph 192.168.121.71:6789:/ /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount -o name=csi-cephfs-node,secretfile=/tmp/csi/keys/keyfile-255082436,mds_namespace=myfs,_netdev]
I0928 10:25:59.785309       1 nodeserver.go:133] ID: 14 Req-ID: cephfs-samenode-test1 cephfs: successfully mounted volume cephfs-samenode-test1 to /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount
I0928 10:25:59.785435       1 utils.go:187] ID: 14 Req-ID: cephfs-samenode-test1 GRPC response: {}
I0928 10:25:59.795736       1 utils.go:176] ID: 15 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:59.795792       1 utils.go:180] ID: 15 GRPC request: {}
I0928 10:25:59.795876       1 utils.go:187] ID: 15 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:59.799831       1 utils.go:176] ID: 16 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:25:59.799909       1 utils.go:180] ID: 16 GRPC request: {}
I0928 10:25:59.800038       1 utils.go:187] ID: 16 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:25:59.805693       1 utils.go:176] ID: 17 Req-ID: cephfs-samenode-test1 GRPC call: /csi.v1.Node/NodePublishVolume
I0928 10:25:59.805833       1 utils.go:180] ID: 17 Req-ID: cephfs-samenode-test1 GRPC request: {"staging_target_path":"/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount","target_path":"/var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount","volume_capability":{"AccessType":{"Mount":{}},"access_mode":{"mode":5}},"volume_context":{"clusterID":"rook-ceph","fsName":"myfs","rootPath":"/","staticVolume":"true"},"volume_id":"cephfs-samenode-test1"}
I0928 10:25:59.825514       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:25:59.825546       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:25:59.825558       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:25:59.855039       1 cephcmds.go:60] ID: 17 Req-ID: cephfs-samenode-test1 command succeeded: mount [-o bind,_netdev /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount]
I0928 10:25:59.855065       1 nodeserver.go:266] ID: 17 Req-ID: cephfs-samenode-test1 cephfs: successfully bind-mounted volume cephfs-samenode-test1 to /var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount
I0928 10:25:59.855136       1 utils.go:187] ID: 17 Req-ID: cephfs-samenode-test1 GRPC response: {}


I0928 10:26:26.027873       1 utils.go:176] ID: 18 Req-ID: cephfs-samenode-test1 GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0928 10:26:26.028021       1 utils.go:180] ID: 18 Req-ID: cephfs-samenode-test1 GRPC request: {"target_path":"/var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount","volume_id":"cephfs-samenode-test1"}
I0928 10:26:26.070633       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:26:26.070669       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:26:26.070694       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:26:26.092976       1 cephcmds.go:60] ID: 18 Req-ID: cephfs-samenode-test1 command succeeded: umount [/var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount]
I0928 10:26:26.093100       1 nodeserver.go:311] ID: 18 Req-ID: cephfs-samenode-test1 cephfs: successfully unbinded volume cephfs-samenode-test1 from /var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes/kubernetes.io~csi/cephfs-samenode-test1/mount
I0928 10:26:26.093266       1 utils.go:187] ID: 18 Req-ID: cephfs-samenode-test1 GRPC response: {}
I0928 10:26:28.829023       1 utils.go:176] ID: 19 GRPC call: /csi.v1.Identity/Probe
I0928 10:26:28.837012       1 utils.go:180] ID: 19 GRPC request: {}
I0928 10:26:28.837273       1 utils.go:187] ID: 19 GRPC response: {}
I0928 10:27:09.372204       1 utils.go:176] ID: 20 GRPC call: /csi.v1.Node/NodeGetCapabilities
I0928 10:27:09.372511       1 utils.go:180] ID: 20 GRPC request: {}
I0928 10:27:09.373888       1 utils.go:187] ID: 20 GRPC response: {"capabilities":[{"Type":{"Rpc":{"type":1}}},{"Type":{"Rpc":{"type":2}}}]}
I0928 10:27:09.380646       1 utils.go:176] ID: 21 GRPC call: /csi.v1.Node/NodeGetVolumeStats
I0928 10:27:09.380725       1 utils.go:180] ID: 21 GRPC request: {"volume_id":"cephfs-samenode-test2","volume_path":"/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount"}
I0928 10:27:09.395938       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:27:09.395970       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:27:09.395980       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:27:09.396934       1 utils.go:187] ID: 21 GRPC response: {"usage":[{"available":50990153728,"total":50990153728,"unit":1},{"available":-1,"unit":2,"used":1}]}
I0928 10:27:22.828360       1 utils.go:176] ID: 22 Req-ID: cephfs-samenode-test2 GRPC call: /csi.v1.Node/NodeUnpublishVolume
I0928 10:27:22.828715       1 utils.go:180] ID: 22 Req-ID: cephfs-samenode-test2 GRPC request: {"target_path":"/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount","volume_id":"cephfs-samenode-test2"}
I0928 10:27:22.842143       1 mount_linux.go:173] Cannot run systemd-run, assuming non-systemd OS
I0928 10:27:22.842174       1 mount_linux.go:174] systemd-run failed with: exit status 1
I0928 10:27:22.842186       1 mount_linux.go:175] systemd-run output: System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to create bus connection: Host is down
I0928 10:27:22.862655       1 cephcmds.go:60] ID: 22 Req-ID: cephfs-samenode-test2 command succeeded: umount [/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount]
I0928 10:27:22.864055       1 nodeserver.go:311] ID: 22 Req-ID: cephfs-samenode-test2 cephfs: successfully unbinded volume cephfs-samenode-test2 from /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount
I0928 10:27:22.867113       1 utils.go:187] ID: 22 Req-ID: cephfs-samenode-test2 GRPC response: {}
I0928 10:27:28.829509       1 utils.go:176] ID: 23 GRPC call: /csi.v1.Identity/Probe
I0928 10:27:28.835192       1 utils.go:180] ID: 23 GRPC request: {}
I0928 10:27:28.837778       1 utils.go:187] ID: 23 GRPC response: {}
I0928 10:28:28.839199       1 utils.go:176] ID: 24 GRPC call: /csi.v1.Identity/Probe
I0928 10:28:28.847909       1 utils.go:180] ID: 24 GRPC request: {}
I0928 10:28:28.848182       1 utils.go:187] ID: 24 GRPC response: {}

kubelet logs

Sep 28 10:25:43 minicluster2 kubelet[16776]: I0928 10:25:43.951783   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:43 minicluster2 kubelet[16776]: I0928 10:25:43.952176   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5tcl\" (UniqueName: \"kubernetes.io/projected/1beaabe2-b082-446f-b8cd-bfcf8d340d32-kube-api-access-t5tcl\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:43 minicluster2 kubelet[16776]: E0928 10:25:43.952204   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:25:44.452155525 +0000 UTC m=+234.234328950 (durationBeforeRetry 500ms). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32")
Sep 28 10:25:44 minicluster2 kubelet[16776]: I0928 10:25:44.456932   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:44 minicluster2 kubelet[16776]: E0928 10:25:44.457199   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:25:45.457098674 +0000 UTC m=+235.239272097 (durationBeforeRetry 1s). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32")
Sep 28 10:25:45 minicluster2 kubelet[16776]: I0928 10:25:45.470720   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:45 minicluster2 kubelet[16776]: E0928 10:25:45.470877   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:25:47.470838209 +0000 UTC m=+237.253011622 (durationBeforeRetry 2s). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32")
Sep 28 10:25:47 minicluster2 kubelet[16776]: I0928 10:25:47.497899   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:47 minicluster2 kubelet[16776]: E0928 10:25:47.499275   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:25:51.499242438 +0000 UTC m=+241.281415861 (durationBeforeRetry 4s). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32")
Sep 28 10:25:51 minicluster2 kubelet[16776]: I0928 10:25:51.553956   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:51 minicluster2 kubelet[16776]: E0928 10:25:51.554135   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:25:59.55408717 +0000 UTC m=+249.336260589 (durationBeforeRetry 8s). Error: Volume has not been added to the list of VolumesInUse in the node's volume status for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32")
Sep 28 10:25:59 minicluster2 kubelet[16776]: I0928 10:25:59.566714   16776 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"cephfs-samenode-test1\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:25:59 minicluster2 kubelet[16776]: I0928 10:25:59.594876   16776 operation_generator.go:1523] Controller attach succeeded for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32") device path: ""
Sep 28 10:25:59 minicluster2 kubelet[16776]: I0928 10:25:59.669845   16776 operation_generator.go:587] MountVolume.WaitForAttach entering for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32") DevicePath ""
Sep 28 10:25:59 minicluster2 kubelet[16776]: I0928 10:25:59.688945   16776 operation_generator.go:597] MountVolume.WaitForAttach succeeded for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32") DevicePath "csi-3342daa2eb05220aa89d43f51b971f17a02e2e3a687c61d89a70a5e83ece4498"
Sep 28 10:25:59 minicluster2 kubelet[16776]: I0928 10:25:59.785870   16776 operation_generator.go:630] MountVolume.MountDevice succeeded for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") pod "cephfs-samenode-test1" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32") device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount"
Sep 28 10:26:00 minicluster2 kubelet[16776]: I0928 10:26:00.981226   16776 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="3626b84361f7cf708acf096fa285debcc60e79255e4e7def43d9dc341a889bfe"
Sep 28 10:26:00 minicluster2 kubelet[16776]: I0928 10:26:00.983895   16776 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/cephfs-samenode-test1 through plugin: invalid network status for"
Sep 28 10:26:02 minicluster2 kubelet[16776]: I0928 10:26:02.010878   16776 docker_sandbox.go:401] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/cephfs-samenode-test1 through plugin: invalid network status for"
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.021652   16776 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5tcl\" (UniqueName: \"kubernetes.io/projected/1beaabe2-b082-446f-b8cd-bfcf8d340d32-kube-api-access-t5tcl\") pod \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.023663   16776 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"test\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") pod \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\" (UID: \"1beaabe2-b082-446f-b8cd-bfcf8d340d32\") "
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.042036   16776 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1beaabe2-b082-446f-b8cd-bfcf8d340d32-kube-api-access-t5tcl" (OuterVolumeSpecName: "kube-api-access-t5tcl") pod "1beaabe2-b082-446f-b8cd-bfcf8d340d32" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32"). InnerVolumeSpecName "kube-api-access-t5tcl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.094211   16776 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1" (OuterVolumeSpecName: "test") pod "1beaabe2-b082-446f-b8cd-bfcf8d340d32" (UID: "1beaabe2-b082-446f-b8cd-bfcf8d340d32"). InnerVolumeSpecName "cephfs-samenode-test1". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.126252   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.126288   16776 reconciler.go:319] "Volume detached for volume \"kube-api-access-t5tcl\" (UniqueName: \"kubernetes.io/projected/1beaabe2-b082-446f-b8cd-bfcf8d340d32-kube-api-access-t5tcl\") on node \"minicluster2\" DevicePath \"\""
Sep 28 10:26:26 minicluster2 kubelet[16776]: E0928 10:26:26.129856   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:26.629826495 +0000 UTC m=+276.411999925 (durationBeforeRetry 500ms). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.631610   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:26 minicluster2 kubelet[16776]: E0928 10:26:26.635451   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:27.635364186 +0000 UTC m=+277.417537609 (durationBeforeRetry 1s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:26 minicluster2 kubelet[16776]: I0928 10:26:26.735282   16776 scope.go:110] "RemoveContainer" containerID="46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd"
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.080542   16776 remote_runtime.go:276] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd" containerID="46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd"
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.082474   16776 kuberuntime_container.go:719] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd" pod="default/cephfs-samenode-test1" podUID=1beaabe2-b082-446f-b8cd-bfcf8d340d32 containerName="cephfs-samenode-test1" containerID="docker://46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd" gracePeriod=1
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.082663   16776 kuberuntime_container.go:744] "Kill container failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd" pod="default/cephfs-samenode-test1" podUID=1beaabe2-b082-446f-b8cd-bfcf8d340d32 containerName="cephfs-samenode-test1" containerID={Type:docker ID:46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd}
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.089120   16776 kubelet.go:1767] failed to "KillContainer" for "cephfs-samenode-test1" with KillContainerError: "rpc error: code = Unknown desc = Error response from daemon: No such container: 46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd"
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.089247   16776 pod_workers.go:765] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"cephfs-samenode-test1\" with KillContainerError: \"rpc error: code = Unknown desc = Error response from daemon: No such container: 46882bbaa0c87659232c97321f4f5aeffcf456d0086e367cd9a9c14a1e7ed4bd\"" pod="default/cephfs-samenode-test1" podUID=1beaabe2-b082-446f-b8cd-bfcf8d340d32
Sep 28 10:26:27 minicluster2 kubelet[16776]: I0928 10:26:27.098134   16776 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=1beaabe2-b082-446f-b8cd-bfcf8d340d32 path="/var/lib/kubelet/pods/1beaabe2-b082-446f-b8cd-bfcf8d340d32/volumes"
Sep 28 10:26:27 minicluster2 kubelet[16776]: I0928 10:26:27.653003   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:27 minicluster2 kubelet[16776]: E0928 10:26:27.657453   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:29.657372319 +0000 UTC m=+279.439545736 (durationBeforeRetry 2s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:29 minicluster2 kubelet[16776]: I0928 10:26:29.677102   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:29 minicluster2 kubelet[16776]: E0928 10:26:29.681661   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:33.681633888 +0000 UTC m=+283.463807316 (durationBeforeRetry 4s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:33 minicluster2 kubelet[16776]: I0928 10:26:33.729205   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:33 minicluster2 kubelet[16776]: E0928 10:26:33.735503   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:41.735470931 +0000 UTC m=+291.517644354 (durationBeforeRetry 8s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:41 minicluster2 kubelet[16776]: I0928 10:26:41.832035   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:41 minicluster2 kubelet[16776]: E0928 10:26:41.835644   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:26:57.835608263 +0000 UTC m=+307.617781729 (durationBeforeRetry 16s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:26:57 minicluster2 kubelet[16776]: I0928 10:26:57.859893   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:26:57 minicluster2 kubelet[16776]: E0928 10:26:57.863686   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:29.863606559 +0000 UTC m=+339.645779969 (durationBeforeRetry 32s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes/kubernetes.io~csi/cephfs-samenode-test2/mount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.817969   16776 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m8sl\" (UniqueName: \"kubernetes.io/projected/0f1990c7-cca0-4222-8154-2c3476bff047-kube-api-access-7m8sl\") pod \"0f1990c7-cca0-4222-8154-2c3476bff047\" (UID: \"0f1990c7-cca0-4222-8154-2c3476bff047\") "
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.818326   16776 reconciler.go:196] "operationExecutor.UnmountVolume started for volume \"test\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") pod \"0f1990c7-cca0-4222-8154-2c3476bff047\" (UID: \"0f1990c7-cca0-4222-8154-2c3476bff047\") "
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.838753   16776 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f1990c7-cca0-4222-8154-2c3476bff047-kube-api-access-7m8sl" (OuterVolumeSpecName: "kube-api-access-7m8sl") pod "0f1990c7-cca0-4222-8154-2c3476bff047" (UID: "0f1990c7-cca0-4222-8154-2c3476bff047"). InnerVolumeSpecName "kube-api-access-7m8sl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.870040   16776 operation_generator.go:866] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2" (OuterVolumeSpecName: "test") pod "0f1990c7-cca0-4222-8154-2c3476bff047" (UID: "0f1990c7-cca0-4222-8154-2c3476bff047"). InnerVolumeSpecName "cephfs-samenode-test2". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.920315   16776 reconciler.go:319] "Volume detached for volume \"kube-api-access-7m8sl\" (UniqueName: \"kubernetes.io/projected/0f1990c7-cca0-4222-8154-2c3476bff047-kube-api-access-7m8sl\") on node \"minicluster2\" DevicePath \"\""
Sep 28 10:27:22 minicluster2 kubelet[16776]: I0928 10:27:22.927990   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:22 minicluster2 kubelet[16776]: E0928 10:27:22.928816   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:23.428791024 +0000 UTC m=+333.210964436 (durationBeforeRetry 500ms). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:23 minicluster2 kubelet[16776]: I0928 10:27:23.400268   16776 scope.go:110] "RemoveContainer" containerID="f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4"
Sep 28 10:27:23 minicluster2 kubelet[16776]: I0928 10:27:23.457280   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:23 minicluster2 kubelet[16776]: I0928 10:27:23.474840   16776 scope.go:110] "RemoveContainer" containerID="f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4"
Sep 28 10:27:23 minicluster2 kubelet[16776]: E0928 10:27:23.476714   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:24.476687466 +0000 UTC m=+334.258860880 (durationBeforeRetry 1s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:23 minicluster2 kubelet[16776]: E0928 10:27:23.480368   16776 remote_runtime.go:334] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4" containerID="f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4"
Sep 28 10:27:23 minicluster2 kubelet[16776]: I0928 10:27:23.480784   16776 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:docker ID:f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4} err="failed to get container status \"f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4\": rpc error: code = Unknown desc = Error: No such container: f3053c165222b5d708cfe43a4cdec521a0ebe7c559c105472baf12bc1032cdb4"
Sep 28 10:27:24 minicluster2 kubelet[16776]: I0928 10:27:24.574313   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:24 minicluster2 kubelet[16776]: E0928 10:27:24.580520   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:26.580487306 +0000 UTC m=+336.362660720 (durationBeforeRetry 2s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:25 minicluster2 kubelet[16776]: I0928 10:27:25.082836   16776 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0f1990c7-cca0-4222-8154-2c3476bff047 path="/var/lib/kubelet/pods/0f1990c7-cca0-4222-8154-2c3476bff047/volumes"
Sep 28 10:27:26 minicluster2 kubelet[16776]: I0928 10:27:26.604828   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:26 minicluster2 kubelet[16776]: E0928 10:27:26.610322   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:30.610295266 +0000 UTC m=+340.392468688 (durationBeforeRetry 4s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:29 minicluster2 kubelet[16776]: I0928 10:27:29.957796   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:27:29 minicluster2 kubelet[16776]: E0928 10:27:29.980081   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:28:33.980050189 +0000 UTC m=+403.762223615 (durationBeforeRetry 1m4s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:30 minicluster2 kubelet[16776]: I0928 10:27:30.673171   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:30 minicluster2 kubelet[16776]: E0928 10:27:30.679660   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:38.679633853 +0000 UTC m=+348.461807261 (durationBeforeRetry 8s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:27:38 minicluster2 kubelet[16776]: I0928 10:27:38.682717   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:38 minicluster2 kubelet[16776]: E0928 10:27:38.689129   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:27:54.689100479 +0000 UTC m=+364.471273902 (durationBeforeRetry 16s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]

Sep 28 10:27:54 minicluster2 kubelet[16776]: I0928 10:27:54.709265   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:27:54 minicluster2 kubelet[16776]: E0928 10:27:54.722634   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:28:26.722607734 +0000 UTC m=+396.504781157 (durationBeforeRetry 32s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:28:26 minicluster2 kubelet[16776]: I0928 10:28:26.791157   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test2\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2\") on node \"minicluster2\" "
Sep 28 10:28:26 minicluster2 kubelet[16776]: E0928 10:28:26.799166   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:29:30.7989679 +0000 UTC m=+460.581141308 (durationBeforeRetry 1m4s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test2" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test2") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount" is still mounted by other references [/mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]
Sep 28 10:28:34 minicluster2 kubelet[16776]: I0928 10:28:34.033367   16776 reconciler.go:312] "operationExecutor.UnmountDevice started for volume \"cephfs-samenode-test1\" (UniqueName: \"kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1\") on node \"minicluster2\" "
Sep 28 10:28:34 minicluster2 kubelet[16776]: E0928 10:28:34.038558   16776 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1 podName: nodeName:}" failed. No retries permitted until 2021-09-28 10:30:36.038532506 +0000 UTC m=+525.820705932 (durationBeforeRetry 2m2s). Error: GetDeviceMountRefs check failed for volume "cephfs-samenode-test1" (UniqueName: "kubernetes.io/csi/rook-ceph.cephfs.csi.ceph.com^cephfs-samenode-test1") on node "minicluster2" : the device mount path "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount" is still mounted by other references [/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount /mnt/vda1/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount]

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 23 (15 by maintainers)

Most upvoted comments

Looking at the YAML files in https://github.com/ceph/ceph-csi/issues/2238#issuecomment-915539620, PVs cephfs-samenode-test1 and cephfs-samenode-test2 have different volumeHandles.

In CSI, volume_id (= volumeHandle in Kubernetes) “MUST contain enough information to uniquely identify this specific volume vs all other volumes supported by this plugin”. Therefore, to Kubernetes these are two separate PVs backed by two different volumes, and it treats them accordingly. That has consequences.

In this particular case, the two “separate” volumes “accidentally” have the same source in /proc/mounts, so they hit the mount reference check in GenerateUnmountDeviceFunc: its mount-reference processing finds mounts belonging to the other volume, and NodeUnstage is therefore never called.

I am not sure how to fix this correctly. We should drop the mount reference checks and detect in some other way that all pod-local volumes are unmounted, because two genuinely different CSI volumes could hit this check too. @msau42 @gnufied @jingxu97, any ideas?
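The failure mode above can be modeled with a simplified, hypothetical sketch of what a GetDeviceMountRefs-style lookup does over /proc/mounts (this is illustrative only, not the actual kubelet code; the function name and mount data are invented): it collects every other mount point whose source matches the source of the device mount path. When two staged volumes share the same CephFS source, each one's staging path shows up as a live "reference" of the other, so neither can be unmounted.

```python
# Simplified model of a mount-reference check (hypothetical, not kubelet code).
# Each mount is a (source, mountpoint) pair, as in /proc/mounts.

def get_device_mount_refs(mounts, device_mount_path):
    """Return all other mount points whose source equals that of device_mount_path."""
    source_by_mountpoint = {mp: src for src, mp in mounts}
    src = source_by_mountpoint[device_mount_path]
    return [mp for s, mp in mounts if s == src and mp != device_mount_path]

# Two static PVs staged from the same CephFS export: same source, different paths.
mounts = [
    ("10.0.0.1:6789:/", "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test1/globalmount"),
    ("10.0.0.1:6789:/", "/var/lib/kubelet/plugins/kubernetes.io/csi/pv/cephfs-samenode-test2/globalmount"),
]

# test1's staging path appears "still referenced" by test2's staging path,
# so UnmountDevice refuses to proceed and NodeUnstage is never sent.
print(get_device_mount_refs(mounts, mounts[0][1]))
```

This matches the "is still mounted by other references" errors in the kubelet log: the listed references are the other PV's staging and pod mount paths.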


As a totally separate issue, when I “fixed” the YAML files to use the same volumeHandle in both PVs, the volume manager correctly recognized that the two PVs represent the same volume and called NodeStage only once. However, the PV name is encoded in the staging path: https://github.com/kubernetes/kubernetes/blob/3e6d122ee1e34ca20f9cb2fae3d36810d8dbf937/pkg/volume/csi/csi_attacher.go#L661 As a result, the second NodePublish gets a wrong staging_path, containing a PV name that was never staged. I think volumes that return the same unique volume name from GetVolumeName should also have the same staging path.
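The second issue follows from how the staging path is derived: it is built from the PV name, not from the CSI volume_id. A minimal sketch (the path layout follows the csi_attacher.go line linked above; the helper name is invented for illustration):

```python
import os

# Kubelet's per-PV staging directory layout (simplified).
KUBELET_CSI_PV_DIR = "/var/lib/kubelet/plugins/kubernetes.io/csi/pv"

def staging_path(pv_name):
    # The PV name, not the CSI volume_id, is encoded in the path.
    return os.path.join(KUBELET_CSI_PV_DIR, pv_name, "globalmount")

# Two PVs wrapping the same volume_id still get distinct staging paths,
# so only one of them is actually staged; the NodePublish for the other
# PV receives a staging_path that was never the target of NodeStage.
print(staging_path("static-pv-1"))
print(staging_path("static-pv-2"))
```

If the path were keyed on the unique volume name instead of the PV name, both PVs would resolve to the same staged directory.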

@ADustyOldMuffin: You can’t reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

I’d like to reopen this as I’ve actively noticed this causing issues with CephFS, as it keeps the kernel client alive, which might hold locks on specific files/folders. Is there any way we can think of to mitigate this, either on the K8s side or in ceph-csi?

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

This is reproducible when a CSI driver implements the NodeStageVolume call and two different volumes can have the same source in the mount table.

  • I checked the NFS and EFS CSI drivers, and neither of them uses NodeStageVolume.
  • CSI HostPath driver implements NodeStage, but does not mount anything to the staging dir.
  • All drivers using block volumes + ext4 / xfs are safe too, because they will have a different device as the source of the mount.

So far, I found only CephFS to be affected by this issue for sure.

I created https://github.com/kubernetes/kubernetes/issues/105899 for the second issue.

Let’s focus here on two CSI volumes with the same mount source in the mount table.

I’ve got a manifest that can reproduce this reliably using CephFS here: https://github.com/ceph/ceph-csi/issues/2238#issuecomment-915539620.
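For reference, the shape of such a manifest is roughly the following. This is an illustrative sketch, not the exact file from the linked issue; the clusterID, fsName, capacity, and secret names are placeholders:

```yaml
# Two static PVs with *different* volumeHandles but the same CephFS rootPath.
# Kubernetes treats them as distinct volumes, yet both stage the same source.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-1
spec:
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    volumeHandle: cephfs-samenode-test1   # differs per PV
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      rootPath: /                         # same backend path in both PVs
      staticVolume: "true"
    nodeStageSecretRef:
      name: csi-cephfs-secret             # placeholder
      namespace: rook-ceph
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv-2
spec:
  accessModes: ["ReadWriteMany"]
  capacity:
    storage: 1Gi
  csi:
    driver: rook-ceph.cephfs.csi.ceph.com
    volumeHandle: cephfs-samenode-test2   # differs per PV
    volumeAttributes:
      clusterID: rook-ceph
      fsName: myfs
      rootPath: /
      staticVolume: "true"
    nodeStageSecretRef:
      name: csi-cephfs-secret             # placeholder
      namespace: rook-ceph
```

With a pod bound to each PV (via two static PVCs in different namespaces), deleting both pods triggers the UnmountDevice failures shown in the kubelet log above.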