vsphere-csi-driver: Cluster installed with CSI does not detach disks from worker nodes on kubectl drain
/kind bug
What happened: On running kubectl drain against a node with CSI-provisioned PVC workloads,
the disks are unmounted but remain attached. Refer to the lsblk output below.
What you expected to happen: Disks should have been unmounted and detached from the host node
How to reproduce it (as minimally and precisely as possible):
1. Create a Kubernetes cluster.
2. Install the vSphere CSI driver for PV claims.
3. Create a stateful workload using a CSI storage class.
4. kubectl drain any node.
5. SSH to the worker node and run lsblk.
6. Observe in vCenter that the PVC disks are still attached.
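For reference, a minimal sketch of the reproduction, assuming a hypothetical StorageClass named vsphere-csi-sc and a StatefulSet named web; the csi.vsphere.vmware.com provisioner name is the driver's standard one, everything else is illustrative:

```
# Sketch only: object names below are illustrative, not taken from the affected cluster.

# 1. StorageClass backed by the vSphere CSI driver.
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vsphere-csi-sc
provisioner: csi.vsphere.vmware.com
EOF

# 2. StatefulSet whose volumeClaimTemplates use that class.
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: app
        image: busybox:1.31
        command: ["sh", "-c", "sleep 36000"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: vsphere-csi-sc
      resources:
        requests:
          storage: 20Gi
EOF

# 3. Drain a worker node that hosts one of the pods (flag names are the 1.16-era ones).
kubectl drain <worker-node> --ignore-daemonsets --delete-local-data

# 4. On that worker, the disk should eventually disappear from lsblk once detached;
#    in this report it remains attached.
ssh <worker-node> lsblk
```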
- This used to work with the in-tree VCP. Is the deviation from in-tree behavior intentional, or is this a bug?
Anything else we need to know?:
Environment:
- csi-vsphere version:
./vsphere-csi-node-ds.yaml:21: image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
./vsphere-csi-node-ds.yaml:43: image: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.2
./vsphere-csi-node-ds.yaml:44: imagePullPolicy: "Always"
./vsphere-csi-node-ds.yaml:91: image: quay.io/k8scsi/livenessprobe:v1.1.0
./vsphere-csi-controller-ss.yaml:24: image: quay.io/k8scsi/csi-attacher:v1.1.1
./vsphere-csi-controller-ss.yaml:36: image: gcr.io/cloud-provider-vsphere/csi/release/driver:v1.0.2
./vsphere-csi-controller-ss.yaml:43: imagePullPolicy: "Always"
./vsphere-csi-controller-ss.yaml:70: image: quay.io/k8scsi/livenessprobe:v1.1.0
./vsphere-csi-controller-ss.yaml:80: image: gcr.io/cloud-provider-vsphere/csi/release/syncer:v1.0.2
./vsphere-csi-controller-ss.yaml:83: imagePullPolicy: "Always"
./vsphere-csi-controller-ss.yaml:94: image: quay.io/k8scsi/csi-provisioner:v1.2.2
- vsphere-cloud-controller-manager version: Deployed with in-tree provider on PKS k8s cluster.
- Kubernetes version: 1.16.7
- vSphere version: 6.7 U3
- OS (e.g. from /etc/os-release): Ubuntu Xenial
- Kernel (e.g. uname -a):
- Install tools:
- Others:
Before the drain, two PVCs are attached and mounted, as seen below.
#### WORKER 1 #####
Two pod disks are visible as sdd and sde:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 20G 0 disk /var/vcap/data/kubelet/pods/739d63e9-a10b-44de-8e47-db3dc269a6d4/volumes/kubernetes.io~csi/pvc-1d8360db-92a7-4b9d-b87a-ce648459e957/moun
sdb 8:16 0 32G 0 disk
├─sdb2 8:18 0 28.1G 0 part /var/vcap/data
└─sdb1 8:17 0 3.9G 0 part [SWAP]
sr0 11:0 1 50K 0 rom
sde 8:64 0 20G 0 disk /var/vcap/data/kubelet/pods/384c2364-2d4b-4875-8023-299b13590590/volumes/kubernetes.io~csi/pvc-2479f9bf-fccd-4bb5-ab9a-11d8039a66a1/moun
sdc 8:32 0 20G 0 disk
└─sdc1 8:33 0 20G 0 part /var/vcap/store
sda 8:0 0 3G 0 disk
└─sda1 8:1 0 3G 0 part /
After the drain, the disks are still attached but the mounts have disappeared.
#### WORKER 1 #####
worker/3151aacc-5d14-466d-97c0-59a7540014f7:/var/vcap/bosh_ssh/bosh_dd21a632f8c8421# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdd 8:48 0 20G 0 disk
sdb 8:16 0 32G 0 disk
├─sdb2 8:18 0 28.1G 0 part /var/vcap/data
└─sdb1 8:17 0 3.9G 0 part [SWAP]
sr0 11:0 1 50K 0 rom
sde 8:64 0 20G 0 disk
sdc 8:32 0 20G 0 disk
└─sdc1 8:33 0 20G 0 part /var/vcap/store
sda 8:0 0 3G 0 disk
└─sda1 8:1 0 3G 0 part /
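One way to confirm the lingering attachment from the other side is to check vCenter and the Kubernetes API directly. A hedged sketch follows; the worker VM name is a placeholder and govc is assumed to be configured via the usual GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables:

```
# List the virtual disks still attached to the drained worker VM in vCenter.
govc device.ls -vm <worker-vm-name> | grep -i disk

# Check whether Kubernetes itself still tracks the attachment; a lingering
# VolumeAttachment object means the detach was never completed.
kubectl get volumeattachments -o wide
```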
About this issue
- State: closed
- Created: 4 years ago
- Reactions: 1
- Comments: 21 (11 by maintainers)
@luwang-vmware as @RaunakShah mentioned, you may not be using the right arguments, or RBAC rules may be missing. Can you compare your YAMLs with the following?

@EleanorRigby see #315, which may be why you saw this. I do see disks get detached, but a lot depends on the health of the single-replica controller.
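As a hedged sketch of the kind of checks being suggested (the pod, container, label, and service-account names are assumptions based on the controller StatefulSet manifest listed above, not confirmed values):

```
# Is the single-replica CSI controller StatefulSet healthy?
kubectl -n kube-system get statefulset vsphere-csi-controller
kubectl -n kube-system get pods -l app=vsphere-csi-controller

# Look for detach (ControllerUnpublishVolume) errors in the attacher sidecar.
kubectl -n kube-system logs vsphere-csi-controller-0 -c csi-attacher --tail=100

# The external csi-attacher needs RBAC on volumeattachments to complete a detach.
kubectl auth can-i patch volumeattachments \
  --as=system:serviceaccount:kube-system:vsphere-csi-controller

# Stale VolumeAttachment objects indicate the detach was never issued or never finished.
kubectl get volumeattachments
```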