vsphere-csi-driver: fsGroup is not applied on volumes provided by the VMware CSI driver
Is this a BUG REPORT or FEATURE REQUEST?: Bug report. When deploying a Pod that sets fsGroup and a non-root user in its security context and mounts a VMware volume (PV/PVC), the fsGroup setting is not applied: the group ownership and setgid bit are not set on the files on the volume.
/kind bug
What happened: A Pod presented with a PV/PVC from the VMware CSI driver is unable to apply fsGroup on the data volume.
What you expected to happen:
VMware CSI PV/PVCs should support fsGroup for less-privileged (non-root) Pods.
How to reproduce it (as minimally and precisely as possible):
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: alpine-privileged
  labels:
    app: alpine-privileged
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine-privileged
  template:
    metadata:
      labels:
        app: alpine-privileged
    spec:
      serviceAccountName: test-sa-psp
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      containers:
      - name: alpine-privileged
        image: alpine:3.9
        command: ["sleep", "1800"]
        volumeMounts:
        - name: data
          mountPath: /data
        securityContext:
          readOnlyRootFilesystem: false
  volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: test
      volumeMode: Filesystem
```
```
ccdadmin@seliicbl01481-ns2-testbed2-m1:~> kubectl -n=test-1 exec alpine-privileged-0 -it -- /bin/sh
/ $ ls -lrth
total 8
drwxr-xr-x   11 root  root  125 Apr 23 13:10 var
drwxr-xr-x    7 root  root   66 Apr 23 13:10 usr
drwxrwxrwt    2 root  root    6 Apr 23 13:10 tmp
drwxr-xr-x    2 root  root    6 Apr 23 13:10 srv
drwx------    2 root  root    6 Apr 23 13:10 root
drwxr-xr-x    2 root  root    6 Apr 23 13:10 opt
drwxr-xr-x    2 root  root    6 Apr 23 13:10 mnt
drwxr-xr-x    5 root  root   44 Apr 23 13:10 media
drwxr-xr-x    5 root  root  185 Apr 23 13:10 lib
drwxr-xr-x    2 root  root    6 Apr 23 13:10 home
drwxr-xr-x    2 root  root 4.0K Apr 23 13:10 sbin
drwxr-xr-x    2 root  root 4.0K Apr 23 13:10 bin
dr-xr-xr-x   13 root  root    0 Sep 14 11:21 sys
drwxr-xr-x    1 root  root   21 Sep 18 09:21 run
dr-xr-xr-x  587 root  root    0 Sep 18 09:21 proc
drwxr-xr-x    1 root  root   66 Sep 18 09:21 etc
drwxr-xr-x    5 root  root  360 Sep 18 09:21 dev
drwxr-xr-x    3 root  root   18 Sep 18 09:21 data
/ $ cd data/
/data $ ls
demo
/data $ ls -lrth
total 4
drwxr-xr-x    3 root  root 4.0K Sep 18 09:21 demo
/data $ mkdir test
mkdir: can't create directory 'test': Permission denied
```
Anything else we need to know?:
Environment:
- csi-vsphere version: vmware/vsphere-block-csi-driver:v2.0.0
- vsphere-cloud-controller-manager version: gcr.io/cloud-provider-vsphere/cpi/release/manager:latest
- Kubernetes version: 1.17.3
- vSphere version: 6.7U3
- OS (e.g. from /etc/os-release): SUSE Linux Enterprise Server 15 SP1
- Kernel (e.g. `uname -a`): Linux master-node 4.12.14-197.45-default #1 SMP Thu Jun 4 11:06:04 UTC 2020 (2b6c749) x86_64 x86_64 x86_64 GNU/Linux
- Install tools:
- Others:
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 3
- Comments: 26 (11 by maintainers)
Yes, it would. Setting it on the external-provisioner avoids having to set it on each individual StorageClass; a value set on a StorageClass supersedes whatever is set on the external-provisioner. Note that you need external-provisioner v2.0.0 for this feature. I've verified both options and they work. Can you give it a try on your setup too and let me know how it goes?
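The two options described above can be sketched roughly as follows. This is illustrative, not taken from this issue: the `ext4` fstype and the StorageClass name are assumptions, and the sidecar args shown are only the relevant subset.

```yaml
# Option 1: set a default fstype on the csi-provisioner sidecar of the
# vSphere CSI controller (requires external-provisioner v2.0.0+). It
# applies to every StorageClass that does not specify its own fstype.
# In the csi-provisioner container spec:
#   args:
#     - "--csi-address=$(ADDRESS)"
#     - "--default-fstype=ext4"

# Option 2: set the fstype per StorageClass; this supersedes the
# provisioner-level default.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: test                      # hypothetical name
provisioner: csi.vsphere.vmware.com
parameters:
  csi.storage.k8s.io/fstype: ext4
```

Either way, the point is that kubelet only applies fsGroup ownership when the volume has a defined fstype, which is why an unset fstype shows up as the "Permission denied" behavior in the report above.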
Since 1.19 Kubernetes does a check here to verify whether a CSI Driver supports fsGroup: https://github.com/kubernetes/kubernetes/blob/dd466bccde8176bd390fcf712c0752ae94444742/pkg/volume/csi/csi_mounter.go#L374
The field it checks ultimately comes from the CSIDriver object's spec (see the fsGroupPolicy field at https://kubernetes-csi.github.io/docs/csi-driver-object.html). However, the default value looks OK, and according to the source it retains the old behavior.
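For reference, the fsGroupPolicy field mentioned above lives on the CSIDriver object registered by the driver. A sketch of what that looks like; the policy value shown is the documented default, not necessarily what the vSphere driver ships, and the field only exists from Kubernetes 1.19 on:

```yaml
apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  attachRequired: true
  podInfoOnMount: false
  # ReadWriteOnceWithFSType is the default and retains the pre-1.19
  # behavior: fsGroup is applied only to ReadWriteOnce volumes that have
  # a defined fstype. Setting this to "File" would make kubelet always
  # attempt to apply fsGroup ownership and permissions.
  fsGroupPolicy: ReadWriteOnceWithFSType
```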
Could this be related to it? We were using K8S 1.19 when testing this.
@Anil-YadavK8s Do you remember what version you used?
I also encountered this issue.
Here’s the statefulset I used: stateful-fsgroup.txt
And I ran a shell with `kubectl exec -it <pod name> sh` and went to the mount path:

```
cd /usr/sw/adb
touch test   # This fails
mkdir test   # This fails
```
Provisioner used: csi.vsphere.vmware.com
(Note that I am not getting this issue with Portworx, Ceph, EBS, etc. when I apply the exact same statefulset yaml)
It is expected that an unprivileged Pod running as a non-root UID can create, access, and delete files and directories in a mounted PVC when fsGroup is specified in the Pod's security context. Yet with csi.vsphere.vmware.com this is not the case.
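For comparison, when fsGroup is honored (as with the other drivers mentioned above), kubelet changes the group of the mount point to the fsGroup GID and sets the setgid bit, and the container process gets that GID as a supplementary group, so writes succeed. An illustrative transcript, assuming the `fsGroup: 2000` / `runAsUser: 1000` values from the manifest in this issue (sizes and dates are made up):

```
/ $ id
uid=1000 gid=0(root) groups=2000
/ $ ls -ld /data
drwxrwsr-x 3 root 2000 4096 Sep 18 09:21 /data
/ $ mkdir /data/test     # succeeds
```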