examples: fsGroup securityContext does not apply to nfs mount
The example https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs works fine if the container using the NFS mount runs as the root user. If I use a securityContext to run as a non-root user, I have no write access to the mounted volume.
How to reproduce: here is nfs-busybox-rc.yaml with the securityContext added:
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      containers:
      - image: busybox
        command:
        - sh
        - -c
        - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:
          runAsUser: 10000
        volumeMounts:
        # name must match the volume name below
        - name: nfs
          mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs
Actual result:
kubectl exec nfs-busybox-2w9bp -t -- id
uid=10000 gid=0(root) groups=10000
kubectl exec nfs-busybox-2w9bp -t -- ls -l /
total 48
<..>
drwxr-xr-x 3 root root 4096 Aug 2 12:27 mnt
Expected result: the group ownership of the /mnt folder should be 10000 (the fsGroup value).
The mount options in the NFS PV are not allowed either, except for rw:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.23.137.115
    path: "/"
  mountOptions:
    # - rw            // is allowed
    # - root_squash   // error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - all_squash    // error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - anonuid=10000 // error during pod scheduling: mount.nfs: an incorrect mount option was specified
    # - anongid=10000 // error during pod scheduling: mount.nfs: an incorrect mount option was specified
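For what it's worth, root_squash, all_squash, anonuid and anongid are server-side export options rather than client mount options, which is why mount.nfs rejects them here. If you control the NFS server, the squashing can be configured in its export table instead. A minimal sketch, assuming a hypothetical export path and client subnet:

# /etc/exports on the NFS server (path and subnet are placeholders)
/exports  10.23.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=10000,anongid=10000)
# apply the change with: exportfs -ra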
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-rancher1", GitCommit:"f6320ca7027d8244abb6216fbdb73a2b3eb2f4f9", GitTreeState:"clean", BuildDate:"2018-05-29T22:28:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
About this issue
- Original URL
- State: open
- Created 6 years ago
- Reactions: 65
- Comments: 60 (3 by maintainers)
Commits related to this issue
- [manila] adding init to manila-api for nfs permissions 42424 is the default uid and gid coming from loci directly going for the workaround provided by https://github.com/kubernetes/examples/issues/2... — committed to sapcc/helm-charts by Carthaca a year ago
- try to solve permission problems with init container. found https://github.com/kubernetes/examples/issues/260#issuecomment-534160265 from which it seems it is known at least since 2018 that fsGroup d... — committed to Berodin/palworld-helmchart by deleted user 5 months ago
- Revert "[manila] enable rabbitmq persistence" This reverts commit 5bf9d4eedf46fdfe567cecc7235e6eb6f6c9e5b7. needs init container first due to kubernetes/examples/issues/260 — committed to sapcc/helm-charts by Carthaca 4 months ago
- Revert "[manila] enable rabbitmq persistence" This reverts commit 5bf9d4eedf46fdfe567cecc7235e6eb6f6c9e5b7. needs init container first due to kubernetes/examples/issues/260 — committed to sapcc/helm-charts by Carthaca 4 months ago
- Revert "[manila] enable rabbitmq persistence" This reverts commit 5bf9d4eedf46fdfe567cecc7235e6eb6f6c9e5b7. needs init container first due to kubernetes/examples/issues/260 — committed to sapcc/helm-charts by Carthaca 4 months ago
- [rabbitmq] fix permissions if persistence is enabled `fsGroupChangePolicy: "OnRootMismatch"` does not work for NFS mounts (also see kubernetes/examples/issues/260) — committed to sapcc/helm-charts by Carthaca 4 months ago
Why did this get closed with no resolution? I have this same issue. If there is a better solution than an init container please someone fill me in.
Would love for this to be addressed! In the meantime, here's how we're dealing with it…
In this example there are two pods mounting an AWS EFS volume via NFS. To enable a non-root user, we make the mount point accessible via an initContainer.
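A minimal sketch of that workaround, reusing the nfs claim from the manifests above; the pod name, image, and chown target are illustrative, and it assumes the export does not squash root (otherwise the init container's chown will fail):

apiVersion: v1
kind: Pod
metadata:
  name: nfs-nonroot   # hypothetical name
spec:
  securityContext:
    runAsUser: 10000
    runAsGroup: 10000
  initContainers:
  # Runs once as root before the app container and hands the NFS mount
  # over to the non-root user, since fsGroup is not applied to NFS.
  - name: fix-permissions
    image: busybox
    command: ["sh", "-c", "chown -R 10000:10000 /mnt"]
    securityContext:
      runAsUser: 0
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "date > /mnt/index.html && sleep 3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs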
Instead of fsGroup, use supplementalGroups to control access. Give me a like if I saved your day.
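If I read that correctly, the suggestion is to rely on group permissions that already exist on the export rather than on an ownership change by the kubelet. A sketch, assuming the files on the NFS server are already group-owned by GID 10000:

apiVersion: v1
kind: Pod
metadata:
  name: nfs-supplemental-groups   # hypothetical name
spec:
  securityContext:
    runAsUser: 10000
    # Adds GID 10000 to the container processes so they match the
    # group ownership already set on the NFS export.
    supplementalGroups: [10000]
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /mnt/ok && sleep 3600"]
    volumeMounts:
    - name: nfs
      mountPath: /mnt
  volumes:
  - name: nfs
    persistentVolumeClaim:
      claimName: nfs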
I disabled all sudo privileges for pod users for security reasons. So I can't configure the permissions of the mount point because Kubernetes won't do it for me, and I can't chown/chmod the mount point because my pod user can't sudo. How do I solve this problem?
+1 - facing this issue
+1 - facing this issue too!
+1
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Mark this issue as rotten with /lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
+1
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
@fejta-bot: Closing this issue.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Looks like it is working for me when specifying all of runAsUser, runAsGroup and fsGroup (version 1.24.1).
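For reference, the combination described there would look like this at the pod level (a sketch only; as other comments note, whether fsGroup actually takes effect still depends on the volume type):

spec:
  securityContext:
    runAsUser: 10000
    runAsGroup: 10000
    fsGroup: 10000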
+1 - facing this issue
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The docs are not totally clear about this, but I understand that this is already the default behaviour. The same section also indicates that not every volume type necessarily supports changing permissions.
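Presumably that refers to the fsGroupChangePolicy part of the securityContext docs; a sketch of the field for context (note the commit message above reports it does not help for NFS mounts):

spec:
  securityContext:
    fsGroup: 10000
    # "Always" is the default; "OnRootMismatch" only skips the recursive
    # ownership change when the volume root already matches fsGroup.
    fsGroupChangePolicy: OnRootMismatch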
Also having this issue (permission denied) with a MongoDB container NFS-mounting an EFS volume in AWS, on EKS 1.24: https://stackoverflow.com/questions/75670387/error-executing-postinstallation-eperm-operation-not-permitted-utime-bitn
Can anyone else confirm what @ramihoudroge said, that 1.24.1 works?
I've also found this thread, https://devops.stackexchange.com/questions/13939/how-to-allow-a-non-root-user-to-write-to-a-mounted-efs-in-eks, which mentions EFS access points. Has anyone had success with this?
/remove-lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
The same seems to be true for cifs mounts created through a custom volume driver: https://github.com/juliohm1978/kubernetes-cifs-volumedriver/issues/8
Edit: Looks like there is very little magic that Kubernetes does when mounting the volumes. The individual volume drivers have to respect the fsGroup configuration set in the pod. Looks like the NFS provider doesn't do that as of now.
Is https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client the place where this could be fixed?
@varun-da: You can’t reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Same issue: able to write but not able to read from the NFS-mounted volume. Kubernetes shows the mount as successful, but no luck.