aws-efs-csi-driver: MountVolume.SetUp failed for volume "efs-pv"

/kind bug

Hello,

I followed this documentation: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html. As described in the user guide, I'm trying to use the "Multiple Pods Read Write Many" example.

I have an EKS cluster, and I have an issue with my pod creation: apparently it is unable to mount the EFS volume.

Here are some logs I found:

MountVolume.SetUp failed for volume "efs-pv" : kubernetes.io/csi: mounter.SetupAt failed: rpc error: code = Internal desc = Could not mount "fs-xxxxxxx:/" at "/var/lib/kubelet/pods/eec7379e-0d59-440d-87c3-050c59c27b45/volumes/kubernetes.io~csi/efs-pv/mount": mount failed: exit status 1
Mounting command: mount
Mounting arguments: -t efs fs-xxxxxxx:/ /var/lib/kubelet/pods/eec7379e-0d59-440d-87c3-050c59c27b45/volumes/kubernetes.io~csi/efs-pv/mount
Output: Traceback (most recent call last):
  File "/sbin/mount.efs", line 1375, in <module>
    main()
  File "/sbin/mount.efs", line 1355, in main
    bootstrap_logging(config)
  File "/sbin/mount.efs", line 1031, in bootstrap_logging
    raw_level = config.get(CONFIG_SECTION, 'logging_level')
  File "/lib64/python2.7/ConfigParser.py", line 607, in get
    raise NoSectionError(section)
ConfigParser.NoSectionError: No section: 'mount'
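The traceback shows the efs-utils mount helper failing inside Python's ConfigParser because its config file has no [mount] section. A minimal sketch reproducing that error class (the option name and section are taken from the traceback; the config contents here are illustrative, and Python 3's configparser behaves like Python 2's ConfigParser for this case):

```python
# Reproduce the error class from the mount.efs traceback:
# ConfigParser.NoSectionError: No section: 'mount'
from configparser import ConfigParser, NoSectionError

config = ConfigParser()
# Illustrative config that is missing the [mount] section the helper expects.
config.read_string("[mount-watchdog]\nenabled = true\n")

try:
    config.get("mount", "logging_level")  # same lookup as /sbin/mount.efs
except NoSectionError as exc:
    print(exc)  # → No section: 'mount'
```

This suggests the efs-utils configuration on the node is missing or incomplete, rather than a Kubernetes-side problem.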

and

Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage default-token-xvnhr]: timed out waiting for the condition

I checked many things: my security groups are fully open inbound and outbound, and my pv.yaml, pod1.yaml, classtorage.yml, and claim.yaml are exactly the same as here: https://github.com/kubernetes-sigs/aws-efs-csi-driver/tree/master/examples/kubernetes/multiple_pods
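For reference, the relevant parts of that example look roughly like this (a sketch, not the authoritative manifests — check the linked repo; fs-xxxxxxx is the placeholder filesystem ID from the logs above):

```yaml
# Sketch of the multiple_pods example's StorageClass and PersistentVolume.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxx  # must be your actual EFS filesystem ID
```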

Environment

  • Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.6-beta.0", GitCommit:"e7f962ba86f4ce7033828210ca3556393c377bcc", GitTreeState:"clean", BuildDate:"2020-01-15T08:26:26Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"darwin/amd64"}
  • Server Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.8-eks-e16311", GitCommit:"e163110a04dcb2f39c3325af96d019b4925419eb", GitTreeState:"clean", BuildDate:"2020-03-27T22:37:12Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}

  • Driver version: presumably the latest; I installed it today following the doc.

If you have any ideas or recommendations, that would be very helpful. The user guide looks so simple that I'm frustrated I can't make it work, and I can't see what I did wrong.

Thanks in advance

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 7
  • Comments: 34 (15 by maintainers)

Most upvoted comments

Just for anyone running into this issue, in case I can save you some debugging time: I saw this error when I used the incorrect security groups for my EFS filesystem, so it's worth double-checking. The SG applied to each mount target should be the one that "allows inbound NFS traffic from within the VPC," which is created in these steps: https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
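A hedged sketch of what that inbound rule looks like with the AWS CLI (the group ID and CIDR below are placeholders for your own mount-target SG and VPC CIDR; follow the linked guide for the authoritative steps):

```shell
# Illustrative only: allow inbound NFS (TCP 2049) from the VPC CIDR on the
# security group attached to each EFS mount target.
# sg-0123456789abcdef0 and 10.0.0.0/16 are placeholder values.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 2049 \
  --cidr 10.0.0.0/16
```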

@nomopo45 Since #202, latest isn’t actually latest anymore. You may be bouncing off of #196.

From now until we have a new release, I would suggest keeping your YAML and image tag in sync. For example, the current tip of the master branch is at commit 778131e6fdfce466bbada3fb08b1e5bdd50c072b, which corresponds to image tag 778131e. I do this in my operator by locking the YAML from that commit to that image tag.
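The commit-to-tag mapping mentioned above is just the short (7-character) form of the full commit SHA; a tiny sketch of deriving it when pinning tags in automation (the helper name is mine, not from the repo):

```python
# Derive the short image tag from a full git commit SHA, as in the comment
# above: 778131e6fdfce466bbada3fb08b1e5bdd50c072b -> 778131e.
def image_tag_for_commit(sha: str, length: int = 7) -> str:
    """Return the abbreviated commit hash used as the image tag."""
    return sha[:length]

commit = "778131e6fdfce466bbada3fb08b1e5bdd50c072b"
print(image_tag_for_commit(commit))  # → 778131e
```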

@nmtulloch27 I can't help myself! I took a look and brought both options, kustomize build ... and kubectl apply -k, into line with one another, I think. Can you give it a go? It's probably easiest to clone github.com/ossareh/aws-efs-csi-driver, check out the fix_kustomize_dev_overlay branch, and run the kubectl apply -k from there.

Haha nice, I'll do that right now! This is what I see now:

error: error validating "aws-efs-csi-driver/deploy/kubernetes/overlays/dev/": error validating data: ValidationError(DaemonSet.spec.template.spec.containers): invalid type for io.k8s.api.core.v1.PodSpec.containers: got "map", expected "array"; if you choose to ignore these errors, turn validation off with --validate=false

Did you see this as well?

It's complaining about the latest_image file… under containers, maybe efs-plugin: should be - name: efs-plugin. Yup, it ran after I changed it to that.
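In other words, the overlay rendered containers as a map where the Kubernetes API expects an array. A hedged sketch of the shape difference (the image tag and surrounding fields are illustrative; only the efs-plugin name comes from the comment above):

```yaml
# Broken shape: containers rendered as a map, which fails API validation.
spec:
  containers:
    efs-plugin:
      image: amazon/aws-efs-csi-driver:778131e
---
# Fixed shape: containers as an array; each entry carries a name field.
spec:
  containers:
    - name: efs-plugin
      image: amazon/aws-efs-csi-driver:778131e
```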

@noamran

You might be able to solve this problem by using an updated image of amazon/aws-efs-csi-driver, as described in the Stack Overflow question below.
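If you want to try that, a rough sketch of bumping the image in place (the DaemonSet name efs-csi-node is an assumption about how the driver was deployed; the container name efs-plugin matches the manifest discussed above — verify both against your cluster first):

```shell
# Illustrative only: point the node DaemonSet's efs-plugin container at a
# newer image. "efs-csi-node" is an assumed DaemonSet name; check yours with:
#   kubectl -n kube-system get daemonset
kubectl -n kube-system set image daemonset/efs-csi-node \
  efs-plugin=amazon/aws-efs-csi-driver:latest
```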

https://stackoverflow.com/questions/62447132/mounting-efs-in-eks-cluster-example-deployment-fails