aws-efs-csi-driver: Dynamic provisioning not working for AWS EFS CSI Driver

/kind bug

What happened? I am trying to test dynamic provisioning with the AWS EFS CSI driver, but it is not working.

What you expected to happen? A PV should be created for the pod's PVC.

How to reproduce it (as minimally and precisely as possible)? I followed the steps listed at https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html
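
For reference, the StorageClass and PVC were based on the guide's dynamic provisioning example, roughly as below (the file system ID is a placeholder; my actual manifests may have differed slightly):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXX   # placeholder for the real file system ID
  directoryPerms: "700"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
  basePath: "/dynamic_provisioning"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi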

Anything else we need to know?:

> kubectl get pods          
NAME      READY   STATUS    RESTARTS   AGE
efs-app   0/1     Pending   0          7s

> kubectl get pvc
NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
efs-claim   Pending                                      efs-dummy       7s

> kubectl describe pvc efs-claim
Name:          efs-claim
Namespace:     default
StorageClass:  efs-dummy
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: efs.csi.aws.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Used By:       efs-app
Events:
  Type    Reason                Age                    From                         Message
  ----    ------                ----                   ----                         -------
  Normal  ExternalProvisioning  2m23s (x302 over 77m)  persistentvolume-controller  waiting for a volume to be created, either  by external provisioner "efs.csi.aws.com" or manually created by system administrator

> kubectl get sc
NAME                PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
efs-dummy           efs.csi.aws.com         Delete          WaitForFirstConsumer   false                  10s
efs-sc              efs.csi.aws.com         Delete          Immediate              false                  109s

Environment: Single-node EKS cluster. All resources and nodes are in the same zone. I also tried different chart versions: 1.2.3, 1.2.0, and 1.0.0.

 > kubectl version --short
 Client Version: v1.19.0
 Server Version: v1.19.8-eks-96780e
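
For reference, the driver was installed from its Helm chart; the install was along these lines (a sketch only, with kube-system assumed as the namespace and one of the chart versions tried above):

helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
    --namespace kube-system \
    --version 1.2.3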

Additional comments:

  • I tested Static Provisioning and it is working properly
  • Is there any additional configuration required for the EKS cluster for Dynamic Provisioning?
  • All logs look good
  • This could possibly be a bug, or something may be missing in the documentation

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 12
  • Comments: 28 (2 by maintainers)

Most upvoted comments

I actually had the same issue, but it turned out I had not specified basePath correctly.

I had this:

parameters:
  basePath: /

However, this doc states that:

Amazon EFS creates a root directory only if you have provided the CreationInfo: OwnUid, OwnGID, and permissions for the directory. If you do not provide this information, Amazon EFS does not create the root directory. If the root directory does not exist, attempts to mount using the access point will fail.

Based on that, I checked the dynamic provisioning example again and noticed that its basePath is not set to /.

I modified my StorageClass's basePath accordingly and it started working fine.
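
A sketch of the kind of StorageClass that ended up working, with basePath pointing at a subdirectory instead of the file system root (the file system ID is a placeholder):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXX            # placeholder
  directoryPerms: "700"
  basePath: "/dynamic_provisioning"    # a subdirectory, not "/"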

@RK-GITHUB Is your cluster private? If so, a VPC endpoint for "com.amazonaws.ap-northeast-1.elasticfilesystem" is required.
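
A sketch of creating that interface endpoint with the AWS CLI (all IDs below are placeholders; adjust the region in the service name):

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.ap-northeast-1.elasticfilesystem \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0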

Please reopen this issue. This error persists with dynamic provisioning, with or without basePath.

The kubectl kustomize command below needs to change from

kubectl kustomize \
    "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-" > public-ecr-driver.yaml

to

kubectl kustomize \
    "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master" > public-ecr-driver.yaml

However, static provisioning is working with EFS.

PS: Static provisioning works only when ref is changed to master.

@kbasv.

As a workaround, if you plan to use the file system root as your base path, avoid passing the basePath parameter.

It worked. This is my StorageClass now, without basePath set:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs
provisioner: efs.csi.aws.com
mountOptions:
- tls
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-XXXXXXXX
  directoryPerms: "700"
  gidRangeStart: "1000"
  gidRangeEnd: "2000"
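
For completeness, a PVC that consumes this class could look like the following (the storage request is required by the API but is not enforced by EFS):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs
  resources:
    requests:
      storage: 5Gi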

I installed the EFS CSI driver to mount EFS on EKS, following the Amazon EFS CSI driver guide.

I faced the error below while deploying the PersistentVolumeClaim.

Error from server (Forbidden): error when creating "claim.yml": persistentvolumeclaims "efs-claim" is forbidden: may only update PVC status

StorageClass.yaml -->

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    mountOptions:
      - tls

pv.yaml -->

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity:
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      storageClassName: efs-sc
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-xxxxxxxxxxx 

pvclaim.yaml -->

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: efs-sc
      resources:
        requests:
          storage: 5Gi
      selector:
        matchLabels:
          name: production-environment
          role: prod 

Kindly help me resolve this.
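
One thing worth checking in the manifests above, independent of the Forbidden error: the PVC uses a selector, so it will only bind to a PV whose labels match it. The PV in pv.yaml has no labels, so it would need metadata along these lines (label values copied from the claim's selector):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
  labels:
    name: production-environment
    role: prod
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxxxxx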

Error “Failed to fetch File System info: Describe File System failed” when trying to provision dynamically on private EKS is fixed by my PR #585, which I submitted a week ago. It would be great if someone could review/merge it.
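
In the meantime, a quick way to check whether the EFS API is reachable from inside the VPC (for example, from a node in the private subnets) is a describe call with the AWS CLI (file system ID is a placeholder):

aws efs describe-file-systems --file-system-id fs-XXXXXXXX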