aws-efs-csi-driver: Failed to create access point: AccessPointLimitExceeded: You have reached the maximum number of access points

/kind bug

What happened? aws-efs-csi-driver throws an error saying “Failed to create access point: AccessPointLimitExceeded: You have reached the maximum number of access points” after migrating to aws-efs-csi-driver v1.3.2.

Access Points are not being deleted when all PVCs in the namespace are deleted.

What you expected to happen? Access Points need to be deleted when the corresponding PVC is deleted.

How to reproduce it (as minimally and precisely as possible)? Install the driver using helm:

    helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
    helm repo update
    helm upgrade --install aws-efs-csi-driver --namespace kube-system aws-efs-csi-driver/aws-efs-csi-driver
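The access points in question come from dynamic provisioning, which creates one EFS Access Point per PVC. A minimal sketch of the kind of StorageClass and PVC involved, assuming access-point-based provisioning; the file system ID, object names, and size below are placeholders, not taken from this report:

```sh
# Sketch: StorageClass + PVC that make the driver create one Access Point per PVC.
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap      # access-point-based dynamic provisioning
  fileSystemId: fs-12345678     # placeholder EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
EOF
```

Deleting efs-claim is expected to remove the corresponding access point when the PV's reclaim policy is Delete, which is the behavior this report says is missing.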

Anything else we need to know?:

Environment AWS EKS

  • Kubernetes version (use kubectl version): v1.20.4
  • Driver version: v1.3.2

About this issue

  • State: closed
  • Created 3 years ago
  • Reactions: 2
  • Comments: 17 (2 by maintainers)

Most upvoted comments

@kbasv is there a way the EFS CSI Driver can have a different implementation that does not rely on Access Points? The hard limit of 120 access points can be extremely limiting in Kubernetes, since it essentially means you're limited to 120 PVCs, right? Then you need to create another EFS file system with another storage class to scale further… I think it defeats the purpose of the scalability aspect of EFS and will require manual intervention, or a completely different provisioner, to take advantage of EFS. This is the limitation we've hit for my project's use case with the EFS CSI Driver at the moment.
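For reference, the "another EFS with another storage class" workaround described above amounts to something like the following sketch; the file system ID and names are placeholders:

```sh
# Sketch: scaling past one file system's access point limit by pointing a
# second StorageClass at a second EFS file system (placeholder ID below).
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-2
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-87654321   # a second, separately created EFS file system
  directoryPerms: "700"
EOF
```

New PVCs then have to opt into efs-sc-2 explicitly, which is exactly the manual intervention the comment objects to.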

This is still a problem. 1000 volumes isn’t that many.

Why is this closed? This can be critical.

I think this issue should be reopened. Even if we might not get an implementation that does not use an AP for each PV(C), at least the APs for released PV(C)s should be deleted, IMO. We are currently working on a project where we deploy a lot of applications to a cluster each day and delete the ones from the day before, each having multiple EFS-backed PVCs. This leads to a lot of newly “Released” PVs each day, eating up our EFS file system’s access points. We now need to implement a workaround using a cronjob, a scheduled lambda, or a similar mechanism to identify the no-longer-used APs and delete them. I think such workarounds should not be needed.
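A rough sketch of that kind of cleanup job, assuming the AWS CLI and kubectl are available and that the driver encodes the access point in each PV’s volumeHandle as fs-xxx::fsap-xxx (worth verifying for your driver version); the file system ID is a placeholder and the script is illustrative, not a tested implementation:

```sh
#!/usr/bin/env bash
# Sketch: delete EFS access points that no Bound PersistentVolume references.
# Assumes volumeHandle format "fs-xxxxxxxx::fsap-xxxxxxxxxxxxxxxxx".
set -euo pipefail

FS_ID="fs-12345678"   # placeholder EFS file system ID

# Access point IDs referenced by PVs that are still Bound.
in_use=$(kubectl get pv \
  -o jsonpath='{.items[?(@.status.phase=="Bound")].spec.csi.volumeHandle}' \
  | tr ' ' '\n' | grep -o 'fsap-[0-9a-f]*' | sort -u || true)

# All access points that currently exist on the file system.
all=$(aws efs describe-access-points --file-system-id "$FS_ID" \
  --query 'AccessPoints[].AccessPointId' --output text | tr '\t' '\n')

for ap in $all; do
  if ! grep -qx "$ap" <<< "$in_use"; then
    echo "Deleting access point not referenced by any Bound PV: $ap"
    aws efs delete-access-point --access-point-id "$ap"
  fi
done
```

This could run as a Kubernetes CronJob or a scheduled Lambda, as suggested above; a dry-run pass that only prints the candidate access points is advisable before letting it delete anything.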