kubernetes: [API] PVC stuck in pending for NFS/EFS storage class. Corresponding PV is not automatically created.

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
/sig aws

What happened: I have created a PVC for an EFS volume, with the ConfigMap in place.

{
  "kind": "PersistentVolumeClaim",
  "apiVersion": "v1",
  "metadata": {
    "name": "efs",
    "namespace": "default",
    "annotations": {
      "volume.beta.kubernetes.io/storage-class": "aws-efs"
    }
  },
  "spec": {
    "accessModes": [
      "ReadWriteMany"
    ],
    "resources": {
      "requests": {
        "storage": "1Gi"
      }
    }
  }
}
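As a side note: on 1.9 the beta annotation above still works, but spec.storageClassName is the stable field for requesting a class (available since Kubernetes 1.6). An equivalent claim using it, keeping the same names, would be:

```yaml
# Equivalent claim using the stable spec.storageClassName field
# instead of the volume.beta.kubernetes.io/storage-class annotation.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  namespace: default
spec:
  storageClassName: aws-efs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
```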

aws-efs StorageClass is available:

# kubectl get sc
NAME      PROVISIONER           AGE
aws-efs   example.com/aws-efs   1d

However, the PVC is stuck in the Pending state. kubectl describe output for the PVC:

# kubectl describe pvc efs
Name:          efs
Namespace:     default
StorageClass:  aws-efs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=aws-efs
               volume.beta.kubernetes.io/storage-provisioner=example.com/aws-efs
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  ExternalProvisioning  1m (x62 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator
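The ExternalProvisioning event means the PV controller is waiting on the external provisioner named in the claim's storage-provisioner annotation; provisioning only happens if that name exactly matches the PROVISIONER_NAME the efs-provisioner pod registers with. A minimal sketch of that comparison (the annotation line is copied from the describe output above; the expected name is taken from the StorageClass):

```shell
# Annotation line as shown in the `kubectl describe pvc efs` output above.
ANNOTATION='volume.beta.kubernetes.io/storage-provisioner=example.com/aws-efs'

# The provisioner the claim is waiting for is everything after the '='.
WAITING_FOR="${ANNOTATION#*=}"

# This must match the PROVISIONER_NAME env var of the running efs-provisioner
# pod (and the provisioner: field of the StorageClass).
EXPECTED="example.com/aws-efs"

if [ "$WAITING_FOR" = "$EXPECTED" ]; then
  echo "provisioner names match: $WAITING_FOR"
else
  echo "mismatch: claim waits for $WAITING_FOR, provisioner registers $EXPECTED"
fi
```

If the names match and the claim still pends, the provisioner pod itself is usually not running (as the accepted fix below shows, typically for RBAC reasons).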

What you expected to happen: The corresponding PV should have been automatically created, since the StorageClass annotation was used.

How to reproduce it (as minimally and precisely as possible): Follow the steps for EFS PVC creation, with valid EFS mount details.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T20:55:30Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release): CentOS Linux 7
  • Kernel (e.g. uname -a): 3.10.0-693.21.1.el7
  • Install tools:
  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Reactions: 8
  • Comments: 22

Most upvoted comments

@tonybranfort provided the right fix. In summary, I had to update my deployment and add a ClusterRole, ClusterRoleBinding, Role, RoleBinding, and ServiceAccount. The deployment template provided expected a directory on my EFS volume called /persistentvolumes; I had to replace this with just / for the pod to start.

See below for configuration I used:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner
data:
  file.system.id: xxxxxxxxxxxxxx
  aws.region: xxxxxxxxxxxxx
  provisioner.name: example.com/aws-efs
  dns.name: ""
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccount: efs-provisioner
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: DNS_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: dns.name
                  optional: true
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: xxxxxxxxxxxxxxx.efs.xxxxxxxxxxx.amazonaws.com
            path: /
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: aws-efs
provisioner: example.com/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: efs
  annotations:
    volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: efs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: efs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
subjects:
  - kind: ServiceAccount
    name: efs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-efs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: efs-provisioner

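The nfs.server value in the Deployment above follows the regional EFS DNS pattern <file-system-id>.efs.<aws-region>.amazonaws.com. A quick way to build and sanity-check that value before pasting it into the manifest (the filesystem ID and region here are placeholders, not values from this issue):

```shell
# Placeholder values -- substitute your real EFS filesystem ID and AWS region.
FILE_SYSTEM_ID="fs-12345678"
AWS_REGION="us-east-1"

# Regional EFS mount-target DNS name, as expected in the nfs.server field.
DNS_NAME="${FILE_SYSTEM_ID}.efs.${AWS_REGION}.amazonaws.com"
echo "$DNS_NAME"
```

The same two values also feed the file.system.id and aws.region keys of the ConfigMap, so keeping them consistent in one place avoids a mismatch between the ConfigMap and the NFS mount.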
If you’re using RBAC (which kops seems to enable by default), make sure you’ve created a ServiceAccount. Look at deployment.yaml rather than manifest.yaml.

You’ll also need rbac.yaml. The readme describes this but doesn’t mention the service account.

Try kubectl describe pvc <pvc-name> and see if there is an error. I found that something does not support ReadWriteMany; try ReadWriteOnce.

@saadzaman

Thank you very much for the suggestion: ReadWriteOnce worked for me as well.

EFS or NFS should support ReadWriteMany. That’s the whole point of using them!

/sig storage

Please help: my efs-provisioner pod is stuck in ContainerCreating.