velero: Ark backup fails with the EFS provisioner

I used the efs-provisioner to create a PV for my pod, and Ark cannot back up the PV.

This is the YAML file used to create nginx with an EFS PV:

#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-example-efs
  labels:
    app: nginx-example-efs

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-logs
  namespace: nginx-example-efs
  labels:
    app: nginx-example-efs
spec:
  storageClassName: aws-efs-2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-example-efs
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-example-efs
    spec:
      volumes:
        - name: nginx-logs
          persistentVolumeClaim:
            claimName: nginx-logs
      containers:
      - image: nginx:1.7.9
        name: nginx
        ports:
        - containerPort: 80
        volumeMounts:
          - mountPath: "/var/log/nginx"
            name: nginx-logs
            readOnly: false
      tolerations:
        - key: "type"
          effect: "NoSchedule"
          value: "MEM"

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-example-efs
  name: my-nginx
  namespace: nginx-example-efs
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx-example-efs
  type: ClusterIP
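For reference, a backup matching the selector shown in the log below is typically created with something like the following. The exact invocation is not in the original report, so treat it as an assumption based on the Ark CLI of that era:

```shell
# Assumed invocation (not shown in the report): create a backup limited to
# resources labeled app=nginx-example-efs, with the 30-day TTL seen below.
ark backup create nginx-example-efs \
  --selector app=nginx-example-efs \
  --ttl 720h

# Inspect the result (this is the command that produces the output below).
ark backup describe nginx-example-efs
```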

Log:

Name:         nginx-example-efs
Namespace:    heptio-ark
Labels:       <none>
Annotations:  <none>

Namespaces:
  Included:  *
  Excluded:  <none>

Resources:
  Included:        *
  Excluded:        <none>
  Cluster-scoped:  auto

Label selector:  app=nginx-example-efs

Snapshot PVs:  auto

TTL:  720h0m0s

Hooks:  <none>

Phase:  Completed

Backup Format Version:  1

Expiration:  2018-07-25 14:42:49 +0700 +07

Validation errors:  <none>

Persistent Volumes: <none included>

time="2018-06-25T07:42:50Z" level=info msg="PersistentVolume is not a supported volume type for snapshots, skipping." backup=heptio-ark/nginx-example-efs group=v1 groupResource=persistentvolumeclaims logSource="pkg/backup/item_backupper.go:307" name=pvc-b33e8de0-784a-11e8-957d-12dd8b001c9e namespace=nginx-example
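That log line is the key: Ark's native snapshotter only supports block-store volume types (EBS on AWS), so an EFS-backed PV is skipped rather than snapshotted. With the restic integration introduced in Ark v0.9, such volumes can be backed up instead; restic is opted into per pod via an annotation listing the volumes to back up. A sketch against the Deployment above (annotation key per the Ark restic docs; the exact version requirement is an assumption):

```yaml
# Pod template fragment: opt the nginx-logs volume into restic backup
# (Ark v0.9+ restic integration; assumes the restic DaemonSet is running).
spec:
  template:
    metadata:
      labels:
        app: nginx-example-efs
      annotations:
        backup.ark.heptio.com/backup-volumes: nginx-logs
```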

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 18 (9 by maintainers)

Most upvoted comments

There are 29 instances of the restic pod (1 per node, and the output above shows that you have 29 nodes). It’s possible you’re looking at the logs from one of the older pods, before you set the image tag correctly. Please examine one of the new pods (created most recently) and confirm that its tag is correct, and then check to see if it’s running / look at the logs.
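That check can be carried out with standard kubectl. The `name=restic` label is an assumption based on the DaemonSet manifest quoted below:

```shell
# List the restic pods with their images and start times; the most recently
# created pods should show the corrected image tag.
kubectl -n heptio-ark get pods -l name=restic \
  -o custom-columns=NAME:.metadata.name,IMAGE:.spec.containers[0].image,STARTED:.status.startTime

# Then check the status and logs of one of the new pods.
kubectl -n heptio-ark logs <restic-pod-name>
```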

It should work

On Tue, Jun 26, 2018 at 7:21 AM RAI notifications@github.com wrote:

I mean in this YAML:

kind: DaemonSet
metadata:
  name: restic
  namespace: heptio-ark
spec:
  selector:
    matchLabels:
      name: restic
  template:
    metadata:
      labels:
        name: restic
    spec:
      serviceAccountName: ark
      securityContext:
        runAsUser: 0
      volumes:
        - name: cloud-credentials
          secret:
            secretName: cloud-credentials
        - name: host-pods
          hostPath:
            path: /var/lib/kubelet/pods
        - name: scratch
          emptyDir: {}
      containers:
        - name: ark
          image: gcr.io/heptio-images/ark:latest
          command:
            - /ark
          args:
            - restic
            - server
          volumeMounts:
            - name: cloud-credentials
              mountPath: /credentials
            - name: host-pods
              mountPath: /host_pods
              mountPropagation: HostToContainer
            - name: scratch
              mountPath: /scratch
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: HEPTIO_ARK_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: AWS_SHARED_CREDENTIALS_FILE
              value: /credentials/cloud
            - name: ARK_SCRATCH_DIR
              value: /scratch

I will remove all AWS cloud-credentials and add an annotation to use an IAM role (via kube2iam). Will it work?
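For the kube2iam route, the role is requested through a pod annotation. The role ARN below is a placeholder, and this assumes kube2iam is already deployed and the role trusts the node instance profile:

```yaml
# Pod template fragment: kube2iam requests this IAM role for the pod.
# The account ID and role name are hypothetical placeholders.
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::123456789012:role/ark-backup-role
```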

View it on GitHub: https://github.com/heptio/ark/issues/579#issuecomment-400272480