velero: RFE: option to delete & recreate objects that already exist when restoring
I set up Ark 0.8.1 to back up my cluster, and afterwards I tested a restore just to make sure that `ark restore` would actually work. I got some warnings and errors, so I'm wondering whether they are expected or whether I'm doing something wrong.
This one is a warning, and I'm not sure why it fails. I'd expect Ark to replace this resource even if it already exists; maybe an Ark flag to force the restore could solve this.
kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.
This is an error:
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
Full `ark restore describe` output:
Giancarlos-MBPro:.ssh grubio$ ark restore describe logging-multiple-hostnames-20180501104707
Name: logging-multiple-hostnames-20180501104707
Namespace: heptio-ark
Labels: <none>
Annotations: <none>
Backup: logging-multiple-hostnames
Namespaces:
Included: *
Excluded: <none>
Resources:
Included: *
Excluded: nodes, events, events.events.k8s.io
Cluster-scoped: auto
Namespace mappings: <none>
Label selector: <none>
Restore PVs: auto
Phase: Completed
Validation errors: <none>
Warnings:
Ark: <none>
Cluster: not restored: persistentvolumes "pvc-138f24f1-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-13b0f8f2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-13d14da2-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-13f6562d-431c-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-37a6990b-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-37c27b62-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-37c9b935-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
not restored: persistentvolumes "pvc-6c54e367-430e-11e8-ac10-02c73bc75f00" already exists and is different from backed up version.
Namespaces:
default: not restored: services "kubernetes" already exists and is different from backed up version.
ingress: not restored: configmaps "intern-intern" already exists and is different from backed up version.
not restored: services "ingress-nginx-ingress-intern-controller-metrics" already exists and is different from backed up version.
not restored: services "ingress-nginx-ingress-intern-controller-stats" already exists and is different from backed up version.
not restored: services "ingress-nginx-ingress-intern-controller" already exists and is different from backed up version.
not restored: services "ingress-nginx-ingress-intern-default-backend" already exists and is different from backed up version.
not restored: services "ingress-oauth-proxy" already exists and is different from backed up version.
kube-system: not restored: configmaps "cert-manager-controller" already exists and is different from backed up version.
not restored: configmaps "ingress-shim-controller" already exists and is different from backed up version.
not restored: configmaps "monitoring.v69" already exists and is different from backed up version.
not restored: endpoints "kube-controller-manager" already exists and is different from backed up version.
not restored: endpoints "kube-scheduler" already exists and is different from backed up version.
not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473820" already exists and is different from backed up version.
not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473880" already exists and is different from backed up version.
not restored: jobs.batch "kube-system-cert-manager-cronjob-1524473940" already exists and is different from backed up version.
not restored: jobs.batch "kube-system-cert-manager-cronjob-1524488340" already exists and is different from backed up version.
not restored: jobs.batch "kube-system-cert-manager-job" already exists and is different from backed up version.
not restored: services "heapster" already exists and is different from backed up version.
not restored: services "kube-dns" already exists and is different from backed up version.
not restored: services "kube-system-kubernetes-dashboard" already exists and is different from backed up version.
not restored: services "tiller-deploy" already exists and is different from backed up version.
logging: not restored: configmaps "intern-logging-intern-logging" already exists and is different from backed up version.
not restored: services "cerebro-logging-cluster" already exists and is different from backed up version.
not restored: services "elasticsearch-discovery-logging-cluster" already exists and is different from backed up version.
not restored: services "elasticsearch-logging-cluster" already exists and is different from backed up version.
not restored: services "es-data-svc-logging-cluster" already exists and is different from backed up version.
not restored: services "kibana-logging-cluster" already exists and is different from backed up version.
not restored: services "logging-nginx-ingressintern-controller-metrics" already exists and is different from backed up version.
not restored: services "logging-nginx-ingressintern-controller-stats" already exists and is different from backed up version.
not restored: services "logging-nginx-ingressintern-controller" already exists and is different from backed up version.
not restored: services "logging-nginx-ingressintern-default-backend" already exists and is different from backed up version.
monitoring: not restored: configmaps "monitoring-kube-prometheus" already exists and is different from backed up version.
not restored: endpoints "alertmanager-operated" already exists and is different from backed up version.
not restored: endpoints "prometheus-operated" already exists and is different from backed up version.
not restored: services "monitoring-prometheus-pushgateway" already exists and is different from backed up version.
Errors:
Ark: <none>
Cluster: <none>
Namespaces:
kube-system: error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "69f1831d34b8a772e16fe4b53dfde156": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "2f971a1dcd6eb045c364011a4cd3eb0b": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-events-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "a78c3a37fa41e2979affd20e9b8e0111": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1e7be17cb58e298472eb0bcf5529d4ca": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "7b2a70d4cf5b688ab13ddbe564ef527e": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/etcd-server-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "etcd-server-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "0e92292cb0f619d5a229297600d7bb97": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-apiserver-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d454a354dcb2cb12783fa49f2386b6ba": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-controller-manager-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "1526b1178ede071d84be82486333151e": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-103-41.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5963c325107b331ab635aad75b94927b": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "bac2cc1636847764a0815d26720c8cd7": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-107-213.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "377aa0ca81598973093dac679d794bba": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-68-173.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "4b96cd34114ce182fb895b5851df1076": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-71-34.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "f97f3000e965824d1fbf2f5e271c5dcb": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "5092b3704cad1cae1ba58baa1f89c044": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-81-61.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "41e604c2a05ff59d4ca71eae2650b77b": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-82-127.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "8ad729bc65359d65c67211a9c8cad910": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-proxy-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "92a7e3e865f9d8fefcc21e84377b4f40": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-105-102.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-79-139.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
error restoring /tmp/318897152/resources/pods/namespaces/kube-system/kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal.json: Pod "kube-scheduler-ip-10-50-84-142.eu-west-1.compute.internal" is invalid: metadata.annotations[kubernetes.io/config.mirror]: Invalid value: "d5ec5961f20e838394c13c9314b9d39d": must set spec.nodeName if mirror pod annotation is set
Giancarlos-MBPro:.ssh grubio$
Still important I guess?
Yes, the flow would be
I also don’t think we’d ever want to delete a PV or PVC. We have another issue open for cloning preexisting PVs into a cluster (#192). We’ll need to make sure we special case things like PVs/PVCs here.
User story:
As a cluster operator, I want to use Ark as a mechanism to keep two clusters in sync. This might be Prod A and Prod B, or alternatively every night mirror Production to Staging so that we have a fresh environment for testing/staging.
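As a rough illustration of that flow with the current CLI (the schedule name and cron expression below are placeholders, and the exact restore syntax may differ between Ark versions):

```sh
# On the production cluster: take a nightly backup on a schedule
# (name and cron expression are illustrative).
ark schedule create nightly-prod --schedule "0 1 * * *"

# On the staging cluster, pointed at the same backup storage:
# list the backups produced by the schedule and restore the latest one.
ark backup get
ark restore create nightly-prod-20180501010000
```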
For stateless apps, this sounds like a healthy feature for us to add. I agree with Andy that we probably don’t want to delete PV/PVC by default.
That said, if the use-case is mirroring Production to Staging, I don’t want to keep around my old staging PV/PVCs. Perhaps we need another CLI flag for PV/PVC specifically?
`--conflict-strategy-volumes`?
Should we repurpose this issue as "RFE: option to delete & recreate objects that already exist when restoring"?
@archmangler yes, you could first manually delete the objects that you want to restore. It’s possible you’d run into issues where you couldn’t delete the PV/PVC because they were being used by a pod, though.
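For example, based on the conflicts reported in the restore output above (the exact `ark restore create` syntax may vary by Ark version):

```sh
# Delete the conflicting object so the restore can recreate it from the backup.
kubectl -n kube-system delete configmap cert-manager-controller

# Re-run the restore from the same backup; objects that still exist are skipped
# with a warning as before, but the ones you deleted are recreated.
ark restore create logging-multiple-hostnames
```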
I'm thinking maybe something like `--conflict-strategy` with options `replace` (delete what's in the cluster and create what's in the backup) and `preserve` (keep what's in the cluster and record a warning, as we're doing now). (All names TBD.)
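To make that concrete, usage might look something like the following; the flag and option names are hypothetical and, as noted above, TBD:

```sh
# Hypothetical: delete conflicting objects in the cluster and recreate them
# from the backup.
ark restore create logging-multiple-hostnames --conflict-strategy replace

# Hypothetical: keep what's in the cluster and only record a warning
# (the current behaviour).
ark restore create logging-multiple-hostnames --conflict-strategy preserve
```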