rancher: Local snapshot restoration fails in Rancher downstream RKE2 clusters

Summary: The system agent appears to exhibit behavior similar to https://github.com/rancher/system-agent/issues/49 when I attempt to restore from a snapshot: the agent logs show a restore attempt that fails, and the agent seemingly retries the restoration indefinitely. Note that both local and S3 restoration fail, but this issue is specific to local restoration; S3 may behave slightly differently, but it also does not work.

Environment:

  • Rancher version: v2.6-head (commit f5761521d, pulled 8/20/21 9:25 AM Pacific)
  • Rancher cluster type: single-node Docker
  • Docker version: 20.10
  • Downstream cluster type: RKE2
  • Downstream K8s version: v1.21.3-rc6+rke2r2

Steps to Reproduce:

  1. In Rancher, provision any RKE2 cluster, e.g., with any node driver or as a custom cluster. A 1-node cluster is fine; use the default settings for the cluster.
  2. When the cluster is fully ready, take a snapshot.
  3. After the snapshot is fully ready/done, try to restore from it (an equivalent manual sequence is sketched below).
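
For reference, the UI flow in steps 2 and 3 corresponds roughly to the following manual RKE2 commands on the node. This is a hedged sketch: Rancher's agent drives this via a plan rather than an interactive shell, and the snapshot name/timestamp are placeholders.

# Take an on-demand local etcd snapshot (stored under
# /var/lib/rancher/rke2/server/db/snapshots/ by default)
rke2 etcd-snapshot save --name on-demand-test

# Restore: stop the server, then reset the cluster from the local snapshot file
systemctl stop rke2-server
rke2 server --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/on-demand-test-<timestamp>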

Expected Result: The cluster restores from the local snapshot successfully and returns to an Active state.

Actual Result: You will see an error in the Rancher UI a minute or two after the restoration: [screenshot]

The rancher-system-agent logs appear to loop, attempting the restore again and again. After a while (possibly 15+ minutes) the logs include a stack trace, and a pod is left orphaned (reported by the kubelet, based on what I see in the logs).
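
To observe this on the affected node, the retry loop and the orphaned pod directory can be inspected directly. A minimal sketch, assuming the standard systemd unit name and the default kubelet data directory:

# Follow the agent's retry loop live
journalctl -u rancher-system-agent -f

# List pod directories left on disk by the kubelet; the orphaned pod UID from the
# error snippet further below appears here even though the pod no longer exists
ls /var/lib/kubelet/pods/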

Stack trace log snippet:

Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: E0820 19:09:44.362892   32267 runtime.go:78] Observed a panic: \"invalid memory address or nil pointer dereference\" (runtime error: invalid memory address or nil pointer dereference)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: goroutine 231 [running]:"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/runtime.logPanic(0x43608a0, 0x724f430)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x95"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x86"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: panic(0x43608a0, 0x724f430)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/usr/local/go/src/runtime/panic.go:965 +0x1b9"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).updateDevicePath(0xc00013dc80, 0xc001683c88)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:598 +0x41"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).updateStates(0xc00013dc80, 0xc001683c88, 0xc0003504c0, 0x40)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:629 +0x50"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).syncStates(0xc00013dc80)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:434 +0x3d7"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).sync(0xc00013dc80)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:348 +0x4e"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).reconciliationLoopFunc.func1()"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:158 +0x118"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000d9c200)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000d9c200, 0x5326a00, 0xc000cf3ad0, 0x44fba01, 0xc0000a40c0)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0x9b"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000d9c200, 0x5f5e100, 0x0, 0x1, 0xc0000a40c0)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/apimachinery/pkg/util/wait.Until(...)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler.(*reconciler).Run(0xc00013dc80, 0xc0000a40c0)"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:146 +0x5f"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: created by k8s.io/kubernetes/pkg/kubelet/volumemanager.(*volumeManager).Run"
Aug 20 19:09:44 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:44Z" level=info msg="[stderr]: \t/go/src/kubernetes/pkg/kubelet/volumemanager/volume_manager.go:272 +0x1df"

Errors and orphaned-pod log snippet:

Aug 20 19:09:46 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:46Z" level=info msg="[stderr]: E0820 19:09:46.232555   32267 kubelet_volumes.go:179] \"There were many similar errors. Turn up verbosity to see them.\" err=\"orphaned pod \\\"d06e4af3-2421-491a-ad02-ee5b5745df80\\\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/d06e4af3-2421-491a-ad02-ee5b5745df80/volumes/kubernetes.io~configmap/config-volume: directory not empty\" numErrs=3"
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=info msg="[stderr]: time=\"2021-08-20T19:09:47Z\" level=info msg=\"Static pod cleanup completed successfully\""
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=info msg="[stderr]: time=\"2021-08-20T19:09:47Z\" level=info msg=\"Starting rke2 v1.21.3-rc6+rke2r2 (0a58cb4f9b947dae97f06130bf6764789d93994c)\""
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=info msg="[stderr]: time=\"2021-08-20T19:09:47Z\" level=info msg=\"Managed etcd cluster bootstrap already complete and initialized\""
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=info msg="[stderr]: time=\"2021-08-20T19:09:47Z\" level=info msg=\"Checking if S3 bucket  exists\""
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=info msg="[stderr]: time=\"2021-08-20T19:09:47Z\" level=fatal msg=\"starting kubernetes: preparing server: start managed database: Bucket name cannot be empty\""
Aug 20 19:09:47 dave-tryrestoreagain-pool1-16cb527c-ghj48 rancher-system-agent[11819]: time="2021-08-20T19:09:47Z" level=error msg="error syncing 'fleet-default/dave-tryrestoreagain-bootstrap-template-qpg4g-machine-plan': handler secret-watch: error applying plan: error executing instruction 4: exit status 1, requeuing"
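
The fatal error above ("Checking if S3 bucket  exists" followed by "Bucket name cannot be empty") suggests that S3 options are being passed to rke2 even though this is a local restore. Below is a hedged reconstruction of what the plan appears to execute; the flags are real rke2 server options, but the exact invocation is inferred from the log output, and <snapshot-name> is a placeholder:

# Reconstruction (assumption): --etcd-s3 is set on the restore command while the
# bucket name is empty, which matches the "Bucket name cannot be empty" fatal above
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/rke2/server/db/snapshots/<snapshot-name> \
  --etcd-s3 \
  --etcd-s3-bucket=""

The "error executing instruction 4: exit status 1, requeuing" line that follows shows the system agent requeuing the plan after the failure, which would explain the endless retry loop.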

Additional Info: Full rancher-system-agent logs (the snapshot is first mentioned on line 6007; the first cluster restore, the only one the user requests, happens at line 11935): rancher-system-agent1.log

Full rke2-server logs (I didn’t find anything useful in these): rke2-server1.log

Please let me know if I can assist in any way, such as by providing a Rancher server / RKE2 downstream cluster to reproduce with, additional logs, etc.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 16 (12 by maintainers)

Most upvoted comments

Just putting this here to note that this is a duplicate of https://github.com/rancher/system-agent/issues/49, which I believe @Oats87 is working on in system-agent.