rancher: local cluster upgrade does not work
I edited my local cluster and selected a newer version of Kubernetes in order to upgrade it. However, the upgrade did not happen, and the cluster now flashes as "upgrading" in the cluster view all the time.
Already tried to restart the Rancher server and the node hosting the Rancher itself but nothing helps. Also, since it’s “upgrading”, I can’t revert to an older version.
| Useful | Info |
|---|---|
| Versions | Rancher v2.5.0, UI: v2.5.0 |
| Route | undefined |
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 7
- Comments: 18
EDIT: I did the following (see below) and now I can no longer access my main Rancher Cluster Overview. I can access other parts of Rancher through various URLs, though. So yeah, it might be a bad idea. I see two problems in my logs: the fleet-system gitjob can't be updated because another change happened in between, and `[ERROR] could not convert gke config to map`. So proceed with caution.

OK, I managed to sort of fix / work around it. Note: this might have unforeseen consequences down the line (maybe somebody else can judge this)…
Steps:
{"k3supgradeStrategy":{"drainServerNodes":false,"drainWorkerNodes":false,"serverConcurrency":1,"type":"/v3/schemas/clusterUpgradeStrategy","workerConcurrency":1},"kubernetesVersion":"v1.18.8+k3s1","type":"/v3/schemas/k3sConfig"}
(Or whatever version your k3s cluster used to be on; you can still see the old version on the Dashboard.) After that, the upgrade message will stop flashing. However, where there used to be Edit cluster / View in API etc., I now see "No actions available". I'm not sure if I broke something by doing these steps, or if this is the intended behavior since 2.5.2.
Other things I noticed: if you press Save while editing the cluster at step 3 without setting the concurrency values to 1, it refuses because they can't be empty or 0. That is likely why simply editing the field in the API earlier didn't work: it rejected the value for my k3sConfig. So you might only need steps 6 through 11, but I wrote down exactly what I did to be sure.
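For anyone attempting the same API edit, here is a minimal sketch of what it could look like with curl, assuming the local cluster's ID is `local` and that you have created an API key in the Rancher UI. The URL, token and file name are placeholders, and the k3s version should be whatever your cluster was actually running before the failed upgrade:

```bash
# Illustrative only: RANCHER_URL, TOKEN and the target version are placeholders.
RANCHER_URL="https://rancher.example.com"
TOKEN="token-xxxxx:secret"      # API key created under API & Keys in the UI
CLUSTER_ID="local"

# Fetch the full cluster object first, so the PUT sends every required field.
curl -sk -u "$TOKEN" "$RANCHER_URL/v3/clusters/$CLUSTER_ID" > cluster.json

# Edit cluster.json by hand: set k3sConfig.kubernetesVersion back to the old
# version (e.g. v1.18.8+k3s1) and keep serverConcurrency/workerConcurrency at 1.

# Send the modified object back.
curl -sk -u "$TOKEN" -X PUT \
  -H "Content-Type: application/json" \
  -d @cluster.json \
  "$RANCHER_URL/v3/clusters/$CLUSTER_ID"
```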
I'm hitting the same problem. Does anybody know how to solve it?
Same here, K3S v1.18.8+k3s1
@maggieliu It appears that the fix for that isn't foolproof. I run a single-node Docker install as well, currently on v2.5.5, and the option to upgrade was available for me… The local cluster was running K3s v1.18.8+k3s1; I tried upgrading it to v1.19.5+k3s2 since the option was there, and now it's just blinking on and off. The API shows it as:
I tried editing the k3sConfig field in the API to see if I could revert it, but it won't let me (error 422, Unprocessable Entity).
Glad to hear my tinkering fixed it for you. I tested this recently in 2.5.8 and the upgrade button is no longer available.
Nope. Rancher in a single-node Docker install uses K3s internally to set up a cluster. K3s is basically Kubernetes in a single binary. Upgrading it (when used as an actual cluster, NOT INSIDE THIS CONTAINER*) is nothing more than stopping K3s, replacing the binary and restarting K3s; see the sketch after the footnote below. In this case you don't have to worry about that, as new versions of the Rancher Docker image are automatically bundled with newer versions of K3s and thus upgrade the Kubernetes version.
* For real, don't go replacing the binary in this container. I tried doing that on an older version of Rancher and it didn't go over well. (Mind you, that was a really old version of K3s, and the only reason I was attempting it is that the old version didn't have automatic certificate renewal yet; I was dealing with expired certificates and running out of options.)
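For reference, a rough sketch of what that upgrade looks like on a real, standalone K3s node (not the K3s bundled inside the Rancher container). The version shown is just an example, and the manual path assumes the default install location:

```bash
# Option 1: let the official install script replace the binary and restart
# the service, pinning the version you want (example version shown).
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.19.5+k3s2" sh -

# Option 2: do it by hand on a default systemd install.
sudo systemctl stop k3s
sudo install -m 755 ./k3s /usr/local/bin/k3s   # ./k3s = downloaded release binary
sudo systemctl start k3s
```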
As a side note: it is now supported (as of Rancher 2.5.x) to migrate from a single-node Docker installation to Rancher installed in a cluster. In order to do so, you'll have to make a backup using the rancher-backup operator following the instructions here: https://rancher.com/docs/rancher/v2.5/en/backups/back-up-rancher/ and restore it in a new cluster using https://rancher.com/docs/rancher/v2.5/en/backups/migrating-rancher/
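As a rough illustration of that first step, the rancher-backup chart adds a `Backup` custom resource; a minimal one-off backup could look something like this (the resource name is made up, and the exact fields are described in the linked docs):

```bash
# Assumes the rancher-backup operator chart is already installed in the
# cluster running Rancher; "migration-backup" is just an example name.
kubectl apply -f - <<'EOF'
apiVersion: resources.cattle.io/v1
kind: Backup
metadata:
  name: migration-backup
spec:
  resourceSetName: rancher-resource-set
EOF
```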
I plan on doing a more elaborate blog post / article on that over at the Suse Community (https://community.suse.com) and maybe somewhere else. But that won’t be ready for a few more weeks. Couple of important notes on it though:
- `HostPath` turns into "inside the Docker container" => you can work around this by first bind-mounting a directory to that directory inside the container (or get your backup out using `docker cp`; see the sketch below). However, I didn't want to deal with figuring out how to do that in reverse for the restore just now. Will do that for my blog post though.
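For what it's worth, a rough sketch of both workarounds. The container name and paths here are made up; the actual backup location depends on how the backup StorageLocation is configured:

```bash
# Copy a backup file out of the running Rancher container; the container name
# and the path inside it are illustrative, not the operator's defaults.
docker cp rancher:/var/lib/rancher/rancher-backup.tar.gz .

# Or bind-mount a host directory into the container up front, so backups
# written to that path land on the host directly (example flags only).
docker run -d --name rancher --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher-backups:/var/lib/backups \
  --privileged rancher/rancher:v2.5.8
```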
Had the same issue this morning in Rancher 2.5.6: I wanted to upgrade the k3s local cluster since it was proposed in the cluster "edit" properties… and after that, the message "cluster is being upgraded" was blinking about every 10 seconds 😦
I restored from a backup (I'm using the backup/restore feature of Rancher, scheduled to run every 6 hours), and after that the "cluster upgrade" message disappeared and I was able to use Rancher again normally.
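For anyone else in the same spot, a restore with the rancher-backup operator is driven by a `Restore` resource pointing at an existing backup file; a minimal sketch, with a made-up filename:

```bash
# The backupFilename below is a placeholder; use the filename of one of your
# scheduled backups as produced by the rancher-backup operator.
kubectl apply -f - <<'EOF'
apiVersion: resources.cattle.io/v1
kind: Restore
metadata:
  name: restore-local-cluster
spec:
  backupFilename: recurring-backup-2021-03-01.tar.gz
EOF
```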