lima: k8s.yaml example does not work after upgrade to Ubuntu 22.04

Description

I discovered lima yesterday, and I was glad to find that k8s.yaml provides everything I need to run a disposable kubeadm test cluster that can successfully connect to my Synology storage through the open-source synology-csi driver (other solutions such as k3d and kind did not succeed in my trials).

However, this morning I saw that the template was upgraded from Ubuntu 20.04 to 22.04, and now etcd restarts for some unknown reason every 3–5 minutes, causing cluster services to drop out and all sorts of chaos inside the cluster.

It’s easy enough to revert to the previous commit, and I have done that; everything seems fine again. But I thought I should report this: since a lot of things were upgraded at once, it is possible that nobody is aware (or maybe this is an issue with my machine, but it seems not).

I am using an M1 MacBook, and the only changes I made to the VM config were to add:

cpus: 5
memory: 8GiB

Just from glancing at htop inside the VM before things fall apart, it does not look like we’re running out of memory, and nothing else jumps out at me as the reason for this fault.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 15 (14 by maintainers)

Most upvoted comments

Cluster seems happier now:

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE   IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-64897985d-75xz7            1/1     Running   0          15m   10.244.0.3     lima-k8s   <none>           <none>
kube-system   coredns-64897985d-xpcd9            1/1     Running   0          15m   10.244.0.2     lima-k8s   <none>           <none>
kube-system   etcd-lima-k8s                      1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-apiserver-lima-k8s            1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-controller-manager-lima-k8s   1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-flannel-ds-lz8xf              1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-proxy-t6xks                   1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-scheduler-lima-k8s            1/1     Running   0          15m   192.168.5.15   lima-k8s   <none>           <none>

It was documented in https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd

Warning: Matching the container runtime and kubelet cgroup drivers is required or otherwise the kubelet process will fail.
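In practice, with the containerd runtime the usual way to satisfy this is to enable the systemd cgroup driver; a sketch of the relevant stanza in /etc/containerd/config.toml (the rest of the file is left unchanged):

```toml
# /etc/containerd/config.toml -- only the cgroup-driver stanza shown.
# SystemdCgroup = true makes containerd delegate cgroup management to
# systemd, matching the kubelet's cgroup driver on a cgroup v2 host.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
```

After editing, containerd needs a restart (`sudo systemctl restart containerd`) for the change to take effect.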

There seems to be something more than just changing the cgroups version, though.

NAMESPACE     NAME                               READY   STATUS             RESTARTS        AGE     IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-64897985d-4rj6s            1/1     Running            2 (96s ago)     5m      10.244.0.6     lima-k8s   <none>           <none>
kube-system   coredns-64897985d-nvmxc            1/1     Running            2 (35s ago)     5m      10.244.0.7     lima-k8s   <none>           <none>
kube-system   etcd-lima-k8s                      1/1     Running            3 (2m2s ago)    5m22s   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-apiserver-lima-k8s            1/1     Running            1 (5m32s ago)   5m48s   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-controller-manager-lima-k8s   1/1     Running            3 (104s ago)    5m50s   192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-flannel-ds-stcx6              1/1     Running            0               5m      192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-proxy-klqrr                   0/1     CrashLoopBackOff   3 (24s ago)     5m      192.168.5.15   lima-k8s   <none>           <none>
kube-system   kube-scheduler-lima-k8s            1/1     Running            3 (85s ago)     5m49s   192.168.5.15   lima-k8s   <none>           <none>

But it’s a start.
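One quick way to check which cgroup setup the guest actually booted with (Ubuntu 22.04 defaults to the unified v2 hierarchy) is to look at the filesystem type mounted at /sys/fs/cgroup. The kube-proxy log command is left as a comment because the pod name from the listing above is specific to this cluster:

```shell
# Print the filesystem type at /sys/fs/cgroup:
#   cgroup2fs -> unified cgroup v2 (the Ubuntu 22.04 default)
#   tmpfs     -> legacy/hybrid cgroup v1
stat -fc %T /sys/fs/cgroup

# To dig into the CrashLoopBackOff, the previous container's logs help
# (pod name taken from the listing above; it will differ per cluster):
# kubectl -n kube-system logs kube-proxy-klqrr --previous
```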

EDIT: Forgot that we already tried this last time. And “systemd” has been the default since 1.22.
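For completeness, the kubelet side of the pairing looks like this; a fragment of the KubeletConfiguration that kubeadm writes to /var/lib/kubelet/config.yaml, shown only to illustrate the default:

```yaml
# /var/lib/kubelet/config.yaml -- relevant field only.
# kubeadm has defaulted cgroupDriver to "systemd" since Kubernetes 1.22,
# so it is usually the container runtime side that must change to match.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
```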

There were similar issues with 21.10 before.

Similar problems were seen with Fedora 35

https://github.com/afbjorklund/fedora-lima/blob/main/fedora-kubernetes.yaml