microk8s: joining a node to a 5-node cluster fails over a high-latency link

I tried to join a node from a different region, with about 100 ms latency, to my existing cluster. The API server on the new node keeps restarting.

However, if I instead join a node that sits in the same location as the 5-node cluster, it works.

Could it be that the API server's startup deadline is too short for a high-latency link? In the logs below it gives up exactly 60 seconds after starting, with "context deadline exceeded":

Feb 19 05:01:15 de1 microk8s.daemon-apiserver[13747]: W0219 05:01:15.220327   13747 authentication.go:519] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
Feb 19 05:02:15 de1 microk8s.daemon-apiserver[13747]: Error: context deadline exceeded
Feb 19 05:02:15 de1 systemd[1]: snap.microk8s.daemon-apiserver.service: Main process exited, code=exited, status=1/FAILURE
Feb 19 05:02:15 de1 systemd[1]: snap.microk8s.daemon-apiserver.service: Failed with result 'exit-code'.
Feb 19 05:02:15 de1 systemd[1]: snap.microk8s.daemon-apiserver.service: Scheduled restart job, restart counter is at 10.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 21

Most upvoted comments

Ah, you're right. It does a kubectl get no (i.e. kubectl get nodes).

You can actually forcefully remove it from dqlite with this command.

/snap/microk8s/current/bin/dqlite \
  -s file:///var/snap/microk8s/current/var/kubernetes/backend/cluster.yaml \
  -c /var/snap/microk8s/current/var/kubernetes/backend/cluster.crt \
  -k /var/snap/microk8s/current/var/kubernetes/backend/cluster.key \
  -f json k8s ".remove <node-ip-with-port-19001>"
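The address to pass to ".remove" is the Address field of the stale member in cluster.yaml on a healthy node. A minimal sketch of pulling those addresses out (the cluster.yaml contents below are a hypothetical example, not taken from this cluster):

```shell
# Hypothetical example of /var/snap/microk8s/current/var/kubernetes/backend/cluster.yaml
cat > cluster.yaml <<'EOF'
- Address: 10.0.0.1:19001
  ID: 3297041220608546238
  Role: 0
- Address: 10.0.0.2:19001
  ID: 1234567890123456789
  Role: 1
EOF

# List the dqlite member addresses; pick the stale one for ".remove <address>"
awk '/Address:/ {print $3}' cluster.yaml
```

On a real node you would point awk at the actual cluster.yaml path instead of the sample file above.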