cluster-api: Can't have 3 master nodes when following 'Quick Start' in book
I followed the instructions in https://cluster-api.sigs.k8s.io/user/quick-start.html with the Docker infrastructure provider. At the end I saw 1 master node and 3 worker nodes ready, but I should have seen 3 master nodes.
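For context, this is roughly the command the quick start has you run to generate the workload cluster manifest (a minimal sketch; the Kubernetes version is the one from my environment, and provider-specific flags such as the Docker flavor are omitted):

    # Generate a workload cluster manifest with 3 control-plane and 3 worker machines,
    # then apply it to the management cluster (values are my approximation of the v0.3 quick start).
    clusterctl config cluster capi-quickstart \
      --kubernetes-version v1.19.4 \
      --control-plane-machine-count=3 \
      --worker-machine-count=3 \
      > capi-quickstart.yaml
    kubectl apply -f capi-quickstart.yaml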
When I run 'kubectl describe kubeadmcontrolplane capi-quickstart-control-plane', I see a warning at the end:
Events:
  Type     Reason                 Age                    From                              Message
  ----     ------                 ----                   ----                              -------
  Warning  ControlPlaneUnhealthy  4m6s (x1310 over 10h)  kubeadm-control-plane-controller  Waiting for control plane to pass preflight checks to continue reconciliation: [machine capi-quickstart-control-plane-5skgf does not have APIServerPodHealthy condition, machine capi-quickstart-control-plane-5skgf does not have ControllerManagerPodHealthy condition, machine capi-quickstart-control-plane-5skgf does not have SchedulerPodHealthy condition, machine capi-quickstart-control-plane-5skgf does not have EtcdPodHealthy condition, machine capi-quickstart-control-plane-5skgf does not have EtcdMemberHealthy condition]
Is the warning the reason that only 1 master node is up?
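For reference, a minimal sketch of how the machines behind that warning can be inspected from the management cluster (object names are taken from the output above; yours will differ):

    # Check the control plane object and the machines it manages.
    kubectl get kubeadmcontrolplane capi-quickstart-control-plane
    kubectl get machines
    # The conditions named in the warning (APIServerPodHealthy, EtcdMemberHealthy, ...)
    # are reported on the Machine object.
    kubectl describe machine capi-quickstart-control-plane-5skgf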
What did you expect to happen: I expected to see 3 master nodes ready (mandatory) and the warning gone (optional).
Environment:
- cluster-api version: clusterctl version: &version.Info{Major:"0", Minor:"3", GitVersion:"v0.3.11", GitCommit:"e9cf6846b6d93dedadfcf44c00357d15f5ccba64", GitTreeState:"clean", BuildDate:"2020-11-19T18:49:17Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
- Minikube/KIND version: kind v0.7.0 go1.13.6 linux/amd64
- Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:17:17Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g. from /etc/os-release): NAME="Ubuntu" VERSION="20.04.1 LTS (Focal Fossa)"
/kind bug
/area clusterctl
About this issue
- State: closed
- Created 4 years ago
- Comments: 17 (9 by maintainers)
Same issue here: I could see only two Docker containers newly created for the workload cluster, which means there was only one master node.
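With the Docker provider each machine runs as a kind-style node container (plus a load-balancer container for the control plane), so a quick way to count masters is to list those containers. A minimal sketch, assuming the containers are named after the quick-start cluster:

    # List the containers CAPD created for the workload cluster (name prefix assumed).
    docker ps --filter name=capi-quickstart --format '{{.Names}}\t{{.Status}}'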
The weird thing was, after untainting the only master node, the workload cluster was functioning perfectly well. For example, I could access it and deploy workloads there.
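The untaint was the standard control-plane untaint; a sketch for a v1.19 cluster, where the taint is still named master:

    # Allow regular workloads to schedule on the lone control-plane node (v1.19 taint name).
    kubectl taint nodes --all node-role.kubernetes.io/master-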
But when checking the workload cluster, it was not yet ready, and I could see many errors like the ones @Insullone reported.