kubeadm: "kubeadm join" does not add worker node to the cluster
What keywords did you search in kubeadm issues before filing this one?
kubeadm join, error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: Ubuntu 16.04.3 LTS (xenial)
- OS (e.g. from /etc/os-release): NAME="Ubuntu" VERSION="16.04.3 LTS (Xenial Xerus)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 16.04.3 LTS" VERSION_ID="16.04" HOME_URL="http://www.ubuntu.com/" SUPPORT_URL="http://help.ubuntu.com/" BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/" VERSION_CODENAME=xenial UBUNTU_CODENAME=xenial
- Kernel (e.g. uname -a): Linux master 4.4.0-101-generic #124-Ubuntu SMP Fri Nov 10 18:29:59 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
- Others:
What happened?
Workers do not join the cluster
How to reproduce it (as minimally and precisely as possible)?
On Master (as root)
- sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version stable-1.8
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Install the pod network (flannel):
  - kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  - kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
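Not part of the original report, but before joining workers it is worth confirming that the control-plane pods and the flannel DaemonSet actually came up. A hedged sketch, assuming kubectl was configured as in the steps above:

```shell
#!/bin/sh
# Hedged check: list kube-system pods (API server, controller-manager,
# scheduler, kube-dns, and the flannel DaemonSet should all be Running).
# Falls back to a message when kubectl is not on PATH (e.g. not the master).
if command -v kubectl >/dev/null 2>&1; then
  kubectl get pods -n kube-system -o wide
else
  echo "kubectl not found; run this on the master"
fi
```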
On worker node:
- kubeadm join --token token_id master_ip:6443 --discovery-token-ca-cert-hash discovery_token_hash
- kubectl get nodes
outputs only the master
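For anyone reproducing this: the value passed as --discovery-token-ca-cert-hash is the SHA-256 digest of the cluster CA's public key in DER form, and it can be recomputed on the master. A hedged sketch; /etc/kubernetes/pki/ca.crt is the kubeadm default path and is made overridable here since other layouts may differ:

```shell
#!/bin/sh
# Recompute the discovery token CA cert hash from the cluster CA certificate.
# CA_CRT defaults to the kubeadm default location (an assumption).
CA_CRT="${CA_CRT:-/etc/kubernetes/pki/ca.crt}"
if [ -r "$CA_CRT" ]; then
  # Extract the public key, convert to DER, hash it, keep only the hex digest.
  openssl x509 -pubkey -in "$CA_CRT" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
else
  echo "CA certificate not readable at $CA_CRT (run on the master)" >&2
fi
# The printed hex string is used as: --discovery-token-ca-cert-hash sha256:<hash>
```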
What you expected to happen?
Expected worker to join the cluster
Anything else we need to know?
On the worker node, output from kubeadm join --token token_id master_ip:6443 --discovery-token-ca-cert-hash discovery_token_hash:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[discovery] Trying to connect to API Server "master_ip:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://master_ip:6443"
[discovery] Requesting info from "https://master_ip:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "master_ip:6443"
[discovery] Successfully established connection with API Server "master_ip:6443"
[bootstrap] Detected server version: v1.8.4
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
- Certificate signing request sent to master and response received.
- Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
- sudo systemctl status kubelet

kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since mån 2017-11-27 09:28:06 CET; 6s ago
     Docs: http://kubernetes.io/docs/
  Process: 19419 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADV
 Main PID: 19419 (code=exited, status=1/FAILURE)

nov 27 09:28:06 master systemd[1]: kubelet.service: Unit entered failed state.
nov 27 09:28:06 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
- sudo journalctl -xeu kubelet

nov 27 09:29:39 master kubelet[19616]: Flag --require-kubeconfig has been deprecated, You no longer need to use --require-kubeconfig. This will be removed in a future
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.124785   19616 feature_gate.go:156] feature gates: map[]
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.124844   19616 controller.go:114] kubelet config controller: starting controller
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.124850   19616 controller.go:118] kubelet config controller: validating combination of defaults and flags
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.137750   19616 client.go:75] Connecting to docker on unix:///var/run/docker.sock
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.137779   19616 client.go:95] Start docker client with request timeout=2m0s
nov 27 09:29:39 master kubelet[19616]: W1127 09:29:39.138728   19616 cni.go:196] Unable to update cni config: No networks found in /etc/cni/net.d
nov 27 09:29:39 master kubelet[19616]: I1127 09:29:39.143972   19616 feature_gate.go:156] feature gates: map[]
nov 27 09:29:39 master kubelet[19616]: W1127 09:29:39.144117   19616 server.go:276] --require-kubeconfig is deprecated. Set --kubeconfig without using --require-kubec
nov 27 09:29:39 master kubelet[19616]: W1127 09:29:39.144134   19616 server.go:289] --cloud-provider=auto-detect is deprecated. The desired cloud provider should be s
nov 27 09:29:39 master kubelet[19616]: error: failed to run Kubelet: invalid kubeconfig: stat /etc/kubernetes/kubelet.conf: no such file or directory
nov 27 09:29:39 master systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
nov 27 09:29:39 master systemd[1]: kubelet.service: Unit entered failed state.
nov 27 09:29:39 master systemd[1]: kubelet.service: Failed with result 'exit-code'.
- Further, I checked the /etc/kubernetes directory: only bootstrap-kubelet.conf exists, not kubelet.conf.
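A note on that symptom, beyond what the original report states: during TLS bootstrapping the kubelet is meant to start with --bootstrap-kubeconfig pointing at bootstrap-kubelet.conf, request a client certificate, and then write kubelet.conf itself. If the unit lacks that flag, the kubelet looks for /etc/kubernetes/kubelet.conf directly and fails exactly as in the log above. A hedged sketch for checking the running unit (the drop-in layout assumed here is the kubeadm default):

```shell
#!/bin/sh
# Hedged check: does the kubelet systemd unit (including drop-ins) pass
# --bootstrap-kubeconfig? "systemctl cat" prints the merged unit definition.
systemctl cat kubelet 2>/dev/null | grep -n -- '--bootstrap-kubeconfig' \
  || echo "no --bootstrap-kubeconfig flag found in the kubelet unit"
```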
About this issue
- Original URL
- State: closed
- Created 7 years ago
- Reactions: 3
- Comments: 29 (2 by maintainers)
Had this issue; on your joining node, ensure that the cgroup driver used by the kubelet is the same as the one used by Docker.
From the joining node:
From the master node:
Hope this helps, DOC
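The comparison the comment above describes can be sketched as follows. A hedged, non-destructive check; the drop-in directory and the 10-kubeadm.conf file name are kubeadm 1.8-era defaults and may differ on other versions:

```shell
#!/bin/sh
# Hedged check: print the cgroup driver Docker reports and any explicit
# --cgroup-driver flag in the kubelet drop-ins; the two must agree.
docker info 2>/dev/null | grep -i 'cgroup driver' \
  || echo "docker info unavailable on this host"
grep -r -- '--cgroup-driver' /etc/systemd/system/kubelet.service.d/ 2>/dev/null \
  || echo "no explicit --cgroup-driver flag in kubelet drop-ins"
# If they differ (e.g. Docker reports cgroupfs but the kubelet is started
# with systemd), edit 10-kubeadm.conf to match, then:
#   systemctl daemon-reload && systemctl restart kubelet
```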
I had the same issue; my master and worker node probably had the same hostname. After setting different hostnames, I resolved my issue.
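This failure mode follows from the kubelet registering each node under its hostname: two nodes sharing a name fight over one Node object, so only one appears in kubectl get nodes. A non-destructive check (the rename command is left commented; worker-1 is an example name, not from the original report):

```shell
#!/bin/sh
# Print the hostname the kubelet would register under; run this on each
# node and confirm the names differ.
uname -n
# To rename a clashing node (requires root; example name):
#   hostnamectl set-hostname worker-1
```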
@bart0sh I met the same problem again. The output of journalctl -u kubelet is below:

@bart0sh Thank you for posting the note about --bootstrap-kubeconfig, that was a problem I was having and it took me most of the day to arrive here 😃 A pre-flight check (or adding that to the default kubeadm drop-in for kubelet, or both) might be a good idea.
FTR - using kubeadm 1.9.1 on Fedora 27.