kubeadm: JWS token not being created in cluster-info ConfigMap
Versions
kubeadm version (use kubeadm version): 1.7.0, commit d3ada0119e776222f11ec7945e6d860061339aad
Environment:
- Kubernetes version (use `kubectl version`): 1.7.0, commit d3ada0119e776222f11ec7945e6d860061339aad
- Cloud provider or hardware configuration: Vagrant environment being configured by https://github.com/erhudy/kubeadm-vagrant
- OS (e.g. from /etc/os-release): Xenial 16.04.2
- Kernel (e.g. `uname -a`): 4.4.0-81-generic
- Others: N/A
What happened?
The current version of kubeadm does not appear to be inserting the JWS token into the cluster-info ConfigMap. I first tried providing the token I wanted it to use (the mode used by the Vagrantfile referenced above); when that failed, I reset kubeadm and re-ran init, letting it generate the token itself. Both approaches failed. The consequence is that joining nodes to the master is not possible unless the JWS token is created manually and inserted into the cluster-info ConfigMap.
Rolling back to 1.6.6 (in the Vagrantfile, changing the package installation line to `apt-get install -y docker.io kubelet=1.6.6-00 kubeadm=1.6.6-00 kubectl=1.6.6-00 kubernetes-cni`) makes everything function as expected.
When I compared the ConfigMaps generated by 1.6.6 and 1.7.0, the JWS key is indeed missing from 1.7.0. In 1.6.6, under the top-level `data` key, there is an entry whose name begins with `jws-kubeconfig-` and whose value is a JWS token. No such entry exists when the cluster is bootstrapped by kubeadm 1.7.0.
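To illustrate, this is how the missing entry can be checked for on the master (using kubeadm's default admin kubeconfig path):

```
# Dump the cluster-info ConfigMap from the kube-public namespace. On a healthy
# cluster, its data section contains a jws-kubeconfig-<token-id> entry next to
# the kubeconfig entry; on an affected 1.7.0 cluster it does not.
kubectl --kubeconfig /etc/kubernetes/admin.conf \
  -n kube-public get configmap cluster-info -o yaml
```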
What you expected to happen?
Joining workers to the master should be possible in 1.7.0 without manually editing the cluster-info ConfigMap.
How to reproduce it (as minimally and precisely as possible)?
Run the Vagrantfile from https://github.com/erhudy/kubeadm-vagrant with `vagrant up`. When it attempts to join the first worker, kubeadm will fail with the error message "there is no JWS signed token in the cluster-info ConfigMap".
Anything else we need to know?
No.
About this issue
- State: closed
- Created 7 years ago
- Reactions: 1
- Comments: 38 (15 by maintainers)
Commits related to this issue
- Use Kubernetes 1.6.6 in the Vagrant deployment Make sure, we don't use Kubernetes 1.7, until [1] is fixed or we know a workaround for it. [1] https://github.com/kubernetes/kubeadm/issues/335 — committed to rmohr/kubevirt by rmohr 7 years ago
- Merge pull request #48480 from liggitt/namespace-reconcile Automatic merge from submit-queue (batch tested with PRs 48480, 48353) Ensure namespace exists as part of RBAC reconciliation reconciliati... — committed to deads2k/kubernetes by deleted user 7 years ago
- vagrant: Work around controller race bug until Kubernetes 1.7.1 is released When nodes try to join a master, they can fail because cluster-info is not updated with the expected tokens. To work around... — committed to rmohr/kubevirt by rmohr 7 years ago
- kubicle: Install k8s version 1.6.7 There is a race condition in k8s 1.7.0 that prevents it from working with kubicle. Sometimes the worker nodes are unable to join the cluster. The k8s bug is here:... — committed to markdryan/ciao by deleted user 7 years ago
- Workaround kubeadm 1.7.0 race condition See https://github.com/kubernetes/kubeadm/issues/335 — committed to infraly/k8s-on-openstack by ctrlaltdel 7 years ago
- overcame the kubeadm join race condition issue with serial: 1 role https://github.com/kubernetes/kubeadm/issues/335 https://github.com/kubevirt/kubevirt/pull/292 — committed to mwalzer/k8s-galaxy-ansible by mwalzer 7 years ago
- Fix safe upgrade Even though there is kubeadm_token_ttl=0 which means that kubeadm token never expires, it is not present in `kubeadm token list` after cluster is provisioned (at least after it is ru... — committed to mlushpenko/kubespray by mlushpenko 6 years ago
Fixed by https://github.com/kubernetes/kubernetes/issues/48480
@luxas same here. In case you are using kubespray, do the following to check whether the problem is exactly that:
On the master node, run `kubeadm token create` and copy the generated token.
On the worker node, edit `/etc/kubernetes/kubeadm-client.conf` and put your new token into the token field.
Then run `kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all` and it shall join the cluster.
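A minimal sketch of that sequence, assuming kubespray's file layout (the sed edit is only illustrative; editing the token field by hand works just as well):

```
# On the master: create a fresh bootstrap token and note its value.
kubeadm token create

# On the worker: put the new token into the discovery config, then join.
# <NEW_TOKEN> is a placeholder for the token created above.
sudo sed -i 's/^\(\s*token:\s*\).*/\1<NEW_TOKEN>/' /etc/kubernetes/kubeadm-client.conf
sudo kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all
```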
@vglisin That is because the token has expired. We have informed about this policy already in the kubeadm v1.7 CLI output, and in the release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#behavioral-changes
Note that the issue you’re describing is vastly different from the topic of this issue. That’s why I asked you to open new issues instead of commenting on old, resolved ones.
Also, please keep in mind that this is open source. If you find things that are sub-optimal, no one is going to stop you from contributing a good change.
Same problem with 1.8. Is there any possible fix for: "failed to connect to API Server "XXXX:6443": there is no JWS signed token in the cluster-info ConfigMap. This token id "fb0a7d" is invalid for this cluster, can't connect"? That same token was OK last week. Any possible, logical explanation or workaround? Next year you will have this working 100%?
I0114 15:02:45.146194    5300 round_trippers.go:445] GET https://10.128.0.57:80/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 7 milliseconds
I0114 15:02:45.146496    5300 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "w30hqq", will try again
I0114 15:02:51.009632    5300 round_trippers.go:445] GET https://10.128.0.57:80/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 6 milliseconds
I0114 15:02:51.009999    5300 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "w30hqq", will try again
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
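If you hit that retry loop, a quick way to check from the control-plane side whether the token is still known and signed (illustrative commands; the token ID is the one shown in the log above):

```
# Does the control plane still know this bootstrap token, and is it unexpired?
kubeadm token list

# Has the signer controller written a signature for it into cluster-info yet?
kubectl -n kube-public get configmap cluster-info -o yaml | grep jws-kubeconfig

# If the token is gone or expired, mint a new one along with a join command.
kubeadm token create --print-join-command
```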
@alexpekurovsky Yes. Meanwhile you can just `kubectl apply` the RoleBinding and Role, as sketched below.
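For reference, a sketch of what that could look like, assuming the default names used by the bootstrap-signer controller; the exact rules differ between releases, so verify against your version's bootstrap RBAC policy before applying (on 1.7.x clusters use `rbac.authorization.k8s.io/v1beta1`):

```
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: system:controller:bootstrap-signer
  namespace: kube-public
rules:
# Let the signer watch ConfigMaps in kube-public and update cluster-info.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["cluster-info"]
  verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: system:controller:bootstrap-signer
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: system:controller:bootstrap-signer
subjects:
- kind: ServiceAccount
  name: bootstrap-signer
  namespace: kube-system
EOF
```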