kubeadm: JWS token not being created in cluster-info ConfigMap

Versions

kubeadm version (use kubeadm version): 1.7.0, commit d3ada0119e776222f11ec7945e6d860061339aad

Environment:

  • Kubernetes version (use kubectl version): 1.7.0, commit d3ada0119e776222f11ec7945e6d860061339aad
  • Cloud provider or hardware configuration: Vagrant environment being configured by https://github.com/erhudy/kubeadm-vagrant
  • OS (e.g. from /etc/os-release): Xenial 16.04.2
  • Kernel (e.g. uname -a): 4.4.0-81-generic
  • Others: N/A

What happened?

The current version of kubeadm does not appear to be inserting the JWS token into the cluster-info ConfigMap. I tried providing it a token to use (the mode used by the Vagrantfile referenced above) and, when that failed, resetting kubeadm and re-running init while allowing it to generate the token itself. Both modes failed. The consequence is that joining nodes to the master is not possible unless the JWS token is manually created and inserted into the cluster-info ConfigMap.

Rolling back to 1.6.6 (in the Vagrantfile, modifying the package installation line to apt-get install -y docker.io kubelet=1.6.6-00 kubeadm=1.6.6-00 kubectl=1.6.6-00 kubernetes-cni) causes everything to function as expected.

When I compared the ConfigMaps generated by 1.6.6 versus 1.7.0, the JWS key is indeed missing from 1.7.0. In 1.6.6, under the top-level data key, there was a key beginning with jws-kubeconfig-, with its value being a JWS token. No such key exists when the cluster is bootstrapped by kubeadm 1.7.0.
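
To see the difference yourself, you can dump the ConfigMap and grep for the signature key (a quick sketch; the key suffix is the token ID shown by kubeadm token list):

    kubectl -n kube-public get configmap cluster-info -o yaml | grep jws-kubeconfig-

On a 1.6.6-bootstrapped cluster this prints a jws-kubeconfig-<token-id> entry; on 1.7.0 it prints nothing.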

What you expected to happen?

Joining workers to the master should be possible in 1.7.0 without manually editing the cluster-info ConfigMap.

How to reproduce it (as minimally and precisely as possible)?

Run the Vagrantfile from https://github.com/erhudy/kubeadm-vagrant with vagrant up. When it attempts to join the first worker, kubeadm will fail with the error message "there is no JWS signed token in the cluster-info ConfigMap".

Anything else we need to know?

No.

About this issue

  • State: closed
  • Created 7 years ago
  • Reactions: 1
  • Comments: 38 (15 by maintainers)

Most upvoted comments

@luxas same here. In case you are using kubespray, do the following to check whether the problem is exactly that:

On the master node, run this command:

kubeadm token create and copy the generated token.

On the worker node, edit /etc/kubernetes/kubeadm-client.conf and put your new token into the token field.

Then run kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all and it should join the cluster.
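
For reference, the whole sequence looks roughly like this (a sketch assuming kubespray's file layout from the comment above; the exact layout of kubeadm-client.conf depends on your kubespray version):

    # On the master: mint a fresh bootstrap token
    kubeadm token create

    # On the worker: put the new token into the discovery config's token field
    # (edit by hand; field names vary with the kubespray version)
    vi /etc/kubernetes/kubeadm-client.conf

    # Retry the join with the updated config
    kubeadm join --config /etc/kubernetes/kubeadm-client.conf --ignore-preflight-errors=all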

@vglisin That is because the token has expired. We have already communicated this policy in the kubeadm v1.7 CLI output and in the release notes: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.8.md#behavioral-changes

The default Bootstrap Token created with kubeadm init v1.8 expires and is deleted after 24 hours by default to limit the exposure of the valuable credential. You can create a new Bootstrap Token with kubeadm token create or make the default token permanently valid by specifying --token-ttl 0 to kubeadm init. The default token can later be deleted with kubeadm token delete.
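
The corresponding commands, as described in the note above, are:

    # Make the default token permanently valid at cluster creation time
    # (trade-off: the credential never expires)
    kubeadm init --token-ttl 0

    # Or create a fresh token on a running cluster, and list existing ones
    kubeadm token create
    kubeadm token list

    # Delete a token that is no longer needed
    kubeadm token delete <token-id>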

Note that the issue you’re describing is vastly different from the topic of this issue. That’s why I asked you to open new issues instead of commenting on old, resolved ones.

Also, please keep in mind that this is open source. If you find things that are sub-optimal, no one is going to stop you from contributing a good change.

Same problem with 1.8. Any possible repair for: "failed to connect to API Server "XXXX:6443": there is no JWS signed token in the cluster-info ConfigMap. This token id "fb0a7d" is invalid for this cluster, can't connect"? That same token was OK last week. Any possible, logical explanation or workaround? Next year you will have this working 100%?

    ...The cluster-info ConfigMap does not yet contain a JWS signature for token ID "w30hqq", will try again
    I0114 15:02:45.146194    5300 round_trippers.go:445] GET https://10.128.0.57:80/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 7 milliseconds
    I0114 15:02:45.146496    5300 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "w30hqq", will try again
    I0114 15:02:51.009632    5300 round_trippers.go:445] GET https://10.128.0.57:80/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s 200 OK in 6 milliseconds
    I0114 15:02:51.009999    5300 token.go:221] [discovery] The cluster-info ConfigMap does not yet contain a JWS signature for token ID "w30hqq", will try again

kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:25:59Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

@alexpekurovsky Yes. Meanwhile, you can just kubectl apply the Role and RoleBinding.
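
A minimal sketch of those two objects, assuming the names kubeadm itself uses for the cluster-info access rules in kube-public (verify against a working cluster before applying):

    kubectl apply -f - <<EOF
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: kubeadm:bootstrap-signer-clusterinfo   # name assumed from kubeadm defaults
      namespace: kube-public
    rules:
    - apiGroups: [""]
      resources: ["configmaps"]
      resourceNames: ["cluster-info"]
      verbs: ["get"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: kubeadm:bootstrap-signer-clusterinfo
      namespace: kube-public
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubeadm:bootstrap-signer-clusterinfo
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: User
      name: system:anonymous
    EOF

This grants unauthenticated clients read access to the cluster-info ConfigMap, which is what kubeadm join's token discovery relies on.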