kubeadm: Token not being added to configmap after kubeadm token create
What keywords did you search in kubeadm issues before filing this one?
Through Google, I found ticket #668. This issue is somewhat similar to #668, though I believe there may be a different root cause; since I do not know the root cause of that issue, I am opening this narrower one. This also seems similar to #1988, though all the machines I am using are persistent.
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version):
> kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
> kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: VM (presumably a standard x86 host).
- OS (e.g. from /etc/os-release):
> cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
- Kernel (e.g. uname -a):
> uname -a
Linux mip-bd-vm218.mip.storage.hpecorp.net 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Others: I don’t think this applies in my case?
What happened?
When running kubeadm token create while setting up Kubernetes with mTLS, the token is added to the kubeadm token list output but is not added to the cluster-info ConfigMap.
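For context on what should happen: kubeadm token create stores the token as a bootstrap-token-<token-id> Secret in kube-system, and the bootstrap-signer controller inside kube-controller-manager is then supposed to re-sign the cluster-info ConfigMap, adding a jws-kubeconfig-<token-id> key alongside kubeconfig. A minimal check sketch (the jsonpath expression is just one way to list the data keys):
# On a healthy cluster, each signing-capable token should produce an extra
# jws-kubeconfig-<token-id> key next to "kubeconfig" in cluster-info:
kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data}'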
I attempted to follow these directions, but something goes wrong and this happens:
[root@mip-bd-vm218 ~]> kubeadm token create --print-join-command
W1022 16:33:27.664081 5442 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join mip-bd-vm54.mip.storage.hpecorp.net:10007 --token hm7h66.q9z1tczs5r9wmamu --discovery-token-ca-cert-hash sha256:$hash
# Ok, this looks normal - we got a token and a hash.
[root@mip-bd-vm218 ~]> kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
...
bhkfha.h07irt6fcorvdi1b   1h    2020-10-22T17:45:18-07:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>
hm7h66.q9z1tczs5r9wmamu   23h   2020-10-23T16:33:27-07:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
...
# Ok, this also looks normal - there's a list of tokens and the token we received from the previous request is in it.
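One sanity check worth running here (a sketch; the secret name is derived from the token ID above): verify the token's backing Secret exists and has the signing usage bit set, since the bootstrap-signer only signs for tokens with usage-bootstrap-signing set to "true".
# The token is backed by a secret named bootstrap-token-<token-id> in kube-system;
# its data should include usage-bootstrap-signing: "true" (base64-encoded).
kubectl -n kube-system get secret bootstrap-token-hm7h66 -o yaml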
[root@mip-bd-vm218 ~]> kubectl describe cm cluster-info -n kube-public
Name: cluster-info
Namespace: kube-public
Labels: <none>
Annotations: <none>
Data
====
kubeconfig:
----
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://mip-bd-vm54.mip.storage.hpecorp.net:10007
  name: ""
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Events: <none>
# That's a little weird - shouldn't there be at least the token that we just created in this config map?
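One way to see whether the signer ever acts on the ConfigMap at all (a sketch using kubectl's watch mode):
# Watch cluster-info for updates; on a healthy cluster the bootstrap-signer
# should add a jws-kubeconfig-hm7h66 key within seconds of token creation:
kubectl -n kube-public get configmap cluster-info -o yaml --watch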
... Time passes (~20 minutes?) ...
[root@mip-bd-vm218 ~]> curl -k -v -XGET -H "User-Agent: kubeadm/v1.18.6 (linux/amd64) kubernetes/dff82dc" -H "Accept: application/json, */*" 'https://localhost:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s'
...
Dull header info
...
# The json below has been human-formatted for a better reading experience.
{
  "apiVersion": "v1",
  "data": {
    "kubeconfig": "apiVersion: v1\nclusters:\n- cluster:\n certificate-authority-data: --- Dull cert data ---\n server: https://mip-bd-vm54.mip.storage.hpecorp.net:10007\n name: \"\"\ncontexts: null\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers: null\n"
  },
  "kind": "ConfigMap",
  "metadata": {
    "creationTimestamp": "2020-10-22T22:45:18Z",
    "managedFields": [
      {
        "apiVersion": "v1",
        "fieldsType": "FieldsV1",
        "fieldsV1": {
          "f:data": {
            ".": {},
            "f:kubeconfig": {}
          }
        },
        "manager": "kubeadm",
        "operation": "Update",
        "time": "2020-10-22T22:45:18Z"
      }
    ],
    "name": "cluster-info",
    "namespace": "kube-public",
    "resourceVersion": "180",
    "selfLink": "/api/v1/namespaces/kube-public/configmaps/cluster-info",
    "uid": "5bed5b3d-5f9b-4f75-a25d-c8860919f9df"
  }
}
* Connection #0 to host localhost left intact
# That's really weird - I gave the system more than enough time to quiesce, so I would think there should be a new key in this configmap by now.
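One thing worth ruling out: bootstrapsigner is an opt-in controller, i.e. it is not part of kube-controller-manager's default "*" controller set, so it runs only if the --controllers flag names it explicitly. kubeadm's stock manifest does this, but a manual edit could drop it. A quick check, assuming the standard kubeadm static pod layout:
# The flag must list bootstrapsigner explicitly; kubeadm's default is
# --controllers=*,bootstrapsigner,tokencleaner
grep -- '--controllers' /etc/kubernetes/manifests/kube-controller-manager.yaml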
What you expected to happen?
I expected the token to be added to the cluster-info ConfigMap in a timely fashion.
How to reproduce it (as minimally and precisely as possible)?
This… is a good question. This may have something to do with the certificate signers that I set up (I'll add them below), but I'm also curious whether there are other configurations that could precipitate this behavior. The other things I'm changing from a known-good config are (a spot-check sketch follows this list):
- Adding the controllerManager's cluster-signing-cert-file and cluster-signing-key-file startup params and setting them to the K8s root CA cert and key.
- Adding a client-ca-file to the apiServer's startup params and setting it to the K8s root CA.
- Adding enable-bootstrap-token-auth to the apiServer's startup params and setting it to "true".
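A hypothetical spot-check for those flags, assuming the standard kubeadm static pod manifests under /etc/kubernetes/manifests:
# Confirm the modified flags actually landed in the running manifests:
grep -E 'cluster-signing-(cert|key)-file|client-ca-file|enable-bootstrap-token-auth' \
    /etc/kubernetes/manifests/kube-apiserver.yaml \
    /etc/kubernetes/manifests/kube-controller-manager.yaml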
YAML files as specified by the TLS bootstrap page:
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
---
# Approve all CSRs for the group "system:bootstrappers"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# Approve renewal CSRs for the group "system:nodes"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# From here down is a hack endorsed by this github comment:
# https://github.com/kubernetes/kubeadm/issues/668#issuecomment-368708398
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:controller:bootstrap-signer
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:controller:bootstrap-signer
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:controller:bootstrap-signer
subjects:
- kind: ServiceAccount
  name: bootstrap-signer
  namespace: kube-system
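After applying these (bootstrap-rbac.yaml is a hypothetical file name), the grant can be verified by impersonating the controller's service account, which is the identity the signer uses when kube-controller-manager runs with --use-service-account-credentials:
# Should print "yes" if the RBAC above grants the signer write access:
kubectl apply -f bootstrap-rbac.yaml
kubectl auth can-i update configmaps/cluster-info -n kube-public \
    --as=system:serviceaccount:kube-system:bootstrap-signer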
Anything else we need to know?
I'm sure there is a lot of other stuff I'm doing wrong here, but these are the bits that I think are relevant to the problem at hand. I look forward to hearing your responses; let me know if more information is needed to get the ball rolling.
Hello @distortedsignal, let's close this until we confirm a bug in kubeadm or core k8s. In any case, please feel free to report the root cause of the problem if you find it.
Thanks
Ok, so the API server logs (with -v=5) looked like this during/after the token creation attempt:
[Logs omitted]
I don't see any obvious errors/warnings, so I guess that points to the controller manager? I'll see what I can do about getting those logs.
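I'm planning to pull them with something like the sketch below (assuming kubeadm's usual component=kube-controller-manager label on the static pod):
# Grep the controller-manager logs for bootstrap-signer activity:
kubectl -n kube-system logs -l component=kube-controller-manager --tail=-1 | grep -iE 'bootstrap|signer'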