kubeadm: Token not being added to configmap after kubeadm token create

What keywords did you search in kubeadm issues before filing this one?

Through Google, I found issue #668. This issue is somewhat similar to #668, though I believe there may be a different root cause. Since I do not know the root cause of that issue, I am opening a narrower one. This also seems similar to #1988, though all the machines I am using are persistent.

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

Versions

kubeadm version (use kubeadm version):

> kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
> kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:51:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration: VM; presumably a standard x86 host.
  • OS (e.g. from /etc/os-release):
> cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • Kernel (e.g. uname -a):
> uname -a
Linux mip-bd-vm218.mip.storage.hpecorp.net 3.10.0-1127.19.1.el7.x86_64 #1 SMP Tue Aug 25 17:23:54 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • Others: I don’t think this applies in my case.

What happened?

When running kubeadm token create while setting up Kubernetes with mTLS, the token shows up in kubeadm token list but is never added to the cluster-info ConfigMap.
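For context on what “added” should look like: as far as I can tell, kubeadm token create writes a bootstrap token secret, and the BootstrapSigner controller in kube-controller-manager is then supposed to sign the kubeconfig in the cluster-info ConfigMap with that token, adding a jws-kubeconfig-<token-id> entry. A rough sketch of what I believe a healthy ConfigMap looks like (using the token ID hm7h66 from the session below):

> kubectl get configmap cluster-info -n kube-public -o yaml
# Expected shape of .data on a working cluster (hedged - reconstructed from the docs):
#   data:
#     jws-kubeconfig-hm7h66: eyJhbGciOiJIUzI1NiIsImtpZCI6...   # a JWS over the kubeconfig, keyed by token ID
#     kubeconfig: |
#       apiVersion: v1
#       ...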

I attempted to follow these directions (the TLS bootstrapping documentation referenced below), but something went wrong:

[root@mip-bd-vm218 ~]> kubeadm token create --print-join-command
W1022 16:33:27.664081    5442 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join mip-bd-vm54.mip.storage.hpecorp.net:10007 --token hm7h66.q9z1tczs5r9wmamu     --discovery-token-ca-cert-hash sha256:$hash
# Ok, this looks normal - we got a token and a hash.
[root@mip-bd-vm218 ~]> kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
...
bhkfha.h07irt6fcorvdi1b   1h          2020-10-22T17:45:18-07:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret        <none>
hm7h66.q9z1tczs5r9wmamu   23h         2020-10-23T16:33:27-07:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
...
# Ok, this also looks normal - there's a list of tokens and the token we received from the previous request is in it.
[root@mip-bd-vm218 ~]> kubectl describe cm cluster-info -n kube-public
Name:         cluster-info
Namespace:    kube-public
Labels:       <none>
Annotations:  <none>

Data
====
kubeconfig:
----
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://mip-bd-vm54.mip.storage.hpecorp.net:10007
  name: ""
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

Events:  <none>
# That's a little weird - shouldn't there be at least the token that we just created in this config map?
... Time passes (~20 minutes?) ...
[root@mip-bd-vm218 ~]> curl -k -v -XGET  -H "User-Agent: kubeadm/v1.18.6 (linux/amd64) kubernetes/dff82dc" -H "Accept: application/json, */*" 'https://localhost:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s'
...
Dull header info
...
# The json below has been human-formatted for a better reading experience.
{
    "apiVersion": "v1",
    "data": {
        "kubeconfig": "apiVersion: v1\nclusters:\n- cluster:\n    certificate-authority-data: --- Dull cert data ---\n    server: https://mip-bd-vm54.mip.storage.hpecorp.net:10007\n  name: \"\"\ncontexts: null\ncurrent-context: \"\"\nkind: Config\npreferences: {}\nusers: null\n"
    },
    "kind": "ConfigMap",
    "metadata": {
        "creationTimestamp": "2020-10-22T22:45:18Z",
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:data": {
                        ".": {},
                        "f:kubeconfig": {}
                    }
                },
                "manager": "kubeadm",
                "operation": "Update",
                "time": "2020-10-22T22:45:18Z"
            }
        ],
        "name": "cluster-info",
        "namespace": "kube-public",
        "resourceVersion": "180",
        "selfLink": "/api/v1/namespaces/kube-public/configmaps/cluster-info",
        "uid": "5bed5b3d-5f9b-4f75-a25d-c8860919f9df"
    }
}
* Connection #0 to host localhost left intact
# That's really weird - I gave the system more than enough time to quiesce, so I would think there should be a new key in this configmap by now.
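A check that might help narrow this down (it assumes kubeadm’s default secret-naming convention):

> kubectl get secret bootstrap-token-hm7h66 -n kube-system -o yaml
# kubeadm stores each token as a secret named bootstrap-token-<token-id> in kube-system.
# For the BootstrapSigner to pick the token up, .data.usage-bootstrap-signing must decode
# to "true" (base64: dHJ1ZQ==). If the secret exists and is marked for signing but the
# ConfigMap never changes, that points at the controller-manager side.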

What you expected to happen?

I expected the token to be added to the cluster-info ConfigMap in a timely fashion.

How to reproduce it (as minimally and precisely as possible)?

This… is a good question. It may have something to do with the certificate signers I set up (added below), but I’m also curious whether other configurations could precipitate this behavior. The other things I’m changing from a known-good config are (a quick flag check follows the list):

  • Adding the controllerManager’s cluster-signing-cert-file and cluster-signing-key-file startup params and setting them to the K8s root CA cert and key.
  • Adding a client-ca-file to the apiServer’s startup params and setting it to the K8s root CA.
  • Adding enable-bootstrap-token-auth to the apiServer’s startup params and setting it to "true".
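A quick way to confirm those flags actually landed in the static pod manifests kubeadm generates (paths assume kubeadm’s default manifest directory; adjust if yours differs):

> grep -E 'cluster-signing-(cert|key)-file' /etc/kubernetes/manifests/kube-controller-manager.yaml
> grep -E 'client-ca-file|enable-bootstrap-token-auth' /etc/kubernetes/manifests/kube-apiserver.yaml
# Each grep should print the corresponding --flag=value lines; a missing line means the
# change never made it into the running component.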
YAML files as specified by the TLS bootstrap page:
# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
---
# Approve all CSRs for the group "system:bootstrappers"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
---
# Approve renewal CSRs for the group "system:nodes"
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
---
# From here down is a hack endorsed by this github comment:
# https://github.com/kubernetes/kubeadm/issues/668#issuecomment-368708398
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:controller:bootstrap-signer
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - update
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:controller:bootstrap-signer
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:controller:bootstrap-signer
subjects:
- kind: ServiceAccount
  name: bootstrap-signer
  namespace: kube-system
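One sanity check worth running against the hack above (a sketch; it assumes kube-controller-manager runs with --use-service-account-credentials, so the signer acts as the bootstrap-signer service account rather than as the controller manager’s shared identity):

> kubectl auth can-i update configmaps/cluster-info -n kube-public \
    --as=system:serviceaccount:kube-system:bootstrap-signer
# "yes" means the RBAC above is wired up; "no" would explain why the
# jws-kubeconfig-* key never appears in the ConfigMap.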

Anything else we need to know?

I’m sure there is a lot of other stuff I’m doing wrong here, but these are the bits that I think are important to the problem at hand. I look forward to hearing your responses. Let me know if there is more information necessary to get the ball rolling.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 23 (9 by maintainers)

Most upvoted comments

Hello @distortedsignal, let’s close this until we can confirm a bug in kubeadm or core k8s. In any case, please feel free to report back with the root cause if you find it.

Thanks

Ok, so the API server logs (with -v=5) looked like this during/after the token creation attempt:

Logs
I1026 17:45:24.105755       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/secrets/bootstrap-token-ef8bh4" satisfied by nonGoRestful
I1026 17:45:24.105783       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/secrets/bootstrap-token-ef8bh4" satisfied by prefix /api/
I1026 17:45:24.105793       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/secrets/bootstrap-token-ef8bh4" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.107370       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/secrets/bootstrap-token-ef8bh4?timeout=10s" latency=1.989409ms resp=404 UserAgent="kubeadm/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58274":
I1026 17:45:24.110140       1 handler.go:153] kube-aggregator: POST "/api/v1/namespaces/kube-system/secrets" satisfied by nonGoRestful
I1026 17:45:24.110161       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/secrets" satisfied by prefix /api/
I1026 17:45:24.110172       1 handler.go:143] kube-apiserver: POST "/api/v1/namespaces/kube-system/secrets" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.111400       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/resourcequotas" satisfied by nonGoRestful
I1026 17:45:24.111424       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/resourcequotas" satisfied by prefix /api/
I1026 17:45:24.111439       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/resourcequotas" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.112773       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/resourcequotas" latency=1.520484ms resp=200 UserAgent="kube-apiserver/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="[::1]:35816":
I1026 17:45:24.114915       1 httplog.go:90] verb="POST" URI="/api/v1/namespaces/kube-system/secrets?timeout=10s" latency=5.154232ms resp=201 UserAgent="kubeadm/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58274":
I1026 17:45:24.188151       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by nonGoRestful
I1026 17:45:24.188168       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by prefix /api/
I1026 17:45:24.188176       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.189272       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public/configmaps/cluster-info" latency=1.345109ms resp=200 UserAgent="kubectl/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58280":
I1026 17:45:24.190847       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by nonGoRestful
I1026 17:45:24.190859       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by prefix /api/
I1026 17:45:24.190867       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-public/configmaps/cluster-info" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.191989       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public/configmaps/cluster-info" latency=1.324629ms resp=200 UserAgent="kubectl/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58280":
I1026 17:45:24.193406       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-public/events" satisfied by nonGoRestful
I1026 17:45:24.193427       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-public/events" satisfied by prefix /api/
I1026 17:45:24.193435       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-public/events" satisfied by gorestful with webservice /api/v1
I1026 17:45:24.194703       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-public/events?fieldSelector=involvedObject.name%3Dcluster-info%2CinvolvedObject.namespace%3Dkube-public%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3Db90c2fa0-8f7c-4731-9181-a5e09d73f5c5" latency=1.520955ms resp=200 UserAgent="kubectl/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58280":
I1026 17:45:24.385775       1 handler.go:153] apiextensions-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:24.385825       1 pathrecorder.go:240] apiextensions-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:24.801632       1 handler.go:153] kube-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:24.801684       1 pathrecorder.go:240] kube-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:25.195449       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:25.195490       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by prefix /api/
I1026 17:45:25.195509       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by gorestful with webservice /api/v1
I1026 17:45:25.197629       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s" latency=2.775583ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:25.199796       1 handler.go:153] kube-aggregator: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:25.199838       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:25.199858       1 handler.go:143] kube-apiserver: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:25.201721       1 httplog.go:90] verb="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=2.462251ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:25.203859       1 handler.go:153] kube-aggregator: PUT "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:25.203897       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by prefix /api/
I1026 17:45:25.203914       1 handler.go:143] kube-apiserver: PUT "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by gorestful with webservice /api/v1
I1026 17:45:25.207015       1 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s" latency=3.689454ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:25.208420       1 handler.go:153] kube-aggregator: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:25.208436       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:25.208445       1 handler.go:143] kube-apiserver: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:25.209566       1 httplog.go:90] verb="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=1.358647ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:25.210927       1 handler.go:153] kube-aggregator: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:25.210940       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:25.210949       1 handler.go:143] kube-apiserver: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:25.213085       1 httplog.go:90] verb="PUT" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=2.366619ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:25.386327       1 handler.go:153] apiextensions-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:25.386373       1 pathrecorder.go:240] apiextensions-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:25.462090       1 httplog.go:90] verb="GET" URI="/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=309631&timeout=7m48s&timeoutSeconds=468&watch=true" latency=7m48.001188156s resp=0 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/scheduler" srcIP="16.143.20.151:58386":
I1026 17:45:25.464750       1 handler.go:153] kube-aggregator: GET "/api/v1/nodes" satisfied by nonGoRestful
I1026 17:45:25.464789       1 pathrecorder.go:247] kube-aggregator: "/api/v1/nodes" satisfied by prefix /api/
I1026 17:45:25.464825       1 handler.go:143] kube-apiserver: GET "/api/v1/nodes" satisfied by gorestful with webservice /api/v1
I1026 17:45:25.465029       1 get.go:251] Starting watch for /api/v1/nodes, rv=310043 labels= fields= timeout=6m25s
I1026 17:45:25.802013       1 handler.go:153] kube-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:25.802067       1 pathrecorder.go:240] kube-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:26.386753       1 handler.go:153] apiextensions-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:26.386801       1 pathrecorder.go:240] apiextensions-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:26.802423       1 handler.go:153] kube-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:26.802467       1 pathrecorder.go:240] kube-apiserver: "/openapi/v2" satisfied by exact match
I1026 17:45:26.922519       1 handler.go:153] kube-aggregator: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mip-bd-vm218.mip.storage.hpecorp.net" satisfied by nonGoRestful
I1026 17:45:26.922562       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mip-bd-vm218.mip.storage.hpecorp.net" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:26.922575       1 handler.go:143] kube-apiserver: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mip-bd-vm218.mip.storage.hpecorp.net" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:26.924824       1 httplog.go:90] verb="PUT" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/mip-bd-vm218.mip.storage.hpecorp.net?timeout=10s" latency=2.660967ms resp=200 UserAgent="kubelet/v1.18.6 (linux/amd64) kubernetes/dff82dc" srcIP="16.143.20.151:58408":
I1026 17:45:27.215908       1 handler.go:153] kube-aggregator: GET "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:27.215952       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by prefix /api/
I1026 17:45:27.215970       1 handler.go:143] kube-apiserver: GET "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by gorestful with webservice /api/v1
I1026 17:45:27.218343       1 httplog.go:90] verb="GET" URI="/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s" latency=3.080844ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:27.220511       1 handler.go:153] kube-aggregator: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:27.220555       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:27.220575       1 handler.go:143] kube-apiserver: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:27.222268       1 httplog.go:90] verb="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=2.280099ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:27.224249       1 handler.go:153] kube-aggregator: PUT "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:27.224277       1 pathrecorder.go:247] kube-aggregator: "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by prefix /api/
I1026 17:45:27.224290       1 handler.go:143] kube-apiserver: PUT "/api/v1/namespaces/kube-system/endpoints/kube-scheduler" satisfied by gorestful with webservice /api/v1
I1026 17:45:27.227304       1 httplog.go:90] verb="PUT" URI="/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s" latency=3.456775ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:27.229136       1 handler.go:153] kube-aggregator: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:27.229163       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:27.229177       1 handler.go:143] kube-apiserver: GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:27.230357       1 httplog.go:90] verb="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=1.591831ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:27.231878       1 handler.go:153] kube-aggregator: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by nonGoRestful
I1026 17:45:27.231904       1 pathrecorder.go:247] kube-aggregator: "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by prefix /apis/coordination.k8s.io/v1/
I1026 17:45:27.231918       1 handler.go:143] kube-apiserver: PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler" satisfied by gorestful with webservice /apis/coordination.k8s.io/v1
I1026 17:45:27.233651       1 httplog.go:90] verb="PUT" URI="/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=10s" latency=2.137225ms resp=200 UserAgent="kube-scheduler/v1.18.6 (linux/amd64) kubernetes/dff82dc/leader-election" srcIP="16.143.20.151:58386":
I1026 17:45:27.387277       1 handler.go:153] apiextensions-apiserver: GET "/openapi/v2" satisfied by nonGoRestful
I1026 17:45:27.387328       1 pathrecorder.go:240] apiextensions-apiserver: "/openapi/v2" satisfied by exact match

I don’t see any obvious errors or warnings, so I guess that points to the controller manager. I’ll see what I can do about getting those logs.
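For anyone following along, one way to pull those logs (assuming the usual kubeadm static pod labels):

> kubectl logs -n kube-system -l component=kube-controller-manager --tail=500 | grep -iE 'bootstrapsigner|tokencleaner|cluster-info|signing'
# If the controller never started at all (e.g. it was dropped from --controllers),
# there should be no Started "bootstrapsigner" line in the startup output either.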