kubernetes: kubeadm blocks waiting for 'control plane'

Hi @kubernetes/sig-cluster-lifecycle

I tried to follow the docs for kubeadm on CentOS 7.1.

It seems that kubeadm init blocks waiting for 'the control plane to become ready' even though all containers are running.

# kubeadm init --token foobar.1234
<util/tokens> validating provided token
<master/tokens> accepted provided token
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

Here are the running containers on the same master machine:

$ sudo docker ps
CONTAINER ID        IMAGE                                                           COMMAND                  CREATED             STATUS              PORTS               NAMES
30aff4f98753        gcr.io/google_containers/kube-apiserver-amd64:v1.4.0            "/usr/local/bin/kube-"   3 minutes ago       Up 3 minutes                            k8s_kube-apiserver.c44dda3f_kube-apiserver-k8ss-head_kube-system_6b83c87a9bf5c380c6f948f428b23dd1_408af885
8fd1842776ab        gcr.io/google_containers/kube-controller-manager-amd64:v1.4.0   "/usr/local/bin/kube-"   3 minutes ago       Up 3 minutes                            k8s_kube-controller-manager.a2978680_kube-controller-manager-k8ss-head_kube-system_5f805ed49f6fd9f0640be470e3dea2a2_7ac41d83
32b7bfb55dc0        gcr.io/google_containers/kube-scheduler-amd64:v1.4.0            "/usr/local/bin/kube-"   3 minutes ago       Up 3 minutes                            k8s_kube-scheduler.1b5cde04_kube-scheduler-k8ss-head_kube-system_586d16be4ecaac95b0162c5d11921019_0ca14012
8a1797fdb1df        gcr.io/google_containers/etcd-amd64:2.2.5                       "etcd --listen-client"   8 minutes ago       Up 8 minutes                            k8s_etcd.4ffa9846_etcd-k8ss-head_kube-system_42857e4bd57d261fc438bcb2a87572b9_f1b219d3
292bcafb3316        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD.d8dbe16c_kube-controller-manager-k8ss-head_kube-system_5f805ed49f6fd9f0640be470e3dea2a2_fe9592ab
ab929dd920a2        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD.d8dbe16c_kube-apiserver-k8ss-head_kube-system_6b83c87a9bf5c380c6f948f428b23dd1_c93e3a3b
71c28763aeab        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD.d8dbe16c_kube-scheduler-k8ss-head_kube-system_586d16be4ecaac95b0162c5d11921019_eb12a865
615cb42e0108        gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD.d8dbe16c_etcd-k8ss-head_kube-system_42857e4bd57d261fc438bcb2a87572b9_891fc5db

I tried to join a node but I get a connection refused error, even though there is no firewall (a quick reachability check follows the output below):

# kubeadm join --token foobar.1234 <master_ip>
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://185.19.30.178:9898/cluster-info/v1/?token-id=foobar"
error: <node/discovery> failed to request cluster info [Get http://MASTER_IP:9898/cluster-info/v1/?token-id=foobar: dial tcp MASTER_IP:9898: getsockopt: connection refused]
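A quick way to verify whether the discovery service is actually listening (a sketch, not from the original report; it assumes ss and curl are available, and takes the cluster-info path from the log line above):

# on the master: is anything bound to the discovery port?
ss -tlnp | grep 9898
# from the node: is the endpoint reachable at all?
curl -v http://<master_ip>:9898/cluster-info/v1/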

And now I am actually wondering whether init blocks waiting for nodes to join. According to the docs it does not block, but the kubeadm logs seem to indicate that it does.


Most upvoted comments

There is an uninstall script referenced at http://deploy-preview-1321.kubernetes-io-vnext-staging.netlify.com/docs/getting-started-guides/kubeadm/. After running it, my init ran correctly again.

systemctl stop kubelet
docker rm -f $(docker ps -q)                 # remove all running containers
mount | grep "/var/lib/kubelet/*" | awk '{print $3}' | xargs umount 1>/dev/null 2>/dev/null   # unmount kubelet volume mounts
rm -rf /var/lib/kubelet /etc/kubernetes /var/lib/etcd /etc/cni   # wipe state from the previous init
ip link set cbr0 down; ip link del cbr0      # remove leftover bridge interfaces
ip link set cni0 down; ip link del cni0
systemctl start kubelet

This won't work anymore on Ubuntu. After following the updated manual on kubernetes.io, the system hangs at the same point:

<master/tokens> generated token: "token"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready

But the apiserver log changed:

I0928 07:01:21.795743       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad24347: (2.145843ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:21.995791       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad4cf01: (2.148172ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:22.195801       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad74c69: (2.210056ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:22.395995       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad24347: (2.291263ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:22.595948       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad4cf01: (2.29452ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:22.795835       1 handlers.go:162] PATCH /api/v1/namespaces/default/events/ip-10-10-10-10.14786a8fdad74c69: (2.200798ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:22.995283       1 handlers.go:162] POST /api/v1/namespaces/kube-system/events: (1.638475ms) 201 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:23.195307       1 handlers.go:162] POST /api/v1/namespaces/kube-system/events: (1.709752ms) 201 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:23.395272       1 handlers.go:162] POST /api/v1/namespaces/kube-system/events: (1.650112ms) 201 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:23.655487       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (807.398µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:23.658048       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.948487ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:23.710908       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (789.41µs) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]
I0928 07:01:23.728159       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (3.595941ms) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]
I0928 07:01:24.814084       1 handlers.go:162] GET /api/v1/nodes/ip-10-10-10-10: (1.120047ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:24.822929       1 handlers.go:162] PUT /api/v1/nodes/ip-10-10-10-10/status: (5.303192ms) 200 [[kubelet/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 10.10.10.10:57954]
I0928 07:01:25.660182       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (807.112µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:25.662714       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.910111ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:25.715300       1 handlers.go:162] GET /api/v1/nodes?resourceVersion=0: (437.108µs) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7/node-controller] 127.0.0.1:60550]
I0928 07:01:25.731729       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (803.973µs) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]
I0928 07:01:25.734265       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.914164ms) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]
I0928 07:01:27.664747       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (813.713µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:27.671610       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.416208ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60274]
I0928 07:01:27.736170       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (796.709µs) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]
I0928 07:01:27.738650       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-controller-manager: (1.889742ms) 200 [[kube-controller-manager/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:60550]

@harsha544

Be careful with the link and script that @kenzhaoyihui provided; that script tries to pass off his own images as Google's.

You'd better not run it.

In fact, the solution has already been provided in this ticket; I fixed my issue with it. It was posted by @benmathews on Sep 28, 2016. If you missed that comment, you should give it a try.

@harsha544 https://github.com/kenzhaoyihui/kubeadm-images-gcr.io/blob/master/pull_kubernetes_images.sh

The shell script pulls all the Docker images that are needed. Could you pull all the images and then run kubeadm init?
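For reference, a minimal pre-pull loop (a sketch; the image names and v1.4-era tags are copied from the docker ps output earlier in this thread, so adjust them to your version):

# pre-pull the control-plane images so kubeadm does not stall on ImagePullBackOff
for image in \
    gcr.io/google_containers/kube-apiserver-amd64:v1.4.0 \
    gcr.io/google_containers/kube-controller-manager-amd64:v1.4.0 \
    gcr.io/google_containers/kube-scheduler-amd64:v1.4.0 \
    gcr.io/google_containers/etcd-amd64:2.2.5 \
    gcr.io/google_containers/pause-amd64:3.0; do
  docker pull "$image"
done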

I also hit this issue, and I have disabled SELinux.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready

Hi folks, the problem was solved for me just by stopping AppArmor:

# /etc/init.d/apparmor stop

After that, you should reset kubeadm:

# kubeadm reset

And finally, rerun the initialization of your master:

# kubeadm init

@kamigerami you can generate a token yourself and pass it via the --token flag to kubeadm init --token="${k8s_token}", then use the same token for kubeadm join --token="${k8s_token}".
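For example (a sketch; it assumes the id.secret token format of six plus sixteen lowercase alphanumeric characters that later kubeadm releases enforce, and <master_ip> is a placeholder):

# generate a random token in the expected format
k8s_token="$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6).$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"
# use it on the master...
kubeadm init --token="${k8s_token}"
# ...and the same one on each joining node
kubeadm join --token="${k8s_token}" <master_ip>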

OK, so the discovery port is using a hostPort on 9898.

Logs for that pod return this:

$ kubectl logs kube-discovery-1971138125-yry3x --namespace=kube-system
Error from server: Get https://kube-head:10250/containerLogs/kube-system/kube-discovery-1971138125-yry3x/kube-discovery: dial tcp: lookup kube-head on 8.8.8.8:53: no such host
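That lookup failure means the node name kube-head is not resolvable through the configured DNS server (8.8.8.8), so the apiserver cannot reach the kubelet to fetch the logs. One possible workaround (a sketch; <node_ip> is a hypothetical placeholder for the actual address of kube-head):

# on the master, make the node name resolvable locally
echo "<node_ip> kube-head" >> /etc/hosts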

I am following the docs.

The DNS pod is not starting:

Events:
  FirstSeen LastSeen    Count   From            SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----            -------------   --------    ------      -------
  27m       27m     1   {default-scheduler }            Normal      Scheduled   Successfully assigned kube-dns-2247936740-igptf to kube-head
  27m       3s      662 {kubelet kube-head}         Warning     FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-2247936740-igptf_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-2247936740-igptf_kube-system(00cf8b74-84c2-11e6-9dfa-061eca000139)\" using network plugins \"cni\": cni config unintialized; Skipping pod"
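The "cni config unintialized" message means no pod network add-on has been applied yet; kube-dns stays stuck until one is installed. With kubeadm of this era that is a single kubectl apply (a sketch; the manifest path is a placeholder for whichever network add-on you choose, taken from its own docs):

kubectl apply -f <pod-network-addon.yaml>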

Some problems on CentOS 7. It also just blocks at this step: [apiclient] Created API client, waiting for the control plane to become ready.

docker ps -a
(nothing is listed after this command)

kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

getenforce
Permissive (SELinux is disabled)

/var/log/messages

Dec 29 07:10:29 master kubelet: E1229 07:10:29.744234 8891 pod_workers.go:184] Error syncing pod b4b25cab578f82fd99198c566860faf7, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
Dec 29 07:10:30 master kubelet: E1229 07:10:30.680786 8891 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to list *api.Pod: Get https://192.168.121.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:30 master kubelet: E1229 07:10:30.680797 8891 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *api.Node: Get https://192.168.121.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:30 master kubelet: E1229 07:10:30.680835 8891 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *api.Service: Get https://192.168.121.241:6443/api/v1/services?resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master kubelet: I1229 07:10:31.144650 8891 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Dec 29 07:10:31 master kubelet: I1229 07:10:31.186977 8891 kubelet_node_status.go:74] Attempting to register node master
Dec 29 07:10:31 master kubelet: E1229 07:10:31.187254 8891 kubelet_node_status.go:98] Unable to register node "master" with API server: Post https://192.168.121.241:6443/api/v1/nodes: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master kubelet: I1229 07:10:31.397597 8891 kubelet_node_status.go:204] Setting node annotation to enable volume controller attach/detach
Dec 29 07:10:31 master kubelet: E1229 07:10:31.437996 8891 kubelet.go:1508] Failed creating a mirror pod for "kube-apiserver-master_kube-system(73c001656da6c2ae76abb7d4879d2e36)": Post https://192.168.121.241:6443/api/v1/namespaces/kube-system/pods: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master kubelet: E1229 07:10:31.681357 8891 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to list *api.Pod: Get https://192.168.121.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master kubelet: E1229 07:10:31.681376 8891 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *api.Node: Get https://192.168.121.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master kubelet: E1229 07:10:31.681424 8891 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *api.Service: Get https://192.168.121.241:6443/api/v1/services?resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:31 master docker-current: time="2016-12-29T07:10:31.738920860Z" level=error msg="Handler for GET /v1.22/images/gcr.io/google_containers/pause-amd64:3.0/json returned error: No such image: gcr.io/google_containers/pause-amd64:3.0"
Dec 29 07:10:31 master kubelet: E1229 07:10:31.739387 8891 docker_manager.go:2188] Failed to create pod infra container: ImagePullBackOff; Skipping pod "kube-apiserver-master_kube-system(73c001656da6c2ae76abb7d4879d2e36)": Back-off pulling image "gcr.io/google_containers/pause-amd64:3.0"
Dec 29 07:10:31 master kubelet: E1229 07:10:31.739419 8891 pod_workers.go:184] Error syncing pod 73c001656da6c2ae76abb7d4879d2e36, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/pause-amd64:3.0\""
Dec 29 07:10:32 master kubelet: E1229 07:10:32.301850 8891 cni.go:163] error updating cni config: No networks found in /etc/cni/net.d
Dec 29 07:10:32 master kubelet: E1229 07:10:32.681920 8891 reflector.go:188] pkg/kubelet/kubelet.go:378: Failed to list *api.Service: Get https://192.168.121.241:6443/api/v1/services?resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:32 master kubelet: E1229 07:10:32.681920 8891 reflector.go:188] pkg/kubelet/config/apiserver.go:44: Failed to list *api.Pod: Get https://192.168.121.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:32 master kubelet: E1229 07:10:32.681957 8891 reflector.go:188] pkg/kubelet/kubelet.go:386: Failed to list *api.Node: Get https://192.168.121.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster&resourceVersion=0: dial tcp 192.168.121.241:6443: getsockopt: connection refused
Dec 29 07:10:32 master kubelet: E1229 07:10:32.954775 8891 eviction_manager.go:202] eviction manager: unexpected err: failed GetNode: node 'master' not found

Could someone give me some help?

@miry Thanks. The problem was confirmed to be poor access to gcr.io/google_containers/kube*. It was solved after I loaded the necessary images, downloaded independently from other places.
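In case it helps others, the save/load step can look like this (a sketch; docker save/load are standard commands, and the image names again come from the docker ps output above):

# on a machine with working access to gcr.io:
docker save gcr.io/google_containers/kube-apiserver-amd64:v1.4.0 \
            gcr.io/google_containers/pause-amd64:3.0 > k8s-images.tar
# copy the tarball to the restricted master, then:
docker load < k8s-images.tar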

@miry: I tried your approach. It seems to work the same way on the master node, but this is the error message on the minion node (to be joined):

kubeadm join --token=p12345.12345p12345p12345 10.144.2.200
<util/tokens> validating provided token
<node/discovery> created cluster info discovery client, requesting info from "http://10.144.2.200:9898/cluster-info/v1/?token-id=p12345"
error: <node/discovery> failed to parse response as JWS object [square/go-jose: compact JWS format must have three parts]

Any clue how to solve it? Thanks!

My OS is CentOS 7. kubeadm can't start the kube-discovery container and blocks at: "<master/apiclient> created API client, waiting for the control plane to become ready". I have already run setenforce 0. Can anyone help me?

Actually, I just made it work. There is a small difference between CentOS and Ubuntu: on CentOS you have to start the kubelet manually with systemctl enable kubelet && systemctl start kubelet. After that, everything worked.
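Spelled out, with a status check added as a sanity step (the status command is my addition, not from the comment above):

systemctl enable kubelet && systemctl start kubelet
systemctl status kubelet   # verify the kubelet is active before running kubeadm init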

I am still having this issue even after disabling SELinux. I am trying this on EC2 on CentOS 7.2 with ami: ami-6d1c2007. Ubuntu 16.04 worked perfectly.

Are the new pkgs available on: http://yum.kubernetes.io/repos/kubernetes-el7-x86_64 or should I be using a different repo?

It does hang at the same step, waiting for the 'control plane', but I don't even get any Docker containers running (docker ps shows 0 containers). Are there any logs I should look at? A quick search didn't turn up anything.

Any help is appreciated 😃
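Since the control-plane containers are launched by the kubelet, its journal is the first place to look when docker ps shows nothing (a sketch; assumes a systemd-based host):

journalctl -u kubelet -f   # follow the kubelet log while kubeadm init hangs
docker ps -a               # also list exited containers, not just running ones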

Same issue for me on an AWS installation, but I can't see any Docker containers running. Some information: Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-38-generic x86_64), using http_proxy and https_proxy.

export https_proxy=http://<proxy>:<port>
export http_proxy=http://<proxy>:<port>
kubeadm init --cloud-provider aws

I looked at the logs of the apiserver. It returns errors:

I0927 11:44:47.425374       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (793.43µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:47.427858       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.682203ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:47.606685       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57328: remote error: bad certificate
I0927 11:44:47.722809       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57330: remote error: bad certificate
I0927 11:44:47.728099       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57332: remote error: bad certificate
I0927 11:44:48.251368       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57334: remote error: bad certificate
I0927 11:44:48.256871       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57336: remote error: bad certificate
I0927 11:44:48.262479       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57338: remote error: bad certificate
I0927 11:44:48.267460       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57340: remote error: bad certificate
I0927 11:44:48.608406       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57342: remote error: bad certificate
I0927 11:44:48.724428       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57344: remote error: bad certificate
I0927 11:44:48.729680       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57346: remote error: bad certificate
I0927 11:44:48.777612       1 handlers.go:162] GET /healthz: (39.187µs) 200 [[Go-http-client/1.1] 127.0.0.1:49808]
I0927 11:44:49.429761       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (762.498µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:49.432267       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (2.070905ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:49.614084       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57354: remote error: bad certificate
I0927 11:44:49.727405       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57356: remote error: bad certificate
I0927 11:44:49.732888       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57358: remote error: bad certificate
I0927 11:44:50.080279       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57360: remote error: bad certificate
I0927 11:44:50.085570       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57362: remote error: bad certificate
I0927 11:44:50.617384       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57364: remote error: bad certificate
I0927 11:44:50.730144       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57366: remote error: bad certificate
I0927 11:44:50.735525       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57368: remote error: bad certificate
I0927 11:44:51.433824       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (769.066µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:51.436359       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.713977ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:51.620964       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57370: remote error: bad certificate
I0927 11:44:51.731724       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57372: remote error: bad certificate
I0927 11:44:51.761983       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57374: remote error: bad certificate
I0927 11:44:52.622487       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57376: remote error: bad certificate
I0927 11:44:52.732927       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57378: remote error: bad certificate
I0927 11:44:52.762908       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57380: remote error: bad certificate
I0927 11:44:53.438270       1 handlers.go:162] GET /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (805.346µs) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:53.440909       1 handlers.go:162] PUT /api/v1/namespaces/kube-system/endpoints/kube-scheduler: (1.82773ms) 200 [[kube-scheduler/v1.4.0 (linux/amd64) kubernetes/a16c0a7] 127.0.0.1:46848]
I0927 11:44:53.627293       1 logs.go:41] http: TLS handshake error from 10.10.10.10:57382: remote error: bad certificate