kubeadm: HA setup with Nginx TCP forwarding does not succeed

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:14:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version): Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: AWS T2 Medium

  • OS (e.g. from /etc/os-release): Ubuntu 16.04

  • Kernel (e.g. uname -a): Linux ip-172-31-26-63 4.4.0-1065-aws #75-Ubuntu SMP Fri Aug 10 11:14:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Others: Nginx version: nginx/1.10.3 (Ubuntu)

What happened?

I was trying to set up a K8s HA (i.e. multi-master) deployment following this guide, using Nginx as a TCP forwarder. The "init" phase always fails with the error:

Unfortunately, an error has occurred: timed out waiting for the condition

What you expected to happen?

Calling "kubeadm init --config kubeadm-config.yaml" should succeed.

How to reproduce it (as minimally and precisely as possible)?

In my setup, I have one EC2 machine running Nginx (acting as the load balancer) and another EC2 machine as the K8s master. Both machines are located in the same subnet, and all traffic is allowed by the security group rules.

Load Balancer Machine:

  • Private DNS: ip-172-31-29-249.eu-central-1.compute.internal
  • Private IPs: 172.31.29.249

Master Machine:

  • Private DNS: ip-172-31-26-63.eu-central-1.compute.internal
  • Private IPs: 172.31.26.63

Nginx nginx.conf

events {
  worker_connections 4096; ## Default: 1024
}

stream {
  server {
    listen 6443;
    proxy_pass stream_backend;
  }

  upstream stream_backend {
    server ip-172-31-26-63.eu-central-1.compute.internal:6443;
  }
}
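
To sanity-check that the forwarder itself passes TCP through, something along these lines can be run from the master (a minimal sketch; it assumes nc and openssl are installed, and the certificate check only answers while kube-apiserver is actually listening):

# Confirm the raw TCP path through the load balancer.
nc -vz ip-172-31-29-249.eu-central-1.compute.internal 6443

# Inspect the certificate presented through the forwarder; its SANs should
# include the load balancer's DNS name.
openssl s_client -connect ip-172-31-29-249.eu-central-1.compute.internal:6443 </dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'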

kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.2
apiServerCertSANs:
- "ip-172-31-29-249.eu-central-1.compute.internal"
api:
    controlPlaneEndpoint: "ip-172-31-29-249.eu-central-1.compute.internal:6443"
etcd:
  local:
    extraArgs:
      listen-client-urls: "https://127.0.0.1:2379,https://172.31.26.63:2379"
      advertise-client-urls: "https://172.31.26.63:2379"
      listen-peer-urls: "https://172.31.26.63:2380"
      initial-advertise-peer-urls: "https://172.31.26.63:2380"
      initial-cluster: "ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380"
    serverCertSANs:
      - ip-172-31-26-63.eu-central-1.compute.internal
      - 172.31.26.63
    peerCertSANs:
      - ip-172-31-26-63.eu-central-1.compute.internal
      - 172.31.26.63
networking:
    # This CIDR is a Calico default. Substitute or remove for your CNI provider.
    podSubnet: "192.168.0.0/16"
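
The config above is what "kubeadm init --config kubeadm-config.yaml" was given on the master. A minimal sketch of the invocation, plus one way to pull the logs of the crash-looping control-plane containers (the docker commands are an assumption based on the Docker runtime visible in the kubelet journal below):

sudo kubeadm init --config kubeadm-config.yaml

# Inspect the crash-looping static pod containers directly via Docker.
sudo docker ps -a | grep -E 'etcd|kube-apiserver'
sudo docker logs <container-id>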

Anything else we need to know?
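
The kubelet journal below was pulled on the master roughly as follows (exact journalctl flags are an assumption):

journalctl -u kubelet --no-pager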

Kubelet journal logs from the master machine:

Aug 29 06:44:46 ip-172-31-26-63 kubelet[26234]: I0829 06:44:46.760986   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: I0829 06:44:47.063559   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: I0829 06:44:47.063673   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: I0829 06:44:47.063762   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: E0829 06:44:47.063790   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: E0829 06:44:47.263258   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: E0829 06:44:47.264094   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:44:47 ip-172-31-26-63 kubelet[26234]: W0829 06:44:47.264193   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: EOF
Aug 29 06:44:48 ip-172-31-26-63 kubelet[26234]: W0829 06:44:48.949638   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:44:48 ip-172-31-26-63 kubelet[26234]: E0829 06:44:48.949759   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:44:49 ip-172-31-26-63 kubelet[26234]: I0829 06:44:49.761019   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:51 ip-172-31-26-63 kubelet[26234]: I0829 06:44:51.983993   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:51 ip-172-31-26-63 kubelet[26234]: I0829 06:44:51.986548   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:44:51 ip-172-31-26-63 kubelet[26234]: E0829 06:44:51.988139   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:57950->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: E0829 06:44:53.076919   26234 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: read tcp 172.31.26.63:57986->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: E0829 06:44:53.275226   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: E0829 06:44:53.275616   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: I0829 06:44:53.760944   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: E0829 06:44:53.917595   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: W0829 06:44:53.950649   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:44:53 ip-172-31-26-63 kubelet[26234]: E0829 06:44:53.950797   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:44:54 ip-172-31-26-63 kubelet[26234]: I0829 06:44:54.064533   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:44:54 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:44:54 ip-172-31-26-63 kubelet[26234]: I0829 06:44:54.065631   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:44:54 ip-172-31-26-63 kubelet[26234]: I0829 06:44:54.065773   26234 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:44:54 ip-172-31-26-63 kubelet[26234]: E0829 06:44:54.065840   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:44:55 ip-172-31-26-63 kubelet[26234]: E0829 06:44:55.357668   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:58080->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:44:56 ip-172-31-26-63 kubelet[26234]: W0829 06:44:56.284729   26234 status_manager.go:482] Failed to get status for pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)": an error on the server ("") has prevented the request from succeeding (get pods etcd-ip-172-31-26-63)
Aug 29 06:44:57 ip-172-31-26-63 kubelet[26234]: E0829 06:44:57.304466   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
Aug 29 06:44:57 ip-172-31-26-63 kubelet[26234]: I0829 06:44:57.761061   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:58 ip-172-31-26-63 kubelet[26234]: W0829 06:44:58.951829   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:44:58 ip-172-31-26-63 kubelet[26234]: E0829 06:44:58.951960   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:44:58 ip-172-31-26-63 kubelet[26234]: I0829 06:44:58.988396   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:44:58 ip-172-31-26-63 kubelet[26234]: I0829 06:44:58.990867   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:44:58 ip-172-31-26-63 kubelet[26234]: E0829 06:44:58.992662   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:58186->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:00 ip-172-31-26-63 kubelet[26234]: I0829 06:45:00.761037   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:01 ip-172-31-26-63 kubelet[26234]: I0829 06:45:01.063781   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:01 ip-172-31-26-63 kubelet[26234]: I0829 06:45:01.063888   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:01 ip-172-31-26-63 kubelet[26234]: I0829 06:45:01.063978   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:01 ip-172-31-26-63 kubelet[26234]: E0829 06:45:01.064007   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:03 ip-172-31-26-63 kubelet[26234]: E0829 06:45:03.318759   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
Aug 29 06:45:03 ip-172-31-26-63 kubelet[26234]: E0829 06:45:03.318887   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:45:03 ip-172-31-26-63 kubelet[26234]: E0829 06:45:03.917873   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:03 ip-172-31-26-63 kubelet[26234]: W0829 06:45:03.952936   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:03 ip-172-31-26-63 kubelet[26234]: E0829 06:45:03.953143   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: W0829 06:45:05.358691   26234 status_manager.go:482] Failed to get status for pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)": an error on the server ("") has prevented the request from succeeding (get pods etcd-ip-172-31-26-63)
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: E0829 06:45:05.360031   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:58408->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: I0829 06:45:05.761004   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: I0829 06:45:05.992856   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: I0829 06:45:05.995205   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:05 ip-172-31-26-63 kubelet[26234]: E0829 06:45:05.996958   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:58416->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:06 ip-172-31-26-63 kubelet[26234]: I0829 06:45:06.067997   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:45:06 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:06 ip-172-31-26-63 kubelet[26234]: I0829 06:45:06.068106   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:06 ip-172-31-26-63 kubelet[26234]: I0829 06:45:06.068225   26234 kuberuntime_manager.go:767] Back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:45:06 ip-172-31-26-63 kubelet[26234]: E0829 06:45:06.068254   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:07 ip-172-31-26-63 kubelet[26234]: W0829 06:45:07.363831   26234 status_manager.go:482] Failed to get status for pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/etcd-ip-172-31-26-63: EOF
Aug 29 06:45:07 ip-172-31-26-63 kubelet[26234]: E0829 06:45:07.363999   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:07 ip-172-31-26-63 kubelet[26234]: E0829 06:45:07.364227   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:08 ip-172-31-26-63 kubelet[26234]: E0829 06:45:08.366183   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:08 ip-172-31-26-63 kubelet[26234]: E0829 06:45:08.366252   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:08 ip-172-31-26-63 kubelet[26234]: W0829 06:45:08.954150   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:08 ip-172-31-26-63 kubelet[26234]: E0829 06:45:08.954254   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:09 ip-172-31-26-63 kubelet[26234]: E0829 06:45:09.371172   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:11 ip-172-31-26-63 kubelet[26234]: E0829 06:45:11.376574   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:11 ip-172-31-26-63 kubelet[26234]: W0829 06:45:11.377328   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: EOF
Aug 29 06:45:12 ip-172-31-26-63 kubelet[26234]: E0829 06:45:12.380272   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:12 ip-172-31-26-63 kubelet[26234]: E0829 06:45:12.380345   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:12 ip-172-31-26-63 kubelet[26234]: I0829 06:45:12.761020   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:12 ip-172-31-26-63 kubelet[26234]: I0829 06:45:12.997799   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: I0829 06:45:13.000223   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.001907   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:58648->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: I0829 06:45:13.063689   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: I0829 06:45:13.063781   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: I0829 06:45:13.063869   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.063899   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.382402   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.382898   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.918177   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: W0829 06:45:13.955279   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:13 ip-172-31-26-63 kubelet[26234]: E0829 06:45:13.955402   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:15 ip-172-31-26-63 kubelet[26234]: E0829 06:45:15.362403   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:58714->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:45:16 ip-172-31-26-63 kubelet[26234]: E0829 06:45:16.388998   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: E0829 06:45:18.393068   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: W0829 06:45:18.393351   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: EOF
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: E0829 06:45:18.393772   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: E0829 06:45:18.393992   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: W0829 06:45:18.956408   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:18 ip-172-31-26-63 kubelet[26234]: E0829 06:45:18.956541   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:20 ip-172-31-26-63 kubelet[26234]: I0829 06:45:20.002129   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:20 ip-172-31-26-63 kubelet[26234]: I0829 06:45:20.005505   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:20 ip-172-31-26-63 kubelet[26234]: E0829 06:45:20.007449   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:58876->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:21 ip-172-31-26-63 kubelet[26234]: I0829 06:45:21.761061   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:22 ip-172-31-26-63 kubelet[26234]: I0829 06:45:22.098895   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:45:22 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:22 ip-172-31-26-63 kubelet[26234]: I0829 06:45:22.098997   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:22 ip-172-31-26-63 kubelet[26234]: I0829 06:45:22.469428   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:23 ip-172-31-26-63 kubelet[26234]: I0829 06:45:23.760970   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:23 ip-172-31-26-63 kubelet[26234]: E0829 06:45:23.918445   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:23 ip-172-31-26-63 kubelet[26234]: W0829 06:45:23.957509   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:23 ip-172-31-26-63 kubelet[26234]: E0829 06:45:23.957619   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.063543   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.063651   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.480444   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.480465   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.783303   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.783407   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: I0829 06:45:24.783493   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:24 ip-172-31-26-63 kubelet[26234]: E0829 06:45:24.783520   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:26 ip-172-31-26-63 kubelet[26234]: I0829 06:45:26.413815   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:26 ip-172-31-26-63 kubelet[26234]: I0829 06:45:26.716734   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:26 ip-172-31-26-63 kubelet[26234]: I0829 06:45:26.716848   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:26 ip-172-31-26-63 kubelet[26234]: I0829 06:45:26.716935   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:26 ip-172-31-26-63 kubelet[26234]: E0829 06:45:26.716963   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:27 ip-172-31-26-63 kubelet[26234]: I0829 06:45:27.007667   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:27 ip-172-31-26-63 kubelet[26234]: I0829 06:45:27.010282   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:28 ip-172-31-26-63 kubelet[26234]: W0829 06:45:28.958615   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:28 ip-172-31-26-63 kubelet[26234]: E0829 06:45:28.958726   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:32 ip-172-31-26-63 kubelet[26234]: E0829 06:45:32.401392   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: net/http: TLS handshake timeout
Aug 29 06:45:32 ip-172-31-26-63 kubelet[26234]: E0829 06:45:32.401360   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Aug 29 06:45:32 ip-172-31-26-63 kubelet[26234]: E0829 06:45:32.417108   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: net/http: TLS handshake timeout
Aug 29 06:45:32 ip-172-31-26-63 kubelet[26234]: W0829 06:45:32.417105   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: net/http: TLS handshake timeout
Aug 29 06:45:33 ip-172-31-26-63 kubelet[26234]: E0829 06:45:33.918718   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:33 ip-172-31-26-63 kubelet[26234]: W0829 06:45:33.959635   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:33 ip-172-31-26-63 kubelet[26234]: E0829 06:45:33.959739   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:35 ip-172-31-26-63 kubelet[26234]: E0829 06:45:35.078210   26234 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: net/http: TLS handshake timeout
Aug 29 06:45:35 ip-172-31-26-63 kubelet[26234]: E0829 06:45:35.364175   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: net/http: TLS handshake timeout' (may retry after sleeping)
Aug 29 06:45:37 ip-172-31-26-63 kubelet[26234]: E0829 06:45:37.011741   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: net/http: TLS handshake timeout
Aug 29 06:45:38 ip-172-31-26-63 kubelet[26234]: W0829 06:45:38.960736   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:38 ip-172-31-26-63 kubelet[26234]: E0829 06:45:38.960856   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:40 ip-172-31-26-63 kubelet[26234]: I0829 06:45:40.760982   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:41 ip-172-31-26-63 kubelet[26234]: I0829 06:45:41.064733   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:41 ip-172-31-26-63 kubelet[26234]: I0829 06:45:41.064862   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:41 ip-172-31-26-63 kubelet[26234]: I0829 06:45:41.064964   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:41 ip-172-31-26-63 kubelet[26234]: E0829 06:45:41.064993   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:42 ip-172-31-26-63 kubelet[26234]: W0829 06:45:42.418624   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: net/http: TLS handshake timeout
Aug 29 06:45:42 ip-172-31-26-63 kubelet[26234]: W0829 06:45:42.918462   26234 status_manager.go:482] Failed to get status for pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/etcd-ip-172-31-26-63: EOF
Aug 29 06:45:42 ip-172-31-26-63 kubelet[26234]: E0829 06:45:42.918961   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:42 ip-172-31-26-63 kubelet[26234]: E0829 06:45:42.919276   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:42 ip-172-31-26-63 kubelet[26234]: E0829 06:45:42.919565   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: I0829 06:45:43.542478   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: I0829 06:45:43.542671   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: I0829 06:45:43.844555   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: I0829 06:45:43.844636   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: I0829 06:45:43.844751   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: E0829 06:45:43.844779   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: E0829 06:45:43.918958   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: W0829 06:45:43.961785   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:43 ip-172-31-26-63 kubelet[26234]: E0829 06:45:43.961938   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:44 ip-172-31-26-63 kubelet[26234]: I0829 06:45:44.011965   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:44 ip-172-31-26-63 kubelet[26234]: I0829 06:45:44.014318   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:44 ip-172-31-26-63 kubelet[26234]: E0829 06:45:44.016018   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:59072->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:45 ip-172-31-26-63 kubelet[26234]: E0829 06:45:45.366421   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:59106->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: I0829 06:45:48.063770   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: I0829 06:45:48.366560   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: I0829 06:45:48.366669   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: I0829 06:45:48.366774   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: E0829 06:45:48.366803   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: W0829 06:45:48.962959   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:48 ip-172-31-26-63 kubelet[26234]: E0829 06:45:48.963125   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:49 ip-172-31-26-63 kubelet[26234]: E0829 06:45:49.932356   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:50 ip-172-31-26-63 kubelet[26234]: E0829 06:45:50.934859   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:51 ip-172-31-26-63 kubelet[26234]: I0829 06:45:51.016246   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:51 ip-172-31-26-63 kubelet[26234]: I0829 06:45:51.018443   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:51 ip-172-31-26-63 kubelet[26234]: E0829 06:45:51.020170   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:59298->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:51 ip-172-31-26-63 kubelet[26234]: E0829 06:45:51.936694   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:51 ip-172-31-26-63 kubelet[26234]: W0829 06:45:51.937573   26234 status_manager.go:482] Failed to get status for pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)": an error on the server ("") has prevented the request from succeeding (get pods etcd-ip-172-31-26-63)
Aug 29 06:45:52 ip-172-31-26-63 kubelet[26234]: E0829 06:45:52.938650   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:52 ip-172-31-26-63 kubelet[26234]: E0829 06:45:52.938739   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:52 ip-172-31-26-63 kubelet[26234]: E0829 06:45:52.940135   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: I0829 06:45:53.761015   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: I0829 06:45:53.761015   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: E0829 06:45:53.919247   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: E0829 06:45:53.941463   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: E0829 06:45:53.942652   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: W0829 06:45:53.964016   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:53 ip-172-31-26-63 kubelet[26234]: E0829 06:45:53.964111   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:45:54 ip-172-31-26-63 kubelet[26234]: I0829 06:45:54.063923   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:45:54 ip-172-31-26-63 kubelet[26234]: I0829 06:45:54.064011   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:54 ip-172-31-26-63 kubelet[26234]: I0829 06:45:54.064089   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:45:54 ip-172-31-26-63 kubelet[26234]: E0829 06:45:54.064115   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:45:55 ip-172-31-26-63 kubelet[26234]: E0829 06:45:55.368591   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:59434->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:45:56 ip-172-31-26-63 kubelet[26234]: E0829 06:45:56.946504   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:57 ip-172-31-26-63 kubelet[26234]: E0829 06:45:57.079338   26234 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: read tcp 172.31.26.63:59498->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:57 ip-172-31-26-63 kubelet[26234]: E0829 06:45:57.948976   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:45:58 ip-172-31-26-63 kubelet[26234]: I0829 06:45:58.020387   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:45:58 ip-172-31-26-63 kubelet[26234]: I0829 06:45:58.022642   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:45:58 ip-172-31-26-63 kubelet[26234]: E0829 06:45:58.024327   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:59532->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:45:58 ip-172-31-26-63 kubelet[26234]: W0829 06:45:58.964815   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:45:58 ip-172-31-26-63 kubelet[26234]: E0829 06:45:58.964939   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:00 ip-172-31-26-63 kubelet[26234]: I0829 06:46:00.761046   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:00 ip-172-31-26-63 kubelet[26234]: W0829 06:46:00.955024   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": an error on the server ("") has prevented the request from succeeding (get pods kube-apiserver-ip-172-31-26-63)
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: I0829 06:46:01.064329   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: I0829 06:46:01.064569   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: I0829 06:46:01.064681   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: E0829 06:46:01.064708   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:01 ip-172-31-26-63 kubelet[26234]: E0829 06:46:01.958131   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:03 ip-172-31-26-63 kubelet[26234]: E0829 06:46:03.919461   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:03 ip-172-31-26-63 kubelet[26234]: E0829 06:46:03.962080   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
Aug 29 06:46:03 ip-172-31-26-63 kubelet[26234]: E0829 06:46:03.962152   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:46:03 ip-172-31-26-63 kubelet[26234]: W0829 06:46:03.966351   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:03 ip-172-31-26-63 kubelet[26234]: E0829 06:46:03.966689   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:04 ip-172-31-26-63 kubelet[26234]: W0829 06:46:04.963630   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: EOF
Aug 29 06:46:05 ip-172-31-26-63 kubelet[26234]: I0829 06:46:05.024525   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:05 ip-172-31-26-63 kubelet[26234]: I0829 06:46:05.028379   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:05 ip-172-31-26-63 kubelet[26234]: E0829 06:46:05.029976   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:59766->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:05 ip-172-31-26-63 kubelet[26234]: E0829 06:46:05.370637   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:59768->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:07 ip-172-31-26-63 kubelet[26234]: I0829 06:46:07.761014   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:07 ip-172-31-26-63 kubelet[26234]: E0829 06:46:07.969434   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:07 ip-172-31-26-63 kubelet[26234]: E0829 06:46:07.970278   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:07 ip-172-31-26-63 kubelet[26234]: E0829 06:46:07.971006   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: I0829 06:46:08.063820   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: I0829 06:46:08.063943   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: I0829 06:46:08.064037   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: E0829 06:46:08.064063   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: W0829 06:46:08.967667   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:08 ip-172-31-26-63 kubelet[26234]: E0829 06:46:08.967915   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:12 ip-172-31-26-63 kubelet[26234]: I0829 06:46:12.030215   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:12 ip-172-31-26-63 kubelet[26234]: I0829 06:46:12.032456   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:12 ip-172-31-26-63 kubelet[26234]: E0829 06:46:12.034204   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:59976->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:12 ip-172-31-26-63 kubelet[26234]: I0829 06:46:12.761018   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:12 ip-172-31-26-63 kubelet[26234]: E0829 06:46:12.978855   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: I0829 06:46:13.063809   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: I0829 06:46:13.063911   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: I0829 06:46:13.064022   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: E0829 06:46:13.064057   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: E0829 06:46:13.920091   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: W0829 06:46:13.968812   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:13 ip-172-31-26-63 kubelet[26234]: E0829 06:46:13.969020   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:14 ip-172-31-26-63 kubelet[26234]: E0829 06:46:14.983741   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:14 ip-172-31-26-63 kubelet[26234]: E0829 06:46:14.984657   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:15 ip-172-31-26-63 kubelet[26234]: E0829 06:46:15.372849   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60072->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:15 ip-172-31-26-63 kubelet[26234]: E0829 06:46:15.986541   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:16 ip-172-31-26-63 kubelet[26234]: I0829 06:46:16.760984   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:17 ip-172-31-26-63 kubelet[26234]: E0829 06:46:17.990707   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:46:18 ip-172-31-26-63 kubelet[26234]: W0829 06:46:18.969890   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:18 ip-172-31-26-63 kubelet[26234]: E0829 06:46:18.969995   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:19 ip-172-31-26-63 kubelet[26234]: I0829 06:46:19.034413   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:19 ip-172-31-26-63 kubelet[26234]: I0829 06:46:19.036458   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:19 ip-172-31-26-63 kubelet[26234]: E0829 06:46:19.038188   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:60192->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:21 ip-172-31-26-63 kubelet[26234]: I0829 06:46:21.761030   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:22 ip-172-31-26-63 kubelet[26234]: I0829 06:46:22.064465   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:22 ip-172-31-26-63 kubelet[26234]: I0829 06:46:22.064729   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:22 ip-172-31-26-63 kubelet[26234]: I0829 06:46:22.064821   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:46:22 ip-172-31-26-63 kubelet[26234]: E0829 06:46:22.064847   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:23 ip-172-31-26-63 kubelet[26234]: I0829 06:46:23.761027   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:23 ip-172-31-26-63 kubelet[26234]: E0829 06:46:23.920394   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:23 ip-172-31-26-63 kubelet[26234]: W0829 06:46:23.970931   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:23 ip-172-31-26-63 kubelet[26234]: E0829 06:46:23.971046   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:24 ip-172-31-26-63 kubelet[26234]: I0829 06:46:24.063986   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:46:24 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:24 ip-172-31-26-63 kubelet[26234]: I0829 06:46:24.065137   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:24 ip-172-31-26-63 kubelet[26234]: I0829 06:46:24.065269   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:46:24 ip-172-31-26-63 kubelet[26234]: E0829 06:46:24.065297   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:25 ip-172-31-26-63 kubelet[26234]: E0829 06:46:25.006186   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:25 ip-172-31-26-63 kubelet[26234]: E0829 06:46:25.006318   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:25 ip-172-31-26-63 kubelet[26234]: E0829 06:46:25.374930   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60364->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:26 ip-172-31-26-63 kubelet[26234]: E0829 06:46:26.009231   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:26 ip-172-31-26-63 kubelet[26234]: I0829 06:46:26.038419   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:26 ip-172-31-26-63 kubelet[26234]: I0829 06:46:26.040663   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:26 ip-172-31-26-63 kubelet[26234]: E0829 06:46:26.042356   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:60394->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:28 ip-172-31-26-63 kubelet[26234]: E0829 06:46:28.015352   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:46:28 ip-172-31-26-63 kubelet[26234]: W0829 06:46:28.971929   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:28 ip-172-31-26-63 kubelet[26234]: E0829 06:46:28.972190   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:29 ip-172-31-26-63 kubelet[26234]: E0829 06:46:29.077024   26234 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: read tcp 172.31.26.63:60482->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:31 ip-172-31-26-63 kubelet[26234]: E0829 06:46:31.020607   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:46:31 ip-172-31-26-63 kubelet[26234]: E0829 06:46:31.020922   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: I0829 06:46:33.042572   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: I0829 06:46:33.044877   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: E0829 06:46:33.046564   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:60598->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: E0829 06:46:33.920659   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: W0829 06:46:33.973357   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:33 ip-172-31-26-63 kubelet[26234]: E0829 06:46:33.973470   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:34 ip-172-31-26-63 kubelet[26234]: I0829 06:46:34.761029   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: I0829 06:46:35.063836   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: I0829 06:46:35.063929   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: I0829 06:46:35.064041   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: E0829 06:46:35.064068   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: E0829 06:46:35.377027   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60652->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: E0829 06:46:35.377058   26234 event.go:147] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-26-63.154f47c85f2db4c5", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-26-63", UID:"ip-172-31-26-63", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientDisk", Message:"Node ip-172-31-26-63 status is now: NodeHasSufficientDisk", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-26-63"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbed9adeef40126c5, ext:292282340, loc:(*time.Location)(0x5e0a580)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbed9adeef40126c5, ext:292282340, loc:(*time.Location)(0x5e0a580)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}' (retry limit exceeded!)
Aug 29 06:46:35 ip-172-31-26-63 kubelet[26234]: E0829 06:46:35.378713   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60654->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:36 ip-172-31-26-63 kubelet[26234]: E0829 06:46:36.032361   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
Aug 29 06:46:36 ip-172-31-26-63 kubelet[26234]: E0829 06:46:36.487333   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60682->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:36 ip-172-31-26-63 kubelet[26234]: I0829 06:46:36.761019   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:37 ip-172-31-26-63 kubelet[26234]: I0829 06:46:37.063654   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:37 ip-172-31-26-63 kubelet[26234]: I0829 06:46:37.063740   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:37 ip-172-31-26-63 kubelet[26234]: I0829 06:46:37.063821   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:46:37 ip-172-31-26-63 kubelet[26234]: E0829 06:46:37.063846   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:38 ip-172-31-26-63 kubelet[26234]: W0829 06:46:38.974430   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:38 ip-172-31-26-63 kubelet[26234]: E0829 06:46:38.974566   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:40 ip-172-31-26-63 kubelet[26234]: E0829 06:46:40.040829   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:46:40 ip-172-31-26-63 kubelet[26234]: E0829 06:46:40.040859   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:40 ip-172-31-26-63 kubelet[26234]: I0829 06:46:40.047979   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:40 ip-172-31-26-63 kubelet[26234]: I0829 06:46:40.050238   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:40 ip-172-31-26-63 kubelet[26234]: E0829 06:46:40.051746   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:60798->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:41 ip-172-31-26-63 kubelet[26234]: E0829 06:46:41.042901   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:42 ip-172-31-26-63 kubelet[26234]: E0829 06:46:42.045223   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:43 ip-172-31-26-63 kubelet[26234]: E0829 06:46:43.920961   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:43 ip-172-31-26-63 kubelet[26234]: W0829 06:46:43.975518   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:43 ip-172-31-26-63 kubelet[26234]: E0829 06:46:43.975643   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:46 ip-172-31-26-63 kubelet[26234]: E0829 06:46:46.489704   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:60970->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:47 ip-172-31-26-63 kubelet[26234]: I0829 06:46:47.051930   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:47 ip-172-31-26-63 kubelet[26234]: I0829 06:46:47.054763   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:47 ip-172-31-26-63 kubelet[26234]: E0829 06:46:47.057888   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:60996->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:47 ip-172-31-26-63 kubelet[26234]: I0829 06:46:47.761026   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: I0829 06:46:48.063813   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: I0829 06:46:48.063901   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: I0829 06:46:48.063986   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: E0829 06:46:48.064011   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: I0829 06:46:48.761004   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: W0829 06:46:48.976616   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:48 ip-172-31-26-63 kubelet[26234]: E0829 06:46:48.976855   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:49 ip-172-31-26-63 kubelet[26234]: I0829 06:46:49.063516   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:46:49 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:46:49 ip-172-31-26-63 kubelet[26234]: I0829 06:46:49.063598   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:49 ip-172-31-26-63 kubelet[26234]: I0829 06:46:49.063836   26234 kuberuntime_manager.go:767] Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)
Aug 29 06:46:49 ip-172-31-26-63 kubelet[26234]: E0829 06:46:49.063865   26234 pod_workers.go:186] Error syncing pod b2c5de70d739e7cfd9179479dd87a7fe ("kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:46:50 ip-172-31-26-63 kubelet[26234]: E0829 06:46:50.066760   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
Aug 29 06:46:50 ip-172-31-26-63 kubelet[26234]: E0829 06:46:50.066848   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: an error on the server ("") has prevented the request from succeeding (get services)
Aug 29 06:46:51 ip-172-31-26-63 kubelet[26234]: E0829 06:46:51.069232   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:46:53 ip-172-31-26-63 kubelet[26234]: E0829 06:46:53.921259   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:46:53 ip-172-31-26-63 kubelet[26234]: W0829 06:46:53.977921   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:53 ip-172-31-26-63 kubelet[26234]: E0829 06:46:53.978038   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:46:54 ip-172-31-26-63 kubelet[26234]: I0829 06:46:54.058116   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:46:54 ip-172-31-26-63 kubelet[26234]: I0829 06:46:54.060295   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:46:54 ip-172-31-26-63 kubelet[26234]: E0829 06:46:54.062036   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:32940->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:46:56 ip-172-31-26-63 kubelet[26234]: E0829 06:46:56.491967   26234 event.go:212] Unable to write event: 'Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/default/events: read tcp 172.31.26.63:33026->172.31.29.249:6443: read: connection reset by peer' (may retry after sleeping)
Aug 29 06:46:57 ip-172-31-26-63 kubelet[26234]: E0829 06:46:57.080796   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:46:58 ip-172-31-26-63 kubelet[26234]: W0829 06:46:58.978931   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:46:58 ip-172-31-26-63 kubelet[26234]: E0829 06:46:58.979050   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:47:00 ip-172-31-26-63 kubelet[26234]: E0829 06:47:00.088715   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: an error on the server ("") has prevented the request from succeeding (get pods)
Aug 29 06:47:00 ip-172-31-26-63 kubelet[26234]: E0829 06:47:00.089033   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:47:01 ip-172-31-26-63 kubelet[26234]: I0829 06:47:01.062249   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:01 ip-172-31-26-63 kubelet[26234]: I0829 06:47:01.064990   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:47:01 ip-172-31-26-63 kubelet[26234]: E0829 06:47:01.066732   26234 kubelet_node_status.go:103] Unable to register node "ip-172-31-26-63" with API server: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes: read tcp 172.31.26.63:33138->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:47:01 ip-172-31-26-63 kubelet[26234]: E0829 06:47:01.076721   26234 certificate_manager.go:299] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://ip-172-31-29-249.eu-central-1.compute.internal:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: read tcp 172.31.26.63:33142->172.31.29.249:6443: read: connection reset by peer
Aug 29 06:47:01 ip-172-31-26-63 kubelet[26234]: E0829 06:47:01.091630   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: an error on the server ("") has prevented the request from succeeding (get nodes)
Aug 29 06:47:02 ip-172-31-26-63 kubelet[26234]: E0829 06:47:02.094554   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:47:02 ip-172-31-26-63 kubelet[26234]: I0829 06:47:02.761086   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: I0829 06:47:03.063530   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: I0829 06:47:03.063634   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: I0829 06:47:03.063718   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: E0829 06:47:03.063745   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: E0829 06:47:03.097702   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:455: Failed to list *v1.Service: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/services?limit=500&resourceVersion=0: EOF
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: I0829 06:47:03.762516   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: E0829 06:47:03.921441   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: W0829 06:47:03.979963   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:47:03 ip-172-31-26-63 kubelet[26234]: E0829 06:47:03.980082   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: I0829 06:47:04.065057   26234 kuberuntime_manager.go:513] Container {Name:kube-apiserver Image:k8s.gcr.io/kube-apiserver-amd64:v1.11.2 Command:[kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.31.26.63 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:250 scale:-3} d:{Dec:<nil>} s:250m Format:DecimalSI}]} VolumeMounts:[{Name:k8s-certs ReadOnly:true MountPath:/etc/kubernetes/pki SubPath: MountPropagation:<nil>} {Name:ca-certs ReadOnly:true MountPath:/etc/ssl/certs SubPath: MountPropagation:<nil>} {Name:usr-share-ca-certificates ReadOnly:true MountPath:/usr/share/ca-certificates SubPath: MountPropagation:<nil>} {Name:usr-local-share-ca-certificates ReadOnly:true MountPath:/usr/local/share/ca-certificates SubPath: Moun
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: tPropagation:<nil>} {Name:etc-ca-certificates ReadOnly:true MountPath:/etc/ca-certificates SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:6443,Host:172.31.26.63,Scheme:HTTPS,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: I0829 06:47:04.065140   26234 kuberuntime_manager.go:757] checking backoff for container "kube-apiserver" in pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)"
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: E0829 06:47:04.098590   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:464: Failed to list *v1.Node: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: E0829 06:47:04.098961   26234 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dip-172-31-26-63&limit=500&resourceVersion=0: EOF
Aug 29 06:47:04 ip-172-31-26-63 kubelet[26234]: I0829 06:47:04.769053   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:08 ip-172-31-26-63 kubelet[26234]: I0829 06:47:08.066953   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:08 ip-172-31-26-63 kubelet[26234]: I0829 06:47:08.074399   26234 kubelet_node_status.go:79] Attempting to register node ip-172-31-26-63
Aug 29 06:47:08 ip-172-31-26-63 kubelet[26234]: W0829 06:47:08.980988   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:47:08 ip-172-31-26-63 kubelet[26234]: E0829 06:47:08.981116   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:47:13 ip-172-31-26-63 kubelet[26234]: I0829 06:47:13.760993   26234 kubelet_node_status.go:269] Setting node annotation to enable volume controller attach/detach
Aug 29 06:47:13 ip-172-31-26-63 kubelet[26234]: E0829 06:47:13.921943   26234 eviction_manager.go:243] eviction manager: failed to get get summary stats: failed to get node info: node "ip-172-31-26-63" not found
Aug 29 06:47:13 ip-172-31-26-63 kubelet[26234]: W0829 06:47:13.982034   26234 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 29 06:47:13 ip-172-31-26-63 kubelet[26234]: E0829 06:47:13.982241   26234 kubelet.go:2110] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 29 06:47:14 ip-172-31-26-63 kubelet[26234]: I0829 06:47:14.063778   26234 kuberuntime_manager.go:513] Container {Name:etcd Image:k8s.gcr.io/etcd-amd64:3.2.18 Command:[etcd --advertise-client-urls=https://172.31.26.63:2379 --initial-advertise-peer-urls=https://172.31.26.63:2380 --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380 --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379 --listen-peer-urls=https://172.31.26.63:2380 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --key-file=/etc/kubernetes/pki/etcd/server.key --name=ip-172-31-26-63 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:etcd-data ReadOnly:false MountPath:/var/lib/etcd SubPath: MountPropagation:<nil>} {Name:etcd-certs ReadOnly:false MountPath:/etc/kubernetes/pki/etcd SubPath: MountPropagation:<nil>}] VolumeDevices:[] LivenessProbe:&Probe{Handler:Handler{Exec:&ExecAction{Command:[/bin/sh -ec ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key get foo],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:15,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:8,} ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
Aug 29 06:47:14 ip-172-31-26-63 kubelet[26234]: I0829 06:47:14.063889   26234 kuberuntime_manager.go:757] checking backoff for container "etcd" in pod "etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:47:14 ip-172-31-26-63 kubelet[26234]: I0829 06:47:14.063981   26234 kuberuntime_manager.go:767] Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)
Aug 29 06:47:14 ip-172-31-26-63 kubelet[26234]: E0829 06:47:14.064009   26234 pod_workers.go:186] Error syncing pod 35ef8723567aadce7dc2681dbc0bc230 ("etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-ip-172-31-26-63_kube-system(35ef8723567aadce7dc2681dbc0bc230)"
Aug 29 06:47:14 ip-172-31-26-63 kubelet[26234]: W0829 06:47:14.772911   26234 status_manager.go:482] Failed to get status for pod "kube-apiserver-ip-172-31-26-63_kube-system(b2c5de70d739e7cfd9179479dd87a7fe)": Get https://ip-172-31-29-249.eu-central-1.compute.internal:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-26-63: net/http: TLS handshake timeout

Most upvoted comments

@rosti Yes, it works. I'm closing this issue. Thank you!

Alleluia!

The init operation succeeded once [name: “ip-172-31-26-63.eu-central-1.compute.internal”] was added.
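
For reference, here is where the parameter ended up — a minimal sketch of the relevant part of kubeadm-config.yaml; the placement under etcd.local.extraArgs is my assumption based on what worked for me, not something spelled out in the guide:

etcd:
  local:
    extraArgs:
      # pass the node's full private DNS name to etcd as its --name flag
      name: "ip-172-31-26-63.eu-central-1.compute.internal"
      # the listen/advertise/initial-cluster arguments stay as before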

Before I added the “name” parameter, /etc/kubernetes/manifests/etcd.yaml looked like this:

metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.31.26.63:2379
    - --initial-advertise-peer-urls=https://172.31.26.63:2380
    - --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380
    - --listen-client-urls=https://127.0.0.1:2379,https://172.31.26.63:2379
    - --listen-peer-urls=https://172.31.26.63:2380
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --key-file=/etc/kubernetes/pki/etcd/server.key
    - --name=ip-172-31-26-63
    - --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bin/sh
        - -ec
        - ETCDCTL_API=3 etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
          --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
          get foo
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
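
My guess at why the extra argument matters: in the manifest above, --name is the short hostname while the --initial-cluster entry is keyed by the full private DNS name, and as far as I understand etcd requires its --name to match one of the --initial-cluster keys. With the “name” argument added, I would expect the generated command to look roughly like this (a sketch, not verified against kubeadm's source):

spec:
  containers:
  - command:
    - etcd
    # --name now matches the key used in --initial-cluster
    - --name=ip-172-31-26-63.eu-central-1.compute.internal
    - --initial-cluster=ip-172-31-26-63.eu-central-1.compute.internal=https://172.31.26.63:2380
    # (remaining flags unchanged)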

Why did I have to add the “name” parameter? I followed the official guide and I believe my deployment is standard. Is this a bug, or is the official documentation missing this step?

BTW, the setup prints this warning:

[endpoint] WARNING: port specified in api.controlPlaneEndpoint overrides api.bindPort in the controlplane address

Shall I do anything about this?
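
For context, a minimal sketch of the two fields the warning seems to be talking about (field names per the v1alpha2 MasterConfiguration; treating 6443 as bindPort's default is my assumption):

api:
  # the ":6443" here wins when kubeadm builds the control-plane address...
  controlPlaneEndpoint: "ip-172-31-29-249.eu-central-1.compute.internal:6443"
  # ...over this port (left at its default here), hence the warning
  bindPort: 6443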