kops: Nodes fail to join cluster during update to v1.22.3
/kind bug
1. What kops version are you running? The command kops version will display
this information.
Version 1.22.3 (git-241bfeba5931838fd32f2260aff41dd89a585fba)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5", GitCommit:"aea7bbadd2fc0cd689de94a54e5b7b758869d691", GitTreeState:"clean", BuildDate:"2021-09-15T21:10:45Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:32:32Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"linux/amd64"}
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue? Upgrade a (freshly created) cluster from kops v1.22.2 to kops v1.22.3, as sketched below.
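A minimal sketch of the reproduction, assuming the cluster was created with the kops 1.22.2 binary, the binary is then replaced with 1.22.3, and KOPS_STATE_STORE is exported (cluster name as used elsewhere in this report):

kops update cluster --name debug.k8s.xxx --yes
kops rolling-update cluster --name debug.k8s.xxx --yes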
5. What happened after the commands executed?
kops rolling-update cluster --name debug.k8s.xxx --yes
NAME               STATUS       NEEDUPDATE  READY  MIN  TARGET  MAX  NODES
master-eu-west-1a  NeedsUpdate  1           0      1    1       1    1
master-eu-west-1b  NeedsUpdate  1           0      1    1       1    1
master-eu-west-1c  NeedsUpdate  1           0      1    1       1    1
nodes-eu-west-1a   NeedsUpdate  1           0      1    1       18   1
I0118 14:43:09.151938 57091 instancegroups.go:468] Validating the cluster.
I0118 14:43:10.698777 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": system-node-critical pod "canal-sxhzt" is pending.
I0118 14:43:42.276409 57091 instancegroups.go:501] Cluster validated.
I0118 14:43:42.276441 57091 instancegroups.go:309] Tainting 1 node in "master-eu-west-1a" instancegroup.
I0118 14:43:42.329801 57091 instancegroups.go:398] Draining the node: "ip-172-21-23-11.eu-west-1.compute.internal".
WARNING: ignoring DaemonSet-managed Pods: kube-system/canal-97td6, kube-system/ebs-csi-node-gtcz7, kube-system/kops-controller-b5j7s
evicting pod kube-system/ebs-csi-controller-6d77db8bf5-6mrjt
evicting pod kube-system/aws-node-termination-handler-b9dd66b74-k7knr
evicting pod kube-system/cluster-autoscaler-6b59b997d-n9c8f
I0118 14:44:13.766391 57091 instancegroups.go:656] Waiting for 5s for pods to stabilize after draining.
I0118 14:44:18.766825 57091 instancegroups.go:417] deleting node "ip-172-21-23-11.eu-west-1.compute.internal" from kubernetes
I0118 14:44:18.804204 57091 instancegroups.go:589] Stopping instance "i-0ad91624557825b4d", node "ip-172-21-23-11.eu-west-1.compute.internal", in group "master-eu-west-1a.masters.debug.k8s.xxx" (this may take a while).
I0118 14:44:18.954446 57091 instancegroups.go:435] waiting for 15s after terminating instance
I0118 14:44:33.955323 57091 instancegroups.go:468] Validating the cluster.
I0118 14:44:35.620995 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:45:07.067841 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:45:38.815990 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:46:10.294351 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": InstanceGroup "master-eu-west-1a" did not have enough nodes 0 vs 1.
I0118 14:46:42.046865 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:47:13.883391 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:47:45.788972 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
I0118 14:48:17.614813 57091 instancegroups.go:524] Cluster did not pass validation, will retry in "30s": machine "i-0524ca9a89f016541" has not yet joined cluster.
6. What did you expect to happen? A successful rolling update, with the replacement nodes joining the cluster and no validation errors.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
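For this cluster the equivalent command is presumably the following; sensitive values in the output below have been replaced with xxx:

kops get --name debug.k8s.xxx -o yaml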
---
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: debug.k8s.xxx
spec:
  api:
    loadBalancer:
      class: Classic
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  cloudProvider: aws
  configBase: s3://xxx/debug.k8s.xxx
  containerRuntime: containerd
  clusterAutoscaler:
    enabled: true
    balanceSimilarNodeGroups: true
    scaleDownUtilizationThreshold: "0.8"
    skipNodesWithLocalStorage: false
    cpuRequest: "100m"
    memoryRequest: "800Mi"
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-west-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-west-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-west-1c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-eu-west-1a
      name: a
    - encryptedVolume: true
      instanceGroup: master-eu-west-1b
      name: b
    - encryptedVolume: true
      instanceGroup: master-eu-west-1c
      name: c
    memoryRequest: 100Mi
    name: events
  externalPolicies:
    node:
    - arn:aws:iam::xxx:policy/tf-kops-debug-node
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    featureGates:
      TTLAfterFinished: "true"
  kubeControllerManager:
    featureGates:
      TTLAfterFinished: "true"
  kubeDNS:
    nodeLocalDNS:
      enabled: false
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
    cpuCFSQuota: false
  kubernetesApiAccess:
  - 172.16.0.0/22
  - 172.16.12.0/22
  - 172.21.60.165/32
  - 172.21.60.47/32
  kubernetesVersion: 1.22.5
  masterInternalName: api.internal.debug.k8s.xxx
  masterPublicName: api.debug.k8s.xxx
  networkCIDR: 172.21.0.0/16
  networkID: vpc-xxx
  networking:
    canal: {}
  nodeTerminationHandler:
    enabled: true
    enableSQSTerminationDraining: true
    managedASGTag: kubernetes.io/cluster/debug.k8s.xxx
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 172.16.0.0/22
  - 172.21.60.47/32
  subnets:
  - cidr: 172.21.23.0/24
    egress: nat-xxx
    id: subnet-xxx
    name: eu-west-1a
    type: Private
    zone: eu-west-1a
  - cidr: 172.21.24.0/24
    egress: nat-xxx
    id: subnet-61168d17
    name: eu-west-1b
    type: Private
    zone: eu-west-1b
  - cidr: 172.21.25.0/24
    egress: nat-xxx
    id: subnet-xxx
    name: eu-west-1c
    type: Private
    zone: eu-west-1c
  - cidr: 172.21.20.0/24
    id: subnet-xxx
    name: utility-eu-west-1a
    type: Utility
    zone: eu-west-1a
  - cidr: 172.21.21.0/24
    id: subnet-xxx
    name: utility-eu-west-1b
    type: Utility
    zone: eu-west-1b
  - cidr: 172.21.22.0/24
    id: subnet-xxx
    name: utility-eu-west-1c
    type: Utility
    zone: eu-west-1c
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1a
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1a
  role: Master
  subnets:
  - eu-west-1a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1b
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1b
  role: Master
  subnets:
  - eu-west-1b
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: master-eu-west-1c
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.2xlarge
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-west-1c
  role: Master
  subnets:
  - eu-west-1c
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: debug.k8s.xxx
  name: nodes-eu-west-1a
spec:
  additionalSecurityGroups: [sg-xxx]
  cloudLabels:
    Environment: debug
    Owner: cloud-ops
    k8s.io/cluster-autoscaler/enabled: ""
    k8s.io/cluster-autoscaler/debug: ""
  image: 075585003325/Flatcar-stable-3033.2.0-hvm
  machineType: m5.xlarge
  maxSize: 18
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-eu-west-1a
    type: node
  role: Node
  subnets:
  - eu-west-1a
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know? Might be related to issue #13116.
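For reproducibility: the journal excerpt below was read directly on the affected instance. Something along these lines should produce equivalent output (assuming SSH access to the node; the exact invocation used here was not recorded):

journalctl --no-pager

Filtering with journalctl -u kubelet --no-pager narrows it to the kubelet unit, but drops the kernel and systemd lines that appear below.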
Journalctl (partial) log of the instance that is unable to join the cluster (starting at the first error, E0118):
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577519 2224 server.go:1006] "Cloud provider determined current node" nodeName="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577537 2224 server.go:1148] "Using root directory" path="/var/lib/kubelet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577594 2224 kubelet.go:418] "Attempting to sync node with API server"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577607 2224 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577622 2224 file.go:68] "Watching path" path="/etc/kubernetes/manifests"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577633 2224 kubelet.go:290] "Adding apiserver pod source"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.577655 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.578696 2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://127.0.0.1/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-21-23-101.eu-west-1.compute.internal&limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.579233 2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://127.0.0.1/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.581615 2224 kuberuntime_manager.go:245] "Container runtime initialized" containerRuntime="containerd" version="1.5.8" apiVersion="v1alpha2"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: W0118 11:53:58.581906 2224 probe.go:268] Flexvolume plugin directory at /var/lib/kubelet/volumeplugins/ does not exist. Recreating.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582065 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/gce-pd"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582085 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/cinder"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582098 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-disk"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582109 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/azure-file"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582121 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/vsphere-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582137 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582149 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582160 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582172 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582185 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582207 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582219 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/glusterfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582233 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/rbd"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582246 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/quobyte"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582257 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/cephfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582269 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582283 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582294 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/flocker"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582306 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582318 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582340 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582352 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582363 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/storageos"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582398 2224 plugins.go:641] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582508 2224 server.go:1213] "Started kubelet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582585 2224 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.582808 2224 server.go:176] "Starting to listen read-only" address="0.0.0.0" port=10255
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.583148 2224 csi_plugin.go:1057] Failed to contact API server when waiting for CSINode publishing: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csinodes/ip-172-21-23-101.eu-west-1.compute.internal": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.583158 2224 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-21-23-101.eu-west-1.compute.internal.16cb5b446edbc9bd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-21-23-101.eu-west-1.compute.internal", UID:"ip-172-21-23-101.eu-west-1.compute.internal", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-21-23-101.eu-west-1.compute.internal"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc071c875a2b7edbd, ext:7003076296, loc:(*time.Location)(0x77b0760)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc071c875a2b7edbd, ext:7003076296, loc:(*time.Location)(0x77b0760)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://127.0.0.1/api/v1/namespaces/default/events": dial tcp 127.0.0.1:443: connect: connection refused'(may retry after sleeping)
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.583522 2224 server.go:409] "Adding debug handlers to kubelet server"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585799 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.585807 2224 cri_stats_provider.go:372] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.585836 2224 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585850 2224 volume_manager.go:289] "The desired_state_of_world populator starts"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585861 2224 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.585942 2224 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.586164 2224 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.586728 2224 kubelet.go:2337] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.587253 2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://127.0.0.1/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.587723 2224 factory.go:137] Registering containerd factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.587802 2224 factory.go:55] Registering systemd factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603212 2224 factory.go:372] Registering Docker factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603268 2224 factory.go:101] Registering Raw factory
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603302 2224 manager.go:1203] Started watching for new ooms in manager
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.603718 2224 manager.go:301] Starting recovery of all containers
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.609970 2224 manager.go:306] Recovery completed
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.621301 2224 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643154 2224 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643177 2224 status_manager.go:158] "Starting to sync pod status with apiserver"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.643191 2224 kubelet.go:1967] "Starting kubelet main sync loop"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.643245 2224 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.643849 2224 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://127.0.0.1/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649012 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649035 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649044 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649053 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649060 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649068 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.649076 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686184 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686211 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.686212 2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686221 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686235 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686243 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686252 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.686259 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.743353 2224 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.786359 2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.786687 2224 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845244 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845283 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845295 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845244 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845357 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845373 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845397 2224 kubelet_node_status.go:71] "Attempting to register node" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845729 2224 cpu_manager.go:209] "Starting CPU manager" policy="none"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.845743 2224 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://127.0.0.1/api/v1/nodes\": dial tcp 127.0.0.1:443: connect: connection refused" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845748 2224 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.845767 2224 state_mem.go:36] "Initialized new in-memory state store"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.848752 2224 policy_none.go:49] "None policy: Start"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.849077 2224 memory_manager.go:168] "Starting memorymanager" policy="None"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.849099 2224 state_mem.go:35] "Initializing new in-memory state store"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-besteffort.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880211 2224 manager.go:245] "Starting Device Plugin manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880272 2224 manager.go:609] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880451 2224 manager.go:287] "Serving device plugin registration server on socket" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880530 2224 plugin_watcher.go:52] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880622 2224 plugin_manager.go:112] "The desired_state_of_world populator (plugin watcher) starts"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.880636 2224 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.881013 2224 eviction_manager.go:255] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.886884 2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944210 2224 kubelet.go:2053] "SyncLoop ADD" source="file" pods=[kube-system/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal kube-system/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal kube-system/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal]
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944264 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944297 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944315 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944326 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944335 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944344 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944353 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.944361 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960321 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960353 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960365 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960434 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960461 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960472 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960482 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960491 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960498 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960507 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960514 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960606 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960628 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960642 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960651 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960659 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960669 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.960676 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975731 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975773 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975790 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975881 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975911 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975923 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975932 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975940 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975947 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975956 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.975963 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976076 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976096 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976109 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976122 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976133 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976145 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976156 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976164 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976191 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976207 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.976682 2224 status_manager.go:601] "Failed to get status for pod" podUID=9eb4446c47f04b03ac89adf2bdc97326 pod="kube-system/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod9eb4446c47f04b03ac89adf2bdc97326.slice.
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:58.987327 2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987441 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987496 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987534 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987587 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvkapi\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987634 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-secrets\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987674 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987711 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987752 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987825 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987874 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987909 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.987972 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988027 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988066 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988103 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetesca\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988138 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvsshproxy\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988172 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.988204 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991847 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991877 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991881 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991889 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991905 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991919 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.991977 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992022 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992033 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992042 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992050 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992058 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992066 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992067 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992082 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992085 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992101 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992114 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992125 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992133 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992141 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:58 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:58.992297 2224 status_manager.go:601] "Failed to get status for pod" podUID=6a8ab1f587e4e906d109d5c2ce7aeaec pod="kube-system/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod6a8ab1f587e4e906d109d5c2ce7aeaec.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008271 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008299 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008310 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008399 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008424 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008438 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008449 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008458 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008465 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008474 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008480 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008481 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008497 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008507 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008518 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008526 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008534 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008579 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008631 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008664 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.008682 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.009171 2224 status_manager.go:601] "Failed to get status for pod" podUID=b3a03e31b3a1405e5ef70661b08e2e1d pod="kube-system/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024601 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024631 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024641 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024729 2224 topology_manager.go:200] "Topology Admit Handler"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024755 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024766 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024775 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024783 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024791 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024810 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024818 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024893 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024916 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024931 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024931 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024944 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024955 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024971 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024956 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024984 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.024994 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.025376 2224 status_manager.go:601] "Failed to get status for pod" podUID=207483a4a41b602af202284e46394181 pod="kube-system/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-podb3a03e31b3a1405e5ef70661b08e2e1d.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod207483a4a41b602af202284e46394181.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042387 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042423 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042439 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042463 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042486 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042501 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042713 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042734 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042747 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042759 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042769 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042784 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042796 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.042826 2224 status_manager.go:601] "Failed to get status for pod" podUID=1307b8791492862e49797fda5735eae1 pod="kube-system/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046656 2224 kubelet_node_status.go:362] "Setting node annotation to enable volume controller attach/detach"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046682 2224 kubelet_node_status.go:410] "Adding label from cloud provider" labelKey="beta.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046692 2224 kubelet_node_status.go:412] "Adding node label from cloud provider" labelKey="node.kubernetes.io/instance-type" labelValue="m5.2xlarge"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046701 2224 kubelet_node_status.go:423] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046709 2224 kubelet_node_status.go:425] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/zone" labelValue="eu-west-1a"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046717 2224 kubelet_node_status.go:429] "Adding node label from cloud provider" labelKey="failure-domain.beta.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.046725 2224 kubelet_node_status.go:431] "Adding node label from cloud provider" labelKey="topology.kubernetes.io/region" labelValue="eu-west-1"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057650 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057681 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.057692 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.058045 2224 status_manager.go:601] "Failed to get status for pod" podUID=1eb5c698134cf5bd61561bc378175f09 pod="kube-system/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal" err="Get \"https://127.0.0.1/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\": dial tcp 127.0.0.1:443: connect: connection refused"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061139 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientMemory"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061178 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasNoDiskPressure"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061194 2224 kubelet_node_status.go:554] "Recording event message for node" node="ip-172-21-23-101.eu-west-1.compute.internal" event="NodeHasSufficientPID"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.061217 2224 kubelet_node_status.go:71] "Attempting to register node" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod1307b8791492862e49797fda5735eae1.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal systemd[1]: Created slice libcontainer container kubepods-burstable-pod1eb5c698134cf5bd61561bc378175f09.slice.
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.087739 2224 kubelet.go:2412] "Error getting node" err="node \"ip-172-21-23-101.eu-west-1.compute.internal\" not found"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088846 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088884 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088910 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcpkitls\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088934 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvkcm\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-srvkcm\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088948 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "usrshareca-certificates" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-usrshareca-certificates") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088959 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volplugins\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-volplugins\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088953 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "pki" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-pki") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.088983 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"pki\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089018 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "pki" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-pki") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089024 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089047 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcpkica-trust" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkica-trust") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089096 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"srvsshproxy\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089120 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "srvsshproxy" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvsshproxy") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089155 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-kubeconfig\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089186 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-logfile\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089226 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlibkcm\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-varlibkcm\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089260 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089296 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "run" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-run") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089326 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089414 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"varlogetcd\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089418 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-varlogetcd") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089462 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089483 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "varlogetcd" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-varlogetcd") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089492 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"healthcheck-secrets\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089513 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-logfile\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089566 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usrshareca-certificates\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-usrshareca-certificates\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089571 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "logfile" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-logfile") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089591 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "healthcheck-secrets" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-healthcheck-secrets") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089600 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-cloudconfig\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089637 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089660 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srvscheduler\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-srvscheduler\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089672 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "rootfs" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-rootfs") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089691 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run\") pod \"etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"9eb4446c47f04b03ac89adf2bdc97326\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089725 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs\") pod \"etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"6a8ab1f587e4e906d109d5c2ce7aeaec\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089736 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "run" (UniqueName: "kubernetes.io/host-path/9eb4446c47f04b03ac89adf2bdc97326-run") pod "etcd-manager-events-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "9eb4446c47f04b03ac89adf2bdc97326")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089749 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089774 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "rootfs" (UniqueName: "kubernetes.io/host-path/6a8ab1f587e4e906d109d5c2ce7aeaec-rootfs") pod "etcd-manager-main-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "6a8ab1f587e4e906d109d5c2ce7aeaec")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089780 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcssl" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcssl") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089775 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"etcpkitls\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089797 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "etcpkitls" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-etcpkitls") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089818 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"kubernetesca\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089838 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logfile\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-logfile\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089856 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "kubernetesca" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-kubernetesca") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089863 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varlibkubescheduler\" (UniqueName: \"kubernetes.io/host-path/1eb5c698134cf5bd61561bc378175f09-varlibkubescheduler\") pod \"kube-scheduler-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1eb5c698134cf5bd61561bc378175f09\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089888 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"cloudconfig\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089908 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"modules\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-modules\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089914 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "cloudconfig" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-cloudconfig") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089927 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptableslock\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-iptableslock\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089949 2224 reconciler.go:269] "operationExecutor.MountVolume started for volume \"srvkapi\" (UniqueName: \"kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi\") pod \"kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"b3a03e31b3a1405e5ef70661b08e2e1d\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089970 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-hosts\" (UniqueName: \"kubernetes.io/host-path/1307b8791492862e49797fda5735eae1-ssl-certs-hosts\") pod \"kube-proxy-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"1307b8791492862e49797fda5735eae1\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.089988 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcssl\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcssl\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090001 2224 operation_generator.go:713] MountVolume.SetUp succeeded for volume "srvkapi" (UniqueName: "kubernetes.io/host-path/b3a03e31b3a1405e5ef70661b08e2e1d-srvkapi") pod "kube-apiserver-ip-172-21-23-101.eu-west-1.compute.internal" (UID: "b3a03e31b3a1405e5ef70661b08e2e1d")
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090010 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcpkica-trust\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-etcpkica-trust\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: I0118 11:53:59.090034 2224 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cabundle\" (UniqueName: \"kubernetes.io/host-path/207483a4a41b602af202284e46394181-cabundle\") pod \"kube-controller-manager-ip-172-21-23-101.eu-west-1.compute.internal\" (UID: \"207483a4a41b602af202284e46394181\") "
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.179260 2224 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://127.0.0.1/api/v1/nodes\": dial tcp 127.0.0.1:443: connect: connection refused" node="ip-172-21-23-101.eu-west-1.compute.internal"
Jan 18 11:53:59 ip-172-21-23-101.eu-west-1.compute.internal kubelet[2224]: E0118 11:53:59.187695 2224 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: Get "https://127.0.0.1/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-21-23-101.eu-west-1.compute.internal?timeout=10s": dial tcp 127.0.0.1:443: connect: connection refused
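The tail of the log above is the key symptom: the pod status updates, the node registration (kubelet_node_status.go:93) and the node lease all fail with "connection refused" on 127.0.0.1:443, i.e. the local kube-apiserver never comes up on the replacement control-plane instance, which in practice usually means etcd-manager/etcd on that instance is not healthy. For anyone debugging the same hang, here is a minimal sketch of commands to run on the stuck instance to narrow it down (assumptions: containerd with crictl installed, and the usual kops log locations /var/log/etcd.log and /var/log/etcd-events.log; adjust if your setup differs):

# Is anything answering on the local API endpoint the kubelet is dialing?
curl -sk --connect-timeout 2 https://127.0.0.1/healthz ; echo

# Did the control-plane static pods start at all? (assumes containerd + crictl)
crictl ps -a | grep -E 'kube-apiserver|etcd'

# etcd-manager and node bootstrap output (paths/units are the usual kops defaults)
tail -n 50 /var/log/etcd.log /var/log/etcd-events.log
journalctl -u kops-configuration -u kubelet --no-pager | tail -n 100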
About this issue
- State: closed
- Created 2 years ago
- Comments: 17 (13 by maintainers)
There are many. See https://testgrid.k8s.io/kops-misc
But for various reasons they didn’t catch this one. We plan on remedying that.