minikube: Docker build hangs on apt-key add command

For some reason Minikube is unable to build the following Dockerfile:

FROM ruby:2.4.5-jessie
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
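
As a point of reference (not verified in this environment), a minimal sketch of the same Dockerfile that sidesteps apt-key by writing the dearmored key straight into /etc/apt/trusted.gpg.d, assuming gnupg and wget are present in the base image:

FROM ruby:2.4.5-jessie
# untested sketch: import the PostgreSQL key without calling apt-key
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc \
    | gpg --dearmor > /etc/apt/trusted.gpg.d/postgresql.gpg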

The build hangs for 10+ minutes on the RUN statement. SSHing into the machine and checking the top CPU consumers

minikube ssh
top -o %CPU

shows that apt-config shell APT_DIR Dir is consuming 100% of one CPU core.
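
For completeness, a rough way to pin down the runaway process from inside the VM (a sketch; pgrep and /proc inspection are assumed to be available in the minikube VM, which I have not double-checked):

minikube ssh
# find the PID of the spinning apt-config invocation (container processes are
# visible from the VM's PID namespace)
PID=$(pgrep -f 'apt-config shell' | head -n1)
# check whether it is blocked in a syscall or busy-looping in user space
cat /proc/$PID/wchan; echo
head -n 5 /proc/$PID/status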

Building the same Dockerfile in Docker for Mac finishes after ~10 seconds 😅.
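(For that comparison I point the Docker CLI back at Docker for Mac in the same shell before rebuilding; the --unset flag of minikube docker-env, which prints the matching unset commands, is assumed here.)

eval $(minikube docker-env --unset)
time docker build --tag test -f Dockerfile.test tmp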

The exact command to reproduce the issue:

eval $(minikube docker-env)
docker build --tag test -f Dockerfile.test tmp
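
For a fully self-contained reproduction, the two-line Dockerfile above just needs to be written next to an (empty) tmp build context first; a sketch using the same file names as the build command above:

mkdir -p tmp
cat > Dockerfile.test <<'EOF'
FROM ruby:2.4.5-jessie
RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
EOF
eval $(minikube docker-env)
docker build --tag test -f Dockerfile.test tmp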

The full output of the command that failed:

❯ docker build --tag test -f Dockerfile.test tmp
[+] Building 131.5s (4/5)
 => [internal] load .dockerignore                                                                                                                                                  0.0s
 => => transferring context: 2B                                                                                                                                                    0.0s
 => [internal] load build definition from Dockerfile.test                                                                                                                          0.0s
 => => transferring dockerfile: 160B                                                                                                                                               0.0s
 => [internal] load metadata for docker.io/library/ruby:2.4.5-jessie                                                                                                               0.0s
 => CACHED [1/2] FROM docker.io/library/ruby:2.4.5-jessie                                                                                                                          0.0s
 => [2/2] RUN wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -                                                                               548.7s

The output of the minikube logs command:

==> Docker <== – Logs begin at Thu 2019-12-05 12:44:31 UTC, end at Thu 2019-12-05 12:54:11 UTC. – Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.708056543Z” level=info msg=“ClientConn switching balancer to "pick_first"” module=grpc Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.739956748Z” level=info msg=“[graphdriver] using prior storage driver: overlay2” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915435670Z” level=warning msg=“Your kernel does not support cgroup blkio weight” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915469737Z” level=warning msg=“Your kernel does not support cgroup blkio weight_device” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915477700Z” level=warning msg=“Your kernel does not support cgroup blkio throttle.read_bps_device” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915482343Z” level=warning msg=“Your kernel does not support cgroup blkio throttle.write_bps_device” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915487017Z” level=warning msg=“Your kernel does not support cgroup blkio throttle.read_iops_device” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915491379Z” level=warning msg=“Your kernel does not support cgroup blkio throttle.write_iops_device” Dec 05 12:44:47 development dockerd[5120]: time=“2019-12-05T12:44:47.915711005Z” level=info msg=“Loading containers: start.” Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.081313544Z” level=info msg=“Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address” Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.117579198Z” level=info msg=“Loading containers: done.” Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.166360148Z” level=info msg=“Docker daemon” commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5 Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.166892133Z” level=info msg=“Daemon has completed initialization” Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.196045103Z” level=info msg=“API listen on /var/run/docker.sock” Dec 05 12:44:48 development dockerd[5120]: time=“2019-12-05T12:44:48.196058890Z” level=info msg=“API listen on [::]:2376” Dec 05 12:44:48 development systemd[1]: Started Docker Application Container Engine. 
Dec 05 12:45:15 development dockerd[5120]: time=“2019-12-05T12:45:15.817744337Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/85ca1c3578115b91d4d2ceca23900aaf60e442c0c80a694f655b73ff57958bf1/shim.sock” debug=false pid=6499 Dec 05 12:45:15 development dockerd[5120]: time=“2019-12-05T12:45:15.819034981Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/c4769f241eaac216242aa8235ee684b6205977748c1d7167cbedf102291b1e80/shim.sock” debug=false pid=6503 Dec 05 12:45:15 development dockerd[5120]: time=“2019-12-05T12:45:15.845783570Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/ed6e159fe61287c5f9084f1943495d70d436f83dba45675a6ebbde8a5d7aa8ce/shim.sock” debug=false pid=6536 Dec 05 12:45:15 development dockerd[5120]: time=“2019-12-05T12:45:15.964567522Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/0a9fc7d0f1751b09d2525036dcb1850dfcb8caeca8872a2f13b34ed1b195061a/shim.sock” debug=false pid=6571 Dec 05 12:45:15 development dockerd[5120]: time=“2019-12-05T12:45:15.977707008Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/a1c237c0ccd0c7c9eeecf355a8ffc6508429072f311e96b645a9f4f2fd676c78/shim.sock” debug=false pid=6592 Dec 05 12:45:16 development dockerd[5120]: time=“2019-12-05T12:45:16.291208859Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/8ae6092946dfa96e38bb74cc8079617bc9345ba56550feeee0a0e99d40a58b13/shim.sock” debug=false pid=6773 Dec 05 12:45:16 development dockerd[5120]: time=“2019-12-05T12:45:16.295685842Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/0f1f51e87243e57ace8955d2501f14e905723289749a1abbe9795dac755d2d3a/shim.sock” debug=false pid=6789 Dec 05 12:45:16 development dockerd[5120]: time=“2019-12-05T12:45:16.296711459Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/47682076abebdf6c6f6df2590598aca77912f1744e45e71fa980e4e6c163bf99/shim.sock” debug=false pid=6790 Dec 05 12:45:16 development dockerd[5120]: time=“2019-12-05T12:45:16.297765253Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/2635c7184c31bda796f52f0ccb6a59c6dca998deaa2e6f58f978bc7e67bd52f9/shim.sock” debug=false pid=6794 Dec 05 12:45:16 development dockerd[5120]: time=“2019-12-05T12:45:16.315096788Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/c36f5378a1b87d7fe80f647e6a139f637c4ceaccc2eb07a5f1018c1e6b82e960/shim.sock” debug=false pid=6850 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.718842185Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/c7fbb0634b303b455a3a264336e4192f5d92449d6bdfb946a9c109e9a7c3efcb/shim.sock” debug=false pid=7484 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.728681763Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/870d789b8d1f4dfbfcd588fb28a73f76c106a08217c0457431f5c333c9bc5f2e/shim.sock” debug=false pid=7494 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.781380891Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/11a14528716739b3d9a45a50df3a08acb9025d48e562bfafefec75bff3616692/shim.sock” debug=false pid=7525 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.877279687Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/2f825d31868f22be426984cf1a22ef85708563176264e14f28f06f58f93a26a5/shim.sock” 
debug=false pid=7561 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.930670293Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/1ce7ab3aa409a217b54adee61f49b8b09a4a3c01894d61fa25ac866b76124152/shim.sock” debug=false pid=7580 Dec 05 12:45:21 development dockerd[5120]: time=“2019-12-05T12:45:21.931972142Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/e8992e1c5c8dd33e23be094d5b2cc0ad334dc364ac5fe08d89944115c8f1f4f5/shim.sock” debug=false pid=7586 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.023267238Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8/shim.sock” debug=false pid=7828 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.049545775Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/edbb424ad0fc6c678a1e8c6b3b33461ab41b1213683cbe62f2257c920ee68115/shim.sock” debug=false pid=7866 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.144109428Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/91aee06c8fbf3e9b432e2f57864b8e19b33a8e502a16c5c79d3c54517f3ae73d/shim.sock” debug=false pid=7964 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.232561119Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/ae491c71db38f59a84303079746e5bb786560b6460f148019f12bc224902c144/shim.sock” debug=false pid=8040 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.251713951Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/94a394904f6c8e3e85f84370a5a6905afa12b21187f6eb4f8f1088c9fc624423/shim.sock” debug=false pid=8056 Dec 05 12:45:23 development dockerd[5120]: time=“2019-12-05T12:45:23.275747438Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/5cdd5a2f2515acbe9d04425bc961276a84d215d1a0f215e352f317f072e002ec/shim.sock” debug=false pid=8080 Dec 05 12:45:36 development dockerd[5120]: time=“2019-12-05T12:45:36.172039476Z” level=info msg=“parsed scheme: ""” module=grpc Dec 05 12:45:36 development dockerd[5120]: time=“2019-12-05T12:45:36.172094858Z” level=info msg=“scheme "" not registered, fallback to default scheme” module=grpc Dec 05 12:45:36 development dockerd[5120]: time=“2019-12-05T12:45:36.172120953Z” level=info msg=“ccResolverWrapper: sending update to cc: {[{ 0 <nil>}] <nil>}” module=grpc Dec 05 12:45:36 development dockerd[5120]: time=“2019-12-05T12:45:36.172133546Z” level=info msg=“ClientConn switching balancer to "pick_first"” module=grpc Dec 05 12:45:44 development dockerd[5120]: time=“2019-12-05T12:45:44.732225009Z” level=warning msg=“grpc: addrConn.createTransport failed to connect to { 0 <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". 
Reconnecting…” module=grpc Dec 05 12:45:53 development dockerd[5120]: time=“2019-12-05T12:45:53.685597143Z” level=info msg=“shim reaped” id=d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8 Dec 05 12:45:53 development dockerd[5120]: time=“2019-12-05T12:45:53.696066866Z” level=info msg=“ignoring event” module=libcontainerd namespace=moby topic=/tasks/delete type=“*events.TaskDelete” Dec 05 12:45:53 development dockerd[5120]: time=“2019-12-05T12:45:53.696202166Z” level=warning msg=“d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d68791efff234f5aac3b6f2c968c111b697ccd24551b80f49339147769ea3bc8/mounts/shm, flags: 0x2: no such file or directory” Dec 05 12:46:08 development dockerd[5120]: time=“2019-12-05T12:46:08.101081701Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/7684a0b5cb3b5bde9f5b4cfcac1adabdcbb78dc5e4f7ccd6dc4144ec15bb099e/shim.sock” debug=false pid=9335 Dec 05 12:47:25 development dockerd[5120]: time=“2019-12-05T12:47:25.156405999Z” level=info msg=“parsed scheme: ""” module=grpc Dec 05 12:47:25 development dockerd[5120]: time=“2019-12-05T12:47:25.156454708Z” level=info msg=“scheme "" not registered, fallback to default scheme” module=grpc Dec 05 12:47:25 development dockerd[5120]: time=“2019-12-05T12:47:25.156474300Z” level=info msg=“ccResolverWrapper: sending update to cc: {[{ 0 <nil>}] <nil>}” module=grpc Dec 05 12:47:25 development dockerd[5120]: time=“2019-12-05T12:47:25.156482571Z” level=info msg=“ClientConn switching balancer to "pick_first"” module=grpc Dec 05 12:47:32 development dockerd[5120]: time=“2019-12-05T12:47:32.851185772Z” level=warning msg=“grpc: addrConn.createTransport failed to connect to { 0 <nil>}. Err :connection error: desc = "transport: Error while dialing only one connection allowed". Reconnecting…” module=grpc Dec 05 12:48:01 development dockerd[5120]: time=“2019-12-05T12:48:01.859546266Z” level=info msg=“parsed scheme: ""” module=grpc Dec 05 12:48:01 development dockerd[5120]: time=“2019-12-05T12:48:01.859593465Z” level=info msg=“scheme "" not registered, fallback to default scheme” module=grpc Dec 05 12:48:01 development dockerd[5120]: time=“2019-12-05T12:48:01.859613747Z” level=info msg=“ccResolverWrapper: sending update to cc: {[{ 0 <nil>}] <nil>}” module=grpc Dec 05 12:48:01 development dockerd[5120]: time=“2019-12-05T12:48:01.859622269Z” level=info msg=“ClientConn switching balancer to "pick_first"” module=grpc Dec 05 12:49:00 development dockerd[5120]: time=“2019-12-05T12:49:00.409104917Z” level=info msg=“shim containerd-shim started” address=“/containerd-shim/moby/8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7/shim.sock” debug=false pid=12593 Dec 05 12:54:05 development dockerd[5120]: time=“2019-12-05T12:54:05.182825748Z” level=info msg=“shim reaped” id=8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7 Dec 05 12:54:05 development dockerd[5120]: time=“2019-12-05T12:54:05.192955477Z” level=info msg=“ignoring event” module=libcontainerd namespace=moby topic=/tasks/delete type=“*events.TaskDelete” Dec 05 12:54:05 development dockerd[5120]: time=“2019-12-05T12:54:05.258957007Z” level=warning msg=“8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/8bd7e3fbeb53f64ca862be5663dd0f8beeb52abb0906e9bb6f6f413fbd645cc7/mounts/shm, flags: 0x2: no such file or directory”

==> container status <==
CONTAINER      IMAGE                                                                                                                 CREATED            STATE    NAME                      ATTEMPT  POD ID
7684a0b5cb3b5  4689081edb103                                                                                                         8 minutes ago      Running  storage-provisioner       2        870d789b8d1f4
5cdd5a2f2515a  c2a449c9f8344                                                                                                         8 minutes ago      Running  registry                  1        e8992e1c5c8dd
94a394904f6c8  60dc18151daf8                                                                                                         8 minutes ago      Running  registry-proxy            1        2f825d31868f2
ae491c71db38f  70f311871ae12                                                                                                         8 minutes ago      Running  coredns                   1        11a1452871673
91aee06c8fbf3  70f311871ae12                                                                                                         8 minutes ago      Running  coredns                   1        1ce7ab3aa409a
edbb424ad0fc6  22243b9b56e72                                                                                                         8 minutes ago      Running  kube-proxy                1        c7fbb0634b303
d68791efff234  4689081edb103                                                                                                         8 minutes ago      Exited   storage-provisioner       1        870d789b8d1f4
c36f5378a1b87  82be4be24fb6b                                                                                                         8 minutes ago      Running  kube-scheduler            1        a1c237c0ccd0c
2635c7184c31b  f691d6df3b823                                                                                                         8 minutes ago      Running  kube-controller-manager   1        0a9fc7d0f1751
8ae6092946dfa  44ebc8208c5da                                                                                                         8 minutes ago      Running  kube-apiserver            1        ed6e159fe6128
47682076abebd  303ce5db0e90d                                                                                                         8 minutes ago      Running  etcd                      1        c4769f241eaac
0f1f51e87243e  bd12a212f9dcb                                                                                                         8 minutes ago      Running  kube-addon-manager        1        85ca1c3578115
6c5b63693fcb0  70f311871ae12                                                                                                         About an hour ago  Exited   coredns                   0        35d2a57173a45
defc35ec54189  gcr.io/google_containers/kube-registry-proxy@sha256:1040f25a5273de0d72c54865a8efd47e3292de9fb8e5353e3fa76736b854f2da  20 hours ago       Exited   registry-proxy            0        3b946debea9fe
462eab8d11fda  registry.hub.docker.com/library/registry@sha256:5eaafa2318aa0c4c52f95077c2a68bed0b13f6d2b464835723d4de1484052299      20 hours ago       Exited   registry                  0        30474e27dbe27
8a6f55576b5b6  70f311871ae12                                                                                                         20 hours ago       Exited   coredns                   0        613287d5fd25f
c4dff44ab5195  22243b9b56e72                                                                                                         20 hours ago       Exited   kube-proxy                0        a9baf46498339
bdb0b95bd3266  k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616                 20 hours ago       Exited   kube-addon-manager        0        5491c816137c7
21128a5c51562  f691d6df3b823                                                                                                         20 hours ago       Exited   kube-controller-manager   0        77aa0eaf4d814
52476de3d24ed  82be4be24fb6b                                                                                                         20 hours ago       Exited   kube-scheduler            0        80ea25f3239f0
7d01a6ad09d12  303ce5db0e90d                                                                                                         20 hours ago       Exited   etcd                      0        36cb23998f749
755c9793968a5  44ebc8208c5da                                                                                                         20 hours ago       Exited   kube-apiserver            0        c3025018c3ebf

==> coredns [“6c5b63693fcb”] <== .:53 E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1[44NFO3]/ applu/vi1n/srevlioces?resourceVernsiognu=35t6i&oimeout=D65m = se&2t3i5feccu36Seconds=405&watch=t6r966: di7a6816ccpd 903.4e6bc07. 1:443: connect: coCoreDNS-1r.6f.5s ed linux1/a0md614,: go:1.1.3.41, 5c2 fd 1 1 2r eflec1t205 1g2::43:3]. 0pk8g13m o d / k 8 s1. iroe/fclleicetnotr-.go:283] p-kg/mod6/k8s.5io0/cl7i8ed2-agfo7@9v0.a0b.0-o20l1s/6c2008e51r01-l78cd2oaf.79o2b9ab/ toaols/cache/arecfl ecvt1.Naorespacoe::9 G:e F aileds to 1wa.t9ch 0v.11.:E4n4d3popiit/sv:1 /Gneam ehstptpses/?1r0e.s9o6u.rc.e1V:e4r4s3i/ pci/nn/ecntdipooni nrtesf?urseesdo urceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091813 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=7m18s&timeoutSeconds=438&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091914 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=6m45s&timeoutSeconds=405&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091958 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091958 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091958 1 reflector.go:283] 
pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=6m43s&timeoutSeconds=403&watch=true: dial tcp 10.96.0.1:443: connect: connection refused [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s

==> coredns [“8a6f55576b5b”] <== .:53 [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 E1205 12:43:40.091271 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m2Cs&tioeeDNS-1.6.5s=509&watch=true: dial tcp 10.9l.n0x.a1m:d64, goo.n3.4c tc:2fdonnectiobn2 refused E120E511025: 423::443.40.09112279 0 1 1rerflelctcoro.go🅾️32 p3kg /mkdg/kmosd./ko8/clie/ntlioe@nvt0-0.o0@2001900.6-20008951602-78d521f712bab8d2oaofls/2acaheb/rtofolsc/tache/roeflectoor.9go 98a: lFai lted tot cwha tch *ve1.Nameespaece:hGtettp sht/tp1s:.//1.0…96.043:/a4p3/apiuv1/na=7m1acss&?triemsoorcetVSeercsoindns==14281&timeouwatmh=4tr&ut:idieao ttcS 1c0.n6d0.=143444&3:a cohnetcr:u c odinalc iocnpr f1u0.e9d6 .0.1:443: 1c05n1n2:4t::0.09n12e1c tio n 1 resledctor.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091271 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=356&timeout=7m1s&timeoutSeconds=421&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091314 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=79681&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091290 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091290 1 reflector.go:283] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused E1205 12:43:40.091290 1 reflector.go:283] 
pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to watch *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?resourceVersion=148&timeout=7m14s&timeoutSeconds=434&watch=true: dial tcp 10.96.0.1:443: connect: connection refused [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/health: Going into lameduck mode for 5s

==> coredns [“91aee06c8fbf”] <== .:53 E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, got1c.1 1…4,6 c0.1f:d443: i/bo timeout NFO] plEu12in/r1e2:dy5: 53.il17 1w4a t i n g onre lekctberrnget:e1s5] pk[/INoFO/]k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/ pllusg/can/hree/arey: lSetitor wgoit98n:g Fan:led" ktu lirtnete1.S" rvi[eI:N FOet httlpusg:/n/10ea96y:0.S:443/alli /wv1i/sierg iocne ?"lkimie=rn00trees"o urce12e0rs io2n:405:53i.a516 5tc9p 1 . 6.0.1r4ac3e. go:o 2]imeruat e[2115581934]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512129852 +0000 UTC m=+0.147295767) (total time: 30.004389118s): Trace[2115581934]: [30.004389118s] [30.004389118s] END E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516658 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1205 12:45:53.516594 1 trace.go:82] Trace[51800909]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512079575 +0000 UTC m=+0.147245467) (total time: 30.004349923s): Trace[51800909]: [30.004349923s] [30.004349923s] END E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516748 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1205 12:45:53.516852 1 trace.go:82] Trace[1524753788]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512082897 +0000 UTC m=+0.147248804) (total time: 30.004703112s): Trace[1524753788]: 
[30.004703112s] [30.004703112s] END E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.517014 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> coredns [“ae491c71db38”] <== E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https.//10.96.0.1:443:ap5/v31/endpoints?limit=500&r[sIourceVerpilu=0g: nia/l tlpoa0.:96.0.1:4i4n: ico tnimeoutration MD5 = 4e235fcc3696966e76816bcd9034ebc7 CoreDNS-1.6.5 linux/amd64, go1.13.4, c2fd1b2 [INFO] plugin/ready: Still waiting on: “kubernetes” [INFO] plugin/ready: Still waiting on: “kubernetes” [INFO] plugin/ready: Still waiting on: “kubernetes” I1205 12:45:53.516577 1 trace.go:82] Trace[180589227]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512115115 +0000 UTC m=+0.071956340) (total time: 30.004223473s): Trace[180589227]: [30.004223473s] [30.004223473s] END E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516639 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1205 12:45:53.516610 1 trace.go:82] Trace[280936051]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512106664 +0000 UTC m=+0.071947880) (total time: 30.004463073s): Trace[280936051]: [30.004463073s] [30.004463073s] END E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516664 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I1205 12:45:53.516711 1 trace.go:82] Trace[2139292546]: “Reflector pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98 ListAndWatch” (started: 2019-12-05 12:45:23.512518628 +0000 UTC 
m=+0.072359863) (total time: 30.004161325s): Trace[2139292546]: [30.004161325s] [30.004161325s] END E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E1205 12:45:53.516735 1 reflector.go:125] pkg/mod/k8s.io/client-go@v0.0.0-20190620085101-78d2af792bab/tools/cache/reflector.go:98: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout

==> dmesg <== [Dec 5 12:44] You have booted with nomodeset. This means your GPU drivers are DISABLED [ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly [ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it [ +0.638183] core: CPUID marked event: ‘cpu cycles’ unavailable [ +0.000000] core: CPUID marked event: ‘instructions’ unavailable [ +0.000001] core: CPUID marked event: ‘bus cycles’ unavailable [ +0.000000] core: CPUID marked event: ‘cache references’ unavailable [ +0.000001] core: CPUID marked event: ‘cache misses’ unavailable [ +0.000000] core: CPUID marked event: ‘branch instructions’ unavailable [ +0.000001] core: CPUID marked event: ‘branch misses’ unavailable [ +0.005076] #2 [ +0.001420] #3 [ +0.001859] #4 [ +0.001268] #5 [ +0.002171] #6 [ +0.002357] #7 [ +0.004135] #9 [ +0.001785] #10 [ +0.001869] #11 [ +0.012590] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override. [ +20.235244] sd 0:0:0:0: [sda] Assuming drive cache: write through [ +0.112281] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2 [ +5.629072] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument [ +0.015870] systemd-fstab-generator[2792]: Ignoring “noauto” for root device [ +0.002596] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling. [ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.) [ +0.829825] vboxguest: loading out-of-tree module taints kernel. [ +0.003089] vboxguest: PCI device not found, probably running on physical hardware. [ +0.285949] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack. [ +8.612075] systemd-fstab-generator[5018]: Ignoring “noauto” for root device [ +10.585496] systemd-fstab-generator[5641]: Ignoring “noauto” for root device [Dec 5 12:45] kauditd_printk_skb: 110 callbacks suppressed [ +6.682240] kauditd_printk_skb: 29 callbacks suppressed [ +15.792288] kauditd_printk_skb: 59 callbacks suppressed [ +7.277080] kauditd_printk_skb: 5 callbacks suppressed [Dec 5 12:46] NFSD: Unable to end grace period: -110 [Dec 5 12:47] kauditd_printk_skb: 2 callbacks suppressed

==> kernel <== 12:54:11 up 10 min, 0 users, load average: 1.53, 1.10, 0.55 Linux development 4.19.81 #1 SMP Tue Nov 26 10:19:39 PST 2019 x86_64 GNU/Linux PRETTY_NAME=“Buildroot 2019.02.7”

==> kube-addon-manager [“0f1f51e87243”] <== INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:43+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:44+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:47+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:49+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:52+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:53+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:53:57+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:53:59+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:54:02+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:54:03+00:00 == INFO: Leader election disabled. INFO: == Kubernetes addon ensure completed at 2019-12-05T12:54:07+00:00 == INFO: == Reconciling with deprecated label == error: no objects passed to apply INFO: == Reconciling with addon-manager label == daemonset.apps/registry-proxy unchanged replicationcontroller/registry unchanged service/registry unchanged serviceaccount/storage-provisioner unchanged INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:54:09+00:00 ==

==> kube-addon-manager [“bdb0b95bd326”] <== error: no objects passed to apply error when retrieving current configuration of: Resource: “apps/v1, Resource=daemonsets”, GroupVersionKind: “apps/v1, Kind=DaemonSet” Name: “registry-proxy”, Namespace: “kube-system” Object: &{map[“spec”:map[“selector”:map[“matchLabels”:map[“kubernetes.io/minikube-addons”:“registry” “registry-proxy”:“true”]] “template”:map[“metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addonsI:“registryN “FO:re=giReryopncilyn"g: wtrtuhe a]d]o “-smpec”:map[“containers”:[map[“name”:“registry-proxy” “ports”:[map[“containerPort”:‘P’ “hostPort”:‘\u1388’ “name”:“registry”]] “env”:[map[“name”:“REGISTRY_HOST” “value”:“registry.kube-system.svc.cluster.local”] map[“name”:“REGISTRY_PORT” “value”:“80”]] “image”:“gcr.io/google_containers/kube-registry-proxy:0.4” “imagePullPolicy”:“IfNotPresent”]]]]] “apiVersion”:“apps/v1” “kind”:“DaemonSet” “metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry-proxy” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:””]]]} from server for: “/etc/kubernetes/addons/registry-proxy.yaml”: Get https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/registry-proxy: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=replicationcontrollers”, GroupVersionKind: “/v1, Kind=ReplicationController” Name: “registry”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“ReplicationController” “metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:”“]] “spec”:map[“replicas”:‘\x01’ “selector”:map[“kubernetes.io/minikube-addons”:“registry”] “template”:map[“metadata”:map[“labels”:map[“actual-registry”:“true” “addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”]] “spec”:map[“containers”:[map[“name”:“registry” “ports”:[map[“containerPort”:‘\u1388’ “protocol”:“TCP”]] “env”:[map[“name”:“REGISTRY_STORAGE_DELETE_ENABLED” “value”:“true”]] “image”:“registry.hub.docker.com/library/registry:2.6.1” “imagePullPolicy”:“IfNotPresent”]]]]]]} from server for: “/etc/kubernetes/addons/registry-rc.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/replicationcontrollers/registry: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=services”, GroupVersionKind: “/v1, Kind=Service” Name: “registry”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“Service” “metadata”:map[“namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:”"] “labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry”] “spec”:map[“ports”:[map[“port”:‘P’ “targetPort”:‘\u1388’]] “selector”:map[“actual-registry”:“true” “kubernetes.io/minikube-addons”:“registry”] “type”:“ClusterIP”]]} from server for: “/etc/kubernetes/addons/registry-svc.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/services/registry: dial tcp 127.0.0.1:8443: connect: connection refused INFO: == Kubernetes addon reconcielrror mwplne red attri0e1v9i-n1g -c0urTr1n2t: 43nfig:u4r3at0o0n 0of:

ResourceI F"/v1, Resource=O: Leader election disabled. The connection to the server localhost:8s4e3v iwcesc crountse"d GrodpiVdr sionKsipnd:if/yv 1t hKinr=iegvhicte chcount" or port? Name: “storage-provisioner”, Namespace: “kube-system” Object: &{map[“metadata”:map[“namespace”:“kube-systeFm” “aOn o=a=t iKounse:rmapt"ekubectl.kubernetes.io/last-applied-configuration addon ensure co"m:p"l"et]e d” laatb e2l0s1"9:-m1a2p-[0"5aTd1d2o:n4m3a:n4a8g+e0r0.:k0u0b e=r=n etes.io/mode"I:N"FROe:c o=n=c iRleec"o]nc i"lnianmge “w:i"tsht odreapgree-cpartoevdi slabel == INFO: == Reconciling with addon-manager label == ioner”] “apiVersion”:“v1” “kind”:“ServiceAccount”]} from server for: “/etc/kubernetes/addons/storage-provisioner.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=pods”, GroupVersionKind: “/v1, Kind=Pod” Name: “storage-provisioner”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“Pod” “metadata”:map[“annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”] “labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “integration-test”:“storage-provisioner”] “name”:“storage-provisioner” “namespace”:“kube-system”] “spec”:map[“containers”:[map[“command”:[“/storage-provisioner”] “image”:“gcr.io/k8s-minikube/storage-provisioner:v1.8.1” “imagePullPolicy”:“IfNotPresent” “name”:“storage-provisioner” “volumeMounts”:[map[“mountPath”:“/tmp” “name”:“tmp”]]]] “hostNetwork”:%!q(bool=true) “serviceAccountName”:“storage-provisioner” “volumes”:[map[“hostPath”:map[“path”:“/tmp” “type”:“Directory”] “name”:“tmp”]]]]} from server for: “/etc/kubernetes/addons/storage-provisioner.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused error: no objects passed to apply error when retrieving current configuration of: Resource: “apps/v1, Resource=daemonsets”, GroupVersionKind: “apps/v1, Kind=DaemonSet” Name: “registry-proxy”, Namespace: “kube-system” Object: &{map[“kind”:“DaemonSet” “metadata”:map[“annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”] “labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry-proxy” “namespace”:“kube-system”] “spec”:map[“selector”:map[“matchLabels”:map[“kubernetes.io/minikube-addons”:“registry” “registry-proxy”:“true”]] “template”:map[“metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry” “registry-proxy”:“true”]] “spec”:map[“containers”:[map[“env”:[map[“value”:“registry.kube-system.svc.cluster.local” “name”:“REGISTRY_HOST”] map[“name”:“REGISTRY_PORT” “value”:“80”]] “image”:“gcr.io/google_containers/kube-registry-proxy:0.4” “imagePullPolicy”:“IfNotPresent” “name”:“registry-proxy” “ports”:[map[“hostPort”:‘\u1388’ “name”:“registry” “containerPort”:‘P’]]]]]]] “apiVersion”:“apps/v1”]} from server for: “/etc/kubernetes/addons/registry-proxy.yaml”: Get https://localhost:8443/apis/apps/v1/namespaces/kube-system/daemonsets/registry-proxy: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=replicationcontrollers”, GroupVersionKind: “/v1, Kind=ReplicationController” Name: “registry”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“ReplicationController” 
“metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”]] “spec”:map[“replicas”:‘\x01’ “selector”:map[“kubernetes.io/minikube-addons”:“registry”] “template”:map[“spec”:map[“containers”:[map[“env”:[map[“value”:“true” “name”:“REGISTRY_STORAGE_DELETE_ENABLED”]] “image”:“registry.hub.docker.com/library/registry:2.6.1” “imagePullPolicy”:“IfNotPresent” “name”:“registry” “ports”:[map[“containerPort”:‘\u1388’ “protocol”:“TCP”]]]]] “metadata”:map[“labels”:map[“actual-registry”:“true” “addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”]]]]]} from server for: “/etc/kubernetes/addons/registry-rc.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/replicationcontrollers/registry: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=services”, GroupVersionKind: “/v1, Kind=Service” Name: “registry”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“Service” “metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “kubernetes.io/minikube-addons”:“registry”] “name”:“registry” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”]] “spec”:map[“ports”:[map[“targetPort”:‘\u1388’ “port”:‘P’]] “selector”:map[“actual-registry”:“true” “kubernetes.io/minikube-addons”:“registry”] “type”:“ClusterIP”]]} INFO: == Kubernetes addon reconcile completed at 2019-12-05T12:43:48+00:00 == from server for: “/etc/kubernetes/addons/registry-svc.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/services/registry: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=serviceaccounts”, GroupVersionKind: “/v1, Kind=ServiceAccount” Name: “storage-provisioner”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“ServiceAccount” “metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile”] “name”:“storage-provisioner” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”]]]} from server for: “/etc/kubernetes/addons/storage-provisioner.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused error when retrieving current configuration of: Resource: “/v1, Resource=pods”, GroupVersionKind: “/v1, Kind=Pod” Name: “storage-provisioner”, Namespace: “kube-system” Object: &{map[“apiVersion”:“v1” “kind”:“Pod” “metadata”:map[“labels”:map[“addonmanager.kubernetes.io/mode”:“Reconcile” “integration-test”:“storage-provisioner”] “name”:“storage-provisioner” “namespace”:“kube-system” “annotations”:map[“kubectl.kubernetes.io/last-applied-configuration”:“”]] “spec”:map[“containers”:[map[“command”:[“/storage-provisioner”] “image”:“gcr.io/k8s-minikube/storage-provisioner:v1.8.1” “imagePullPolicy”:“IfNotPresent” “name”:“storage-provisioner” “volumeMounts”:[map[“mountPath”:“/tmp” “name”:“tmp”]]]] “hostNetwork”:%!q(bool=true) “serviceAccountName”:“storage-provisioner” “volumes”:[map[“hostPath”:map[“path”:“/tmp” “type”:“Directory”] “name”:“tmp”]]]]} from server for: “/etc/kubernetes/addons/storage-provisioner.yaml”: Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection 
refused

==> kube-apiserver [“755c9793968a”] <== I1205 12:43:40.083569 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick I1205 12:43:40.083657 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.083660 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.083687 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.083908 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.084018 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.084406 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.084438 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.084440 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.084804 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.084837 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick I1205 12:43:40.084884 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.084918 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.084920 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.085103 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.085158 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.085278 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. 
Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.085448 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.085498 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.085636 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.085707 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.085796 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.085821 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.085845 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.086082 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.086613 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick I1205 12:43:40.086652 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.086663 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.086668 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.086706 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.086982 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.086990 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.087281 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.087338 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. 
Reconnecting… I1205 12:43:40.087538 1 clientconn.go:825] blockingPicker: the picked transport is not ready, loop back to repick W1205 12:43:40.087593 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.087668 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… W1205 12:43:40.087904 1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0 <nil>}. Err :connection error: desc = “transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused”. Reconnecting… I1205 12:43:40.088282 1 nonstructuralschema_controller.go:203] Shutting down NonStructuralSchemaConditionController I1205 12:43:40.088288 1 controller.go:180] Shutting down kubernetes service endpoint reconciler I1205 12:43:40.088307 1 customresource_discovery_controller.go:219] Shutting down DiscoveryController I1205 12:43:40.088315 1 apiapproval_controller.go:197] Shutting down KubernetesAPIApprovalPolicyConformantConditionController I1205 12:43:40.088323 1 establishing_controller.go:84] Shutting down EstablishingController I1205 12:43:40.088338 1 available_controller.go:398] Shutting down AvailableConditionController I1205 12:43:40.088344 1 crd_finalizer.go:275] Shutting down CRDFinalizer I1205 12:43:40.088364 1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller I1205 12:43:40.088285 1 naming_controller.go:299] Shutting down NamingConditionController I1205 12:43:40.088379 1 apiservice_controller.go:106] Shutting down APIServiceRegistrationController I1205 12:43:40.088391 1 crdregistration_controller.go:142] Shutting down crd-autoregister controller I1205 12:43:40.088338 1 autoregister_controller.go:164] Shutting down autoregister controller I1205 12:43:40.088310 1 controller.go:122] Shutting down OpenAPI controller I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I1205 12:43:40.088489 1 tlsconfig.go:234] Shutting down DynamicServingCertificateController I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1205 12:43:40.088491 1 dynamic_serving_content.go:144] Shutting down serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I1205 12:43:40.088482 1 dynamic_cafile_content.go:181] Shutting down request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1205 12:43:40.088498 1 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/var/lib/minikube/certs/ca.crt I1205 12:43:40.089606 1 controller.go:87] Shutting down OpenAPI AggregationController I1205 12:43:40.090362 1 secure_serving.go:222] Stopped listening on [::]:8443 E1205 12:43:40.091344 1 controller.go:183] rpc error: code = Unknown desc = OK: HTTP status code 200; transport: missing content-type field

==> kube-apiserver [“8ae6092946df”] <== I1205 12:45:18.675730 1 client.go:361] parsed scheme: “endpoint” I1205 12:45:18.675765 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}] I1205 12:45:18.681865 1 client.go:361] parsed scheme: “endpoint” I1205 12:45:18.681899 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}] W1205 12:45:18.886364 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources. W1205 12:45:18.893017 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources. W1205 12:45:18.900180 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources. W1205 12:45:18.912908 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W1205 12:45:18.915375 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources. W1205 12:45:18.929095 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W1205 12:45:18.983377 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources. W1205 12:45:18.983425 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources. I1205 12:45:19.020631 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass. I1205 12:45:19.020667 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota. 
I1205 12:45:19.023975 1 client.go:361] parsed scheme: “endpoint” I1205 12:45:19.024044 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}] I1205 12:45:19.031816 1 client.go:361] parsed scheme: “endpoint” I1205 12:45:19.031855 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 <nil>}] I1205 12:45:21.311132 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1205 12:45:21.311134 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I1205 12:45:21.311424 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key I1205 12:45:21.311717 1 secure_serving.go:178] Serving securely on [::]:8443 I1205 12:45:21.312316 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController I1205 12:45:21.312327 1 establishing_controller.go:73] Starting EstablishingController I1205 12:45:21.312346 1 naming_controller.go:288] Starting NamingConditionController I1205 12:45:21.312383 1 crdregistration_controller.go:111] Starting crd-autoregister controller I1205 12:45:21.312391 1 controller.go:85] Starting OpenAPI controller I1205 12:45:21.312436 1 customresource_discovery_controller.go:208] Starting DiscoveryController I1205 12:45:21.312450 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister I1205 12:45:21.312553 1 controller.go:81] Starting OpenAPI AggregationController I1205 12:45:21.313211 1 tlsconfig.go:219] Starting DynamicServingCertificateController I1205 12:45:21.313260 1 crd_finalizer.go:263] Starting CRDFinalizer I1205 12:45:21.313306 1 autoregister_controller.go:140] Starting autoregister controller I1205 12:45:21.313321 1 cache.go:32] Waiting for caches to sync for autoregister controller I1205 12:45:21.313354 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController I1205 12:45:21.315118 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller I1205 12:45:21.315133 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller I1205 12:45:21.315164 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt I1205 12:45:21.315184 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt I1205 12:45:21.330595 1 apiservice_controller.go:94] Starting APIServiceRegistrationController I1205 12:45:21.331548 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I1205 12:45:21.332714 1 available_controller.go:386] Starting AvailableConditionController I1205 12:45:21.332727 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller E1205 12:45:21.336849 1 controller.go:151] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service I1205 12:45:21.340393 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io I1205 12:45:21.412678 1 shared_informer.go:204] Caches are synced for crd-autoregister I1205 12:45:21.413528 1 cache.go:39] Caches are synced for autoregister controller I1205 12:45:21.415291 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller I1205 12:45:21.432119 1 cache.go:39] Caches are 
synced for APIServiceRegistrationController controller I1205 12:45:21.432926 1 cache.go:39] Caches are synced for AvailableConditionController controller I1205 12:45:22.311087 1 controller.go:107] OpenAPI AggregationController: Processing item I1205 12:45:22.311118 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue). I1205 12:45:22.311128 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue). I1205 12:45:22.318025 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist. I1205 12:45:23.928936 1 controller.go:606] quota admission added evaluator for: deployments.apps I1205 12:45:23.945314 1 controller.go:606] quota admission added evaluator for: serviceaccounts I1205 12:45:24.006930 1 controller.go:606] quota admission added evaluator for: daemonsets.apps I1205 12:45:24.019860 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io I1205 12:45:24.026034 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I1205 12:45:37.236863 1 controller.go:606] quota admission added evaluator for: endpoints

==> kube-controller-manager [“21128a5c5156”] <== I1204 16:41:13.589387 1 event.go:281] Event(v1.ObjectReference{Kind:“DaemonSet”, Namespace:“kube-system”, Name:“registry-proxy”, UID:“0a8f990e-598f-4c14-9af3-d9b8bdf0f365”, APIVersion:“apps/v1”, ResourceVersion:“343”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulCreate’ Created pod: registry-proxy-jvwpv I1204 16:41:13.605270 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicationController”, Namespace:“kube-system”, Name:“registry”, UID:“d0726384-f4fc-487e-a006-195e0f570b0e”, APIVersion:“v1”, ResourceVersion:“350”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulCreate’ Created pod: registry-74q4h I1205 11:47:46.892446 1 event.go:281] Event(v1.ObjectReference{Kind:“Deployment”, Namespace:“kube-system”, Name:“coredns”, UID:“05de388e-30e2-4c4a-bdc6-ebe812ba21f0”, APIVersion:“apps/v1”, ResourceVersion:“72233”, FieldPath:“”}): type: ‘Normal’ reason: ‘ScalingReplicaSet’ Scaled up replica set coredns-7f85fdfc6b to 1 I1205 11:47:46.896172 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-7f85fdfc6b”, UID:“8e3c5ef5-5c4b-472c-b6a8-3218b9609425”, APIVersion:“apps/v1”, ResourceVersion:“72234”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulCreate’ Created pod: coredns-7f85fdfc6b-mv5gf I1205 11:47:46.900156 1 event.go:281] Event(v1.ObjectReference{Kind:“Deployment”, Namespace:“kube-system”, Name:“coredns”, UID:“05de388e-30e2-4c4a-bdc6-ebe812ba21f0”, APIVersion:“apps/v1”, ResourceVersion:“72233”, FieldPath:“”}): type: ‘Normal’ reason: ‘ScalingReplicaSet’ Scaled down replica set coredns-6955765f44 to 1 I1205 11:47:46.905346 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-6955765f44”, UID:“a717e31b-5a76-4003-b4dc-6028d7a5505b”, APIVersion:“apps/v1”, ResourceVersion:“72240”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulDelete’ Deleted pod: coredns-6955765f44-xcgcm I1205 11:47:46.912757 1 event.go:281] Event(v1.ObjectReference{Kind:“Deployment”, Namespace:“kube-system”, Name:“coredns”, UID:“05de388e-30e2-4c4a-bdc6-ebe812ba21f0”, APIVersion:“apps/v1”, ResourceVersion:“72235”, FieldPath:“”}): type: ‘Normal’ reason: ‘ScalingReplicaSet’ Scaled up replica set coredns-7f85fdfc6b to 2 I1205 11:47:46.916226 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-7f85fdfc6b”, UID:“8e3c5ef5-5c4b-472c-b6a8-3218b9609425”, APIVersion:“apps/v1”, ResourceVersion:“72251”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulCreate’ Created pod: coredns-7f85fdfc6b-pltmx I1205 11:47:46.947335 1 event.go:281] Event(v1.ObjectReference{Kind:“Deployment”, Namespace:“kube-system”, Name:“coredns”, UID:“05de388e-30e2-4c4a-bdc6-ebe812ba21f0”, APIVersion:“apps/v1”, ResourceVersion:“72262”, FieldPath:“”}): type: ‘Normal’ reason: ‘ScalingReplicaSet’ Scaled down replica set coredns-7f85fdfc6b to 0 I1205 11:47:46.951932 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-7f85fdfc6b”, UID:“8e3c5ef5-5c4b-472c-b6a8-3218b9609425”, APIVersion:“apps/v1”, ResourceVersion:“72265”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulDelete’ Deleted pod: coredns-7f85fdfc6b-mv5gf I1205 11:47:46.954629 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-7f85fdfc6b”, UID:“8e3c5ef5-5c4b-472c-b6a8-3218b9609425”, APIVersion:“apps/v1”, ResourceVersion:“72265”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulDelete’ Deleted pod: coredns-7f85fdfc6b-pltmx I1205 
11:47:46.954738 1 event.go:281] Event(v1.ObjectReference{Kind:“Deployment”, Namespace:“kube-system”, Name:“coredns”, UID:“05de388e-30e2-4c4a-bdc6-ebe812ba21f0”, APIVersion:“apps/v1”, ResourceVersion:“72264”, FieldPath:“”}): type: ‘Normal’ reason: ‘ScalingReplicaSet’ Scaled up replica set coredns-6955765f44 to 2 I1205 11:47:46.959104 1 event.go:281] Event(v1.ObjectReference{Kind:“ReplicaSet”, Namespace:“kube-system”, Name:“coredns-6955765f44”, UID:“a717e31b-5a76-4003-b4dc-6028d7a5505b”, APIVersion:“apps/v1”, ResourceVersion:“72270”, FieldPath:“”}): type: ‘Normal’ reason: ‘SuccessfulCreate’ Created pod: coredns-6955765f44-d4g9g I1205 11:48:05.770097 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode. I1205 11:48:15.772601 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode. E1205 12:43:40.089224 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Deployment: Get https://localhost:8443/apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=72382&timeout=7m36s&timeoutSeconds=456&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ControllerRevision: Get https://localhost:8443/apis/apps/v1/controllerrevisions?allowWatchBookmarks=true&resourceVersion=344&timeout=8m51s&timeoutSeconds=531&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodSecurityPolicy: Get https://localhost:8443/apis/policy/v1beta1/podsecuritypolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=5m53s&timeoutSeconds=353&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090285 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/extensions/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Event: Get https://localhost:8443/apis/events.k8s.io/v1beta1/events?allowWatchBookmarks=true&resourceVersion=78580&timeout=8m12s&timeoutSeconds=492&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090294 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.Ingress: Get https://localhost:8443/apis/networking.k8s.io/v1beta1/ingresses?allowWatchBookmarks=true&resourceVersion=1&timeout=9m58s&timeoutSeconds=598&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090288 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?allowWatchBookmarks=true&resourceVersion=1&timeout=8m46s&timeoutSeconds=526&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ServiceAccount: Get https://localhost:8443/api/v1/serviceaccounts?allowWatchBookmarks=true&resourceVersion=362&timeout=5m9s&timeoutSeconds=309&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090230 1 reflector.go:320] 
k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.LimitRange: Get https://localhost:8443/api/v1/limitranges?allowWatchBookmarks=true&resourceVersion=1&timeout=6m55s&timeoutSeconds=415&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090244 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.HorizontalPodAutoscaler: Get https://localhost:8443/apis/autoscaling/v1/horizontalpodautoscalers?allowWatchBookmarks=true&resourceVersion=1&timeout=6m41s&timeoutSeconds=401&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090295 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CronJob: Get https://localhost:8443/apis/batch/v1beta1/cronjobs?allowWatchBookmarks=true&resourceVersion=1&timeout=5m6s&timeoutSeconds=306&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090338 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.EndpointSlice: Get https://localhost:8443/apis/discovery.k8s.io/v1beta1/endpointslices?allowWatchBookmarks=true&resourceVersion=1&timeout=9m2s&timeoutSeconds=542&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090337 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=79681&timeout=6m12s&timeoutSeconds=372&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090338 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=467&timeout=5m26s&timeoutSeconds=326&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090343 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=332&timeout=9m6s&timeoutSeconds=546&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090392 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m33s&timeoutSeconds=513&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090406 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PriorityClass: Get https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses?allowWatchBookmarks=true&resourceVersion=45&timeout=8m13s&timeoutSeconds=493&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090603 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m18s&timeoutSeconds=558&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090646 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.NetworkPolicy: Get https://localhost:8443/apis/networking.k8s.io/v1/networkpolicies?allowWatchBookmarks=true&resourceVersion=1&timeout=8m44s&timeoutSeconds=524&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090683 1 
reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/configmaps?allowWatchBookmarks=true&resourceVersion=72232&timeout=7m3s&timeoutSeconds=423&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090724 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=5&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090741 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&resourceVersion=72390&timeout=8m7s&timeoutSeconds=487&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090759 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.VolumeAttachment: Get https://localhost:8443/apis/storage.k8s.io/v1/volumeattachments?allowWatchBookmarks=true&resourceVersion=1&timeout=9m38s&timeoutSeconds=578&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090767 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=5m18s&timeoutSeconds=318&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090786 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Job: Get https://localhost:8443/apis/batch/v1/jobs?allowWatchBookmarks=true&resourceVersion=1&timeout=9m29s&timeoutSeconds=569&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090792 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Namespace: Get https://localhost:8443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=148&timeout=5m59s&timeoutSeconds=359&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090810 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.DaemonSet: Get https://localhost:8443/apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=72378&timeout=7m56s&timeoutSeconds=476&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090822 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRole: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles?allowWatchBookmarks=true&resourceVersion=329&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090839 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ClusterRoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?allowWatchBookmarks=true&resourceVersion=331&timeout=8m56s&timeoutSeconds=536&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090845 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Role: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/roles?allowWatchBookmarks=true&resourceVersion=188&timeout=9m5s&timeoutSeconds=545&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090847 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch 
*v1.RoleBinding: Get https://localhost:8443/apis/rbac.authorization.k8s.io/v1/rolebindings?allowWatchBookmarks=true&resourceVersion=189&timeout=5m36s&timeoutSeconds=336&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090860 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m41s&timeoutSeconds=581&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090883 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ValidatingWebhookConfiguration: Get https://localhost:8443/apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=6m56s&timeoutSeconds=416&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090921 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiextensions.k8s.io/v1/customresourcedefinitions?allowWatchBookmarks=true&resourceVersion=1&timeout=6m35s&timeoutSeconds=395&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090953 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=356&timeout=5m31s&timeoutSeconds=331&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090966 1 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: Get https://localhost:8443/apis/apiregistration.k8s.io/v1/apiservices?allowWatchBookmarks=true&resourceVersion=42&timeout=9m30s&timeoutSeconds=570&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090990 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CertificateSigningRequest: Get https://localhost:8443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests?allowWatchBookmarks=true&resourceVersion=1&timeout=8m52s&timeoutSeconds=532&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091001 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=79415&timeout=9m9s&timeoutSeconds=549&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091005 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PodTemplate: Get https://localhost:8443/api/v1/podtemplates?allowWatchBookmarks=true&resourceVersion=1&timeout=8m8s&timeoutSeconds=488&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091022 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=72381&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091022 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.MutatingWebhookConfiguration: Get 
https://localhost:8443/apis/admissionregistration.k8s.io/v1/mutatingwebhookconfigurations?allowWatchBookmarks=true&resourceVersion=1&timeout=9m35s&timeoutSeconds=575&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091024 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ResourceQuota: Get https://localhost:8443/api/v1/resourcequotas?allowWatchBookmarks=true&resourceVersion=1&timeout=8m21s&timeoutSeconds=501&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091041 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?allowWatchBookmarks=true&resourceVersion=1&timeout=7m43s&timeoutSeconds=463&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091298 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Lease: Get https://localhost:8443/apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=79683&timeout=6m31s&timeoutSeconds=391&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091346 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Secret: Get https://localhost:8443/api/v1/secrets?allowWatchBookmarks=true&resourceVersion=361&timeout=5m37s&timeoutSeconds=337&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-controller-manager [“2635c7184c31”] <== I1205 12:45:40.358263 1 shared_informer.go:197] Waiting for caches to sync for PV protection I1205 12:45:40.508344 1 controllermanager.go:533] Started “endpoint” I1205 12:45:40.508401 1 endpoints_controller.go:181] Starting endpoint controller I1205 12:45:40.508407 1 shared_informer.go:197] Waiting for caches to sync for endpoint I1205 12:45:40.658281 1 controllermanager.go:533] Started “tokencleaner” I1205 12:45:40.658366 1 tokencleaner.go:117] Starting token cleaner controller I1205 12:45:40.658376 1 shared_informer.go:197] Waiting for caches to sync for token_cleaner I1205 12:45:40.658382 1 shared_informer.go:204] Caches are synced for token_cleaner I1205 12:45:40.810934 1 controllermanager.go:533] Started “clusterrole-aggregation” I1205 12:45:40.811037 1 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator I1205 12:45:40.811046 1 shared_informer.go:197] Waiting for caches to sync for ClusterRoleAggregator I1205 12:45:40.959686 1 controllermanager.go:533] Started “podgc” I1205 12:45:40.960005 1 gc_controller.go:88] Starting GC controller I1205 12:45:40.960027 1 shared_informer.go:197] Waiting for caches to sync for GC I1205 12:45:41.108521 1 controllermanager.go:533] Started “csrapproving” W1205 12:45:41.108565 1 controllermanager.go:525] Skipping “nodeipam” I1205 12:45:41.108578 1 core.go:242] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true. W1205 12:45:41.108582 1 controllermanager.go:525] Skipping “route” I1205 12:45:41.108546 1 certificate_controller.go:118] Starting certificate controller “csrapproving” I1205 12:45:41.108613 1 shared_informer.go:197] Waiting for caches to sync for certificate-csrapproving I1205 12:45:41.259198 1 controllermanager.go:533] Started “attachdetach” I1205 12:45:41.259384 1 shared_informer.go:197] Waiting for caches to sync for resource quota I1205 12:45:41.259602 1 attach_detach_controller.go:342] Starting attach detach controller I1205 12:45:41.259608 1 shared_informer.go:197] Waiting for caches to sync for attach detach W1205 12:45:41.267497 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=“minikube” does not exist I1205 12:45:41.308958 1 shared_informer.go:204] Caches are synced for HPA I1205 12:45:41.310055 1 shared_informer.go:204] Caches are synced for PVC protection I1205 12:45:41.311207 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator I1205 12:45:41.311266 1 shared_informer.go:204] Caches are synced for taint I1205 12:45:41.311439 1 taint_manager.go:186] Starting NoExecuteTaintManager I1205 12:45:41.311543 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: W1205 12:45:41.311776 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp. I1205 12:45:41.311833 1 event.go:281] Event(v1.ObjectReference{Kind:“Node”, Namespace:“”, Name:“minikube”, UID:“fea930d1-a27f-45ed-93f6-fc868f9a6484”, APIVersion:“”, ResourceVersion:“”, FieldPath:“”}): type: ‘Normal’ reason: ‘RegisteredNode’ Node minikube event: Registered Node minikube in Controller I1205 12:45:41.311932 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal. 
I1205 12:45:41.329820 1 shared_informer.go:204] Caches are synced for expand I1205 12:45:41.333697 1 shared_informer.go:204] Caches are synced for namespace I1205 12:45:41.336757 1 shared_informer.go:204] Caches are synced for job I1205 12:45:41.339570 1 shared_informer.go:204] Caches are synced for ReplicaSet I1205 12:45:41.347754 1 shared_informer.go:204] Caches are synced for TTL I1205 12:45:41.348208 1 shared_informer.go:204] Caches are synced for service account I1205 12:45:41.356994 1 shared_informer.go:204] Caches are synced for daemon sets I1205 12:45:41.357005 1 shared_informer.go:204] Caches are synced for ReplicationController I1205 12:45:41.358500 1 shared_informer.go:204] Caches are synced for PV protection I1205 12:45:41.358770 1 shared_informer.go:204] Caches are synced for persistent volume I1205 12:45:41.359040 1 shared_informer.go:204] Caches are synced for bootstrap_signer I1205 12:45:41.359835 1 shared_informer.go:204] Caches are synced for attach detach I1205 12:45:41.360246 1 shared_informer.go:204] Caches are synced for GC I1205 12:45:41.363838 1 shared_informer.go:204] Caches are synced for deployment I1205 12:45:41.408593 1 shared_informer.go:204] Caches are synced for endpoint I1205 12:45:41.459086 1 shared_informer.go:197] Waiting for caches to sync for garbage collector I1205 12:45:41.473585 1 shared_informer.go:204] Caches are synced for certificate-csrsigning I1205 12:45:41.508830 1 shared_informer.go:204] Caches are synced for certificate-csrapproving I1205 12:45:41.523252 1 shared_informer.go:204] Caches are synced for stateful set I1205 12:45:41.708624 1 shared_informer.go:204] Caches are synced for disruption I1205 12:45:41.708655 1 disruption.go:338] Sending events to api server. I1205 12:45:41.718886 1 shared_informer.go:204] Caches are synced for resource quota I1205 12:45:41.759707 1 shared_informer.go:204] Caches are synced for resource quota I1205 12:45:41.859424 1 shared_informer.go:204] Caches are synced for garbage collector I1205 12:45:41.863887 1 shared_informer.go:204] Caches are synced for garbage collector I1205 12:45:41.863920 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage

==> kube-proxy [“c4dff44ab519”] <== W1204 16:41:26.010957 1 server_others.go:323] Unknown proxy mode “”, assuming iptables proxy I1204 16:41:26.019684 1 node.go:135] Successfully retrieved node IP: 192.168.46.142 I1204 16:41:26.019728 1 server_others.go:145] Using iptables Proxier. W1204 16:41:26.019810 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic I1204 16:41:26.020144 1 server.go:571] Version: v1.17.0-rc.1 I1204 16:41:26.020676 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_max’ to 393216 I1204 16:41:26.020750 1 conntrack.go:52] Setting nf_conntrack_max to 393216 I1204 16:41:26.021665 1 conntrack.go:83] Setting conntrack hashsize to 98304 I1204 16:41:26.036319 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_established’ to 86400 I1204 16:41:26.036463 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_close_wait’ to 3600 I1204 16:41:26.036661 1 config.go:131] Starting endpoints config controller I1204 16:41:26.036699 1 shared_informer.go:197] Waiting for caches to sync for endpoints config I1204 16:41:26.036727 1 config.go:313] Starting service config controller I1204 16:41:26.036739 1 shared_informer.go:197] Waiting for caches to sync for service config I1204 16:41:26.136868 1 shared_informer.go:204] Caches are synced for service config I1204 16:41:26.137453 1 shared_informer.go:204] Caches are synced for endpoints config E1205 12:43:40.092229 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=!service.kubernetes.io%2Fheadless%2C!service.kubernetes.io%2Fservice-proxy-name&resourceVersion=356&timeout=6m13s&timeoutSeconds=373&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.092776 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=!service.kubernetes.io%2Fheadless%2C!service.kubernetes.io%2Fservice-proxy-name&resourceVersion=79681&timeout=6m49s&timeoutSeconds=409&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-proxy [“edbb424ad0fc”] <== W1205 12:45:23.495549 1 server_others.go:323] Unknown proxy mode “”, assuming iptables proxy I1205 12:45:23.518591 1 node.go:135] Successfully retrieved node IP: 192.168.46.142 I1205 12:45:23.518634 1 server_others.go:145] Using iptables Proxier. W1205 12:45:23.518768 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic I1205 12:45:23.518988 1 server.go:571] Version: v1.17.0-rc.1 I1205 12:45:23.520778 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_max’ to 393216 I1205 12:45:23.520816 1 conntrack.go:52] Setting nf_conntrack_max to 393216 I1205 12:45:23.521137 1 conntrack.go:83] Setting conntrack hashsize to 98304 I1205 12:45:23.529958 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_established’ to 86400 I1205 12:45:23.530008 1 conntrack.go:100] Set sysctl ‘net/netfilter/nf_conntrack_tcp_timeout_close_wait’ to 3600 I1205 12:45:23.530952 1 config.go:313] Starting service config controller I1205 12:45:23.531026 1 shared_informer.go:197] Waiting for caches to sync for service config I1205 12:45:23.531126 1 config.go:131] Starting endpoints config controller I1205 12:45:23.531156 1 shared_informer.go:197] Waiting for caches to sync for endpoints config I1205 12:45:23.631521 1 shared_informer.go:204] Caches are synced for endpoints config I1205 12:45:23.632298 1 shared_informer.go:204] Caches are synced for service config

==> kube-scheduler [“52476de3d24e”] <== I1204 16:40:58.172653 1 serving.go:312] Generated self-signed cert in-memory W1204 16:40:58.753688 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: “client-ca::kube-system::extension-apiserver-authentication::client-ca-file” due to: configmap “extension-apiserver-authentication” not found W1204 16:40:58.753878 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: “client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file” due to: configmap “extension-apiserver-authentication” not found W1204 16:41:01.787130 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by ‘kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA’ W1204 16:41:01.787159 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps “extension-apiserver-authentication” is forbidden: User “system:kube-scheduler” cannot get resource “configmaps” in API group “” in the namespace “kube-system” W1204 16:41:01.787167 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous. W1204 16:41:01.787172 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false W1204 16:41:01.802768 1 authorization.go:47] Authorization is disabled W1204 16:41:01.802794 1 authentication.go:92] Authentication is disabled I1204 16:41:01.802807 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I1204 16:41:01.804589 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1204 16:41:01.804615 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1204 16:41:01.804936 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I1204 16:41:01.805012 1 tlsconfig.go:219] Starting DynamicServingCertificateController E1204 16:41:01.810624 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps “extension-apiserver-authentication” is forbidden: User “system:kube-scheduler” cannot list resource “configmaps” in API group “” in the namespace “kube-system” E1204 16:41:01.820668 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list resource “services” in API group “” at the cluster scope E1204 16:41:01.823634 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list resource “statefulsets” in API group “apps” at the cluster scope E1204 16:41:01.823687 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list resource “persistentvolumes” in API group “” at the cluster scope E1204 16:41:01.823748 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list resource “persistentvolumeclaims” in API group “” at the cluster scope E1204 16:41:01.823776 1 
reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User “system:kube-scheduler” cannot list resource “replicationcontrollers” in API group “” at the cluster scope E1204 16:41:01.823800 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list resource “poddisruptionbudgets” in API group “policy” at the cluster scope E1204 16:41:01.823881 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list resource “storageclasses” in API group “storage.k8s.io” at the cluster scope E1204 16:41:01.823858 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User “system:kube-scheduler” cannot list resource “replicasets” in API group “apps” at the cluster scope E1204 16:41:01.824924 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list resource “nodes” in API group “” at the cluster scope E1204 16:41:01.825696 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list resource “pods” in API group “” at the cluster scope E1204 16:41:01.827113 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list resource “csinodes” in API group “storage.k8s.io” at the cluster scope E1204 16:41:02.811995 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps “extension-apiserver-authentication” is forbidden: User “system:kube-scheduler” cannot list resource “configmaps” in API group “” in the namespace “kube-system” E1204 16:41:02.821900 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User “system:kube-scheduler” cannot list resource “services” in API group “” at the cluster scope E1204 16:41:02.825134 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User “system:kube-scheduler” cannot list resource “statefulsets” in API group “apps” at the cluster scope E1204 16:41:02.826814 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User “system:kube-scheduler” cannot list resource “persistentvolumes” in API group “” at the cluster scope E1204 16:41:02.828002 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User “system:kube-scheduler” cannot list resource “pods” in API group “” at the cluster scope E1204 16:41:02.828922 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User “system:kube-scheduler” cannot list resource “persistentvolumeclaims” in API group “” at the cluster scope E1204 16:41:02.830590 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User 
“system:kube-scheduler” cannot list resource “replicationcontrollers” in API group “” at the cluster scope E1204 16:41:02.831457 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User “system:kube-scheduler” cannot list resource “poddisruptionbudgets” in API group “policy” at the cluster scope E1204 16:41:02.832560 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list resource “storageclasses” in API group “storage.k8s.io” at the cluster scope E1204 16:41:02.834096 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User “system:kube-scheduler” cannot list resource “replicasets” in API group “apps” at the cluster scope E1204 16:41:02.835094 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User “system:kube-scheduler” cannot list resource “nodes” in API group “” at the cluster scope E1204 16:41:02.836442 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User “system:kube-scheduler” cannot list resource “csinodes” in API group “storage.k8s.io” at the cluster scope I1204 16:41:03.905376 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1204 16:41:03.906013 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler… I1204 16:41:03.914190 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler E1204 16:41:12.708677 1 factory.go:494] pod is already present in the activeQ E1205 12:43:40.090087 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolumeClaim: Get https://localhost:8443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=8m29s&timeoutSeconds=509&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090226 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicationController: Get https://localhost:8443/api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=467&timeout=7m59s&timeoutSeconds=479&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090262 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StatefulSet: Get https://localhost:8443/apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=1&timeout=6m14s&timeoutSeconds=374&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090339 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.PersistentVolume: Get https://localhost:8443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=8m16s&timeoutSeconds=496&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090418 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?allowWatchBookmarks=true&resourceVersion=356&timeout=7m20s&timeoutSeconds=440&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090452 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.CSINode: Get 
https://localhost:8443/apis/storage.k8s.io/v1/csinodes?allowWatchBookmarks=true&resourceVersion=5&timeout=8m42s&timeoutSeconds=522&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090508 1 reflector.go:320] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to watch *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&fieldSelector=metadata.name%3Dextension-apiserver-authentication&resourceVersion=72232&timeout=5m35s&timeoutSeconds=335&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090662 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.StorageClass: Get https://localhost:8443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=332&timeout=8m25s&timeoutSeconds=505&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090743 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Node: Get https://localhost:8443/api/v1/nodes?allowWatchBookmarks=true&resourceVersion=79415&timeout=5m13s&timeoutSeconds=313&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090804 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.ReplicaSet: Get https://localhost:8443/apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=72381&timeout=9m32s&timeoutSeconds=572&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.090854 1 reflector.go:320] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase!%3DFailed%2Cstatus.phase!%3DSucceeded&resourceVersion=72390&timeoutSeconds=409&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused E1205 12:43:40.091706 1 reflector.go:320] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1beta1.PodDisruptionBudget: Get https://localhost:8443/apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=1&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler [“c36f5378a1b8”] <== I1205 12:45:17.169311 1 serving.go:312] Generated self-signed cert in-memory W1205 12:45:17.555887 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: “client-ca::kube-system::extension-apiserver-authentication::client-ca-file” due to: configmap “extension-apiserver-authentication” not found W1205 12:45:17.555976 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: “client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file” due to: configmap “extension-apiserver-authentication” not found W1205 12:45:21.334464 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by ‘kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA’ W1205 12:45:21.334500 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps “extension-apiserver-authentication” is forbidden: User “system:kube-scheduler” cannot get resource “configmaps” in API group “” in the namespace “kube-system” W1205 12:45:21.334509 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous. W1205 12:45:21.334514 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false W1205 12:45:21.359516 1 authorization.go:47] Authorization is disabled W1205 12:45:21.359627 1 authentication.go:92] Authentication is disabled I1205 12:45:21.359666 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251 I1205 12:45:21.363880 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1205 12:45:21.363908 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1205 12:45:21.364235 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259 I1205 12:45:21.365841 1 tlsconfig.go:219] Starting DynamicServingCertificateController I1205 12:45:21.464209 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I1205 12:45:21.467017 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler… I1205 12:45:39.832092 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <== – Logs begin at Thu 2019-12-05 12:44:31 UTC, end at Thu 2019-12-05 12:54:12 UTC. – Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.418599 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.519116 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.619343 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.719606 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.820906 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:18 development kubelet[5769]: E1205 12:45:18.921340 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.021706 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.122402 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.223414 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.324104 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.424979 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.525522 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.626449 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.727560 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.828864 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:19 development kubelet[5769]: E1205 12:45:19.930480 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.030694 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.132448 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.232780 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.333483 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.434375 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.535380 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.636131 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.736782 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.837026 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:20 development kubelet[5769]: E1205 12:45:20.938087 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.039551 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.139758 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:21 development kubelet[5769]: E1205 12:45:21.240277 5769 kubelet.go:2263] node “minikube” not found Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447641 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “coredns-token-tqjjp” 
(UniqueName: “kubernetes.io/secret/9248326c-1b3b-4b00-bc8d-7aa89849f5af-coredns-token-tqjjp”) pod “coredns-6955765f44-tw2dt” (UID: “9248326c-1b3b-4b00-bc8d-7aa89849f5af”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447690 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “config-volume” (UniqueName: “kubernetes.io/configmap/ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1-config-volume”) pod “coredns-6955765f44-d4g9g” (UID: “ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447707 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “xtables-lock” (UniqueName: “kubernetes.io/host-path/60580c40-e4a2-4c84-a34f-706423195a72-xtables-lock”) pod “kube-proxy-z67hv” (UID: “60580c40-e4a2-4c84-a34f-706423195a72”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447723 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “kube-proxy-token-489wd” (UniqueName: “kubernetes.io/secret/60580c40-e4a2-4c84-a34f-706423195a72-kube-proxy-token-489wd”) pod “kube-proxy-z67hv” (UID: “60580c40-e4a2-4c84-a34f-706423195a72”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447775 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “storage-provisioner-token-z9rwj” (UniqueName: “kubernetes.io/secret/31207f1b-e2e1-4759-9055-147c13c015af-storage-provisioner-token-z9rwj”) pod “storage-provisioner” (UID: “31207f1b-e2e1-4759-9055-147c13c015af”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447817 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “coredns-token-tqjjp” (UniqueName: “kubernetes.io/secret/ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1-coredns-token-tqjjp”) pod “coredns-6955765f44-d4g9g” (UID: “ca0eea27-d79c-4d67-ab0d-0cf0e53efcb1”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447836 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “kube-proxy” (UniqueName: “kubernetes.io/configmap/60580c40-e4a2-4c84-a34f-706423195a72-kube-proxy”) pod “kube-proxy-z67hv” (UID: “60580c40-e4a2-4c84-a34f-706423195a72”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447851 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “lib-modules” (UniqueName: “kubernetes.io/host-path/60580c40-e4a2-4c84-a34f-706423195a72-lib-modules”) pod “kube-proxy-z67hv” (UID: “60580c40-e4a2-4c84-a34f-706423195a72”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447887 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “default-token-zfhq5” (UniqueName: “kubernetes.io/secret/b2ae733d-5b4d-4696-8ae8-adc628af8e56-default-token-zfhq5”) pod “registry-74q4h” (UID: “b2ae733d-5b4d-4696-8ae8-adc628af8e56”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447923 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “config-volume” (UniqueName: “kubernetes.io/configmap/9248326c-1b3b-4b00-bc8d-7aa89849f5af-config-volume”) pod “coredns-6955765f44-tw2dt” (UID: “9248326c-1b3b-4b00-bc8d-7aa89849f5af”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447938 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “default-token-zfhq5” (UniqueName: “kubernetes.io/secret/010e32f8-64f8-4fa5-a087-a6e29befb230-default-token-zfhq5”) pod “registry-proxy-jvwpv” (UID: 
“010e32f8-64f8-4fa5-a087-a6e29befb230”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.447974 5769 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume “tmp” (UniqueName: “kubernetes.io/host-path/31207f1b-e2e1-4759-9055-147c13c015af-tmp”) pod “storage-provisioner” (UID: “31207f1b-e2e1-4759-9055-147c13c015af”) Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.448070 5769 reconciler.go:156] Reconciler: start to sync state Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.744572 5769 kubelet_node_status.go:112] Node minikube was previously registered Dec 05 12:45:21 development kubelet[5769]: I1205 12:45:21.744728 5769 kubelet_node_status.go:73] Successfully registered node minikube Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.095122 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/coredns-6955765f44-d4g9g through plugin: invalid network status for Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.097996 5769 pod_container_deletor.go:75] Container “1ce7ab3aa409a217b54adee61f49b8b09a4a3c01894d61fa25ac866b76124152” not found in pod’s containers Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.161432 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/coredns-6955765f44-tw2dt through plugin: invalid network status for Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.165463 5769 pod_container_deletor.go:75] Container “11a14528716739b3d9a45a50df3a08acb9025d48e562bfafefec75bff3616692” not found in pod’s containers Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.177689 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/registry-proxy-jvwpv through plugin: invalid network status for Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.187125 5769 pod_container_deletor.go:75] Container “2f825d31868f22be426984cf1a22ef85708563176264e14f28f06f58f93a26a5” not found in pod’s containers Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.200903 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/registry-74q4h through plugin: invalid network status for Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.211262 5769 pod_container_deletor.go:75] Container “870d789b8d1f4dfbfcd588fb28a73f76c106a08217c0457431f5c333c9bc5f2e” not found in pod’s containers Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.216585 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/registry-74q4h through plugin: invalid network status for Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.225638 5769 pod_container_deletor.go:75] Container “e8992e1c5c8dd33e23be094d5b2cc0ad334dc364ac5fe08d89944115c8f1f4f5” not found in pod’s containers Dec 05 12:45:23 development kubelet[5769]: W1205 12:45:23.239531 5769 pod_container_deletor.go:75] Container “c7fbb0634b303b455a3a264336e4192f5d92449d6bdfb946a9c109e9a7c3efcb” not found in pod’s containers Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.249718 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/coredns-6955765f44-tw2dt through plugin: invalid network status for Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.257887 5769 docker_sandbox.go:394] failed to 
read pod IP from plugin/docker: Couldn’t find network status for kube-system/registry-proxy-jvwpv through plugin: invalid network status for Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.273402 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/coredns-6955765f44-d4g9g through plugin: invalid network status for Dec 05 12:45:24 development kubelet[5769]: W1205 12:45:24.290181 5769 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn’t find network status for kube-system/registry-74q4h through plugin: invalid network status for Dec 05 12:45:54 development kubelet[5769]: E1205 12:45:54.622700 5769 pod_workers.go:191] Error syncing pod 31207f1b-e2e1-4759-9055-147c13c015af (“storage-provisioner_kube-system(31207f1b-e2e1-4759-9055-147c13c015af)”), skipping: failed to “StartContainer” for “storage-provisioner” with CrashLoopBackOff: “back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(31207f1b-e2e1-4759-9055-147c13c015af)”

==> storage-provisioner [“7684a0b5cb3b”] <==

==> storage-provisioner [“d68791efff23”] <==
F1205 12:45:53.616669 1 main.go:37] Error getting server version: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: i/o timeout

The operating system version: macOS Catalina 10.15.1

minikube version

minikube version: v1.6.0-beta.1
commit: a7a5d1e981c85e97f35be38ff5cb8f510a570eea-dirty

vmrun --help

vmrun version 1.17.0 build-15018442

docker (minikube) info

Client:
 Debug Mode: false

Server:
 Containers: 44
  Running: 22
  Paused: 0
  Stopped: 22
 Images: 24
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version:
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.19.81
 Operating System: Buildroot 2019.02.7
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 15.66GiB
 Name: development
 ID: WWII:LKCO:Q32Y:XZL5:FALW:M4XN:HAO6:NWU7:BNRL:JN6I:3NY5:RUWK
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
  provider=vmware
 Experimental: false
 Insecure Registries:
  10.96.0.0/12
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

docker (for mac) info

Client:
 Debug Mode: false

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 1
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.184-linuxkit
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 6
 Total Memory: 1.952GiB
 Name: docker-desktop
 ID: 7WZI:OULF:7HCR:5XXT:6ICA:GB4V:XRZF:FB3Q:MUPL:J2K2:7GBZ:4ROZ
 Docker Root Dir: /var/lib/docker
 Debug Mode: true
  File Descriptors: 31
  Goroutines: 51
  System Time: 2019-12-05T13:00:42.572521705Z
  EventsListeners: 2
 HTTP Proxy: gateway.docker.internal:3128
 HTTPS Proxy: gateway.docker.internal:3129
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 19 (1 by maintainers)

Most upvoted comments

The wget seems fine, but apt-key is doing something strange when running in the container.

For some reason it is trying to close every file descriptor under the sun, far more than usual…

[pid    21] fcntl(3, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(4, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(5, F_SETFD, FD_CLOEXEC) = 0
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(6, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(7, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
[pid    21] getrlimit(RLIMIT_NOFILE, 0x7ffd94eae7f0) = 0
[pid    21] fcntl(8, F_SETFD, FD_CLOEXEC) = -1 EBADF (Bad file descriptor)
...
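The trace above can be reproduced against the base image directly, without minikube in the middle. A rough sketch, assuming the jessie repositories are still reachable so strace can be installed, and that the daemon allows ptrace (--cap-add SYS_PTRACE):

# Reproduction sketch (assumptions: network access, ptrace capability granted):
# install strace in a throwaway ruby:2.4.5-jessie container, fetch the key,
# then trace only the fcntl/getrlimit calls made by apt-key and its children.
docker run --rm --cap-add SYS_PTRACE ruby:2.4.5-jessie bash -c '
  apt-get update -qq && apt-get install -y -qq strace &&
  wget --quiet -O /tmp/ACCC4CF8.asc https://www.postgresql.org/media/keys/ACCC4CF8.asc &&
  strace -f -e trace=fcntl,getrlimit apt-key add /tmp/ACCC4CF8.asc 2>&1 | head -n 40
'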

https://packages.debian.org/jessie/apt

https://salsa.debian.org/apt-team/apt/-/blob/debian/jessie/apt-pkg/contrib/fileutl.cc

      // Close all of our FDs - just in case
      for (int K = 3; K != sysconf(_SC_OPEN_MAX); K++)
      {
         if (KeepFDs.find(K) == KeepFDs.end())
            fcntl(K, F_SETFD, FD_CLOEXEC);
      }

Unfortunately, it never checks the return value of sysconf, for example whether it is -1.

Or whether it returns a really big number, say 1048576, as it does inside Docker…
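If the open-file limit a container sees under the minikube daemon really is around 1048576, that loop performs roughly a million fcntl calls every time apt-key forks a helper, which would explain the 100% CPU and the multi-minute hang. A quick way to compare the limits, plus a hedged workaround sketch (--ulimit is a standard docker run flag; whether docker build accepts it depends on the Docker/BuildKit version in use):

# With the minikube docker-env active, check the soft/hard open-file limits a
# container actually sees; a soft limit near 1048576 reproduces the slow loop.
docker run --rm ruby:2.4.5-jessie bash -c 'ulimit -Sn; ulimit -Hn'

# Workaround sketch: cap the limit so sysconf(_SC_OPEN_MAX) stays small and
# apt-key's close-all-FDs loop finishes quickly.
docker run --rm --ulimit nofile=1024:1024 ruby:2.4.5-jessie bash -c \
  'wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -'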