minikube: Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition

Steps to reproduce the issue:

I am following this blogpost to set up Kubernetes and Minikube on WSL2 on Windows 10.

OS: Windows 10 Version 1909 build 18363.1256 WSL2 distro: Ubuntu 20.04 LTS

$ minikube version
minikube version: v1.17.1
commit: 043bdca07e54ab6e4fc0457e3064048f34133d7e
$ docker version
Client: Docker Engine - Community
 Cloud integration: 1.0.4
 Version:           20.10.0
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        7287ab3
 Built:             Tue Dec  8 18:59:53 2020
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.0
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       eeddea2
  Built:            Tue Dec  8 18:58:04 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          v1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:50:19Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.49.2:8443: i/o timeout

I can follow the above-mentioned blogpost up to the section "Minikube: the first cluster".

Instead of sudo minikube start --driver=none I run plain minikube start, but I get the following error:


$ minikube start
πŸ˜„  minikube v1.17.1 on Ubuntu 20.04
✨  Automatically selected the docker driver
πŸ‘  Starting control plane node minikube in cluster minikube
πŸ’Ύ  Downloading Kubernetes v1.20.2 preload ...
    > preloaded-images-k8s-v8-v1....: 491.22 MiB / 491.22 MiB  100.00% 7.38 MiB
πŸ”₯  Creating docker container (CPUs=2, Memory=6300MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    β–ͺ Generating certificates and keys ...
    β–ͺ Booting up control plane ...
    β–ͺ Configuring RBAC rules ...
πŸ”Ž  Verifying Kubernetes components...
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
🌟  Enabled addons: storage-provisioner

❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition

😿  If the above advice does not help, please let us know:
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
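As a quick sanity check for the i/o timeout, one can probe the cluster endpoint from the error message directly (192.168.49.2:8443 is the docker-driver default; on WSL2 with Docker Desktop the container IP is often not routable from the distro, which would produce exactly this timeout). A sketch:

```shell
# Probe the apiserver endpoint from the error message using bash's
# built-in /dev/tcp redirection; a failure here mirrors the
# "dial tcp 192.168.49.2:8443: i/o timeout" seen above.
if timeout 3 bash -c 'exec 3<>/dev/tcp/192.168.49.2/8443' 2>/dev/null; then
  echo "192.168.49.2:8443 is reachable"
else
  echo "192.168.49.2:8443 is NOT reachable"
fi
```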

This seems to be a similar error to the one in #9556, so to answer a few questions from there as well:

  1. Do you happen to use a VPN or a proxy? In a previous step of the blogpost, a proxy was set up to access a dashboard.

  2. Do you have a custom DOCKER_HOST? echo $DOCKER_HOST returns an empty string.

  3. Do you mind sharing env | grep DOCKER? env | grep DOCKER returns nothing.
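To make those environment checks reproducible, here is a small sketch that dumps the proxy- and Docker-related variables minikube is sensitive to (the variable names are the standard ones; which of them are actually set depends on your shell profile):

```shell
# Print proxy/Docker environment variables that can break minikube's
# connection to the apiserver; "<unset>" means the variable is absent.
for v in HTTP_PROXY HTTPS_PROXY NO_PROXY http_proxy https_proxy DOCKER_HOST; do
  printf '%s=%s\n' "$v" "${!v:-<unset>}"
done
```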

Full output of failed command (minikube start with the --alsologtostderr flag): minikube.log

Optional: Full output of minikube logs command: minkube2.log

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 15 (3 by maintainers)

Most upvoted comments

Hi @hardik-dadhich, can you try running minikube delete --all --purge and then starting minikube again after, I believe that should solve the issue you’re facing.
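For anyone landing here, the suggested reset spelled out as commands. Note this is destructive: it deletes every local minikube profile and the ~/.minikube directory. The guard is an addition for safety, so the snippet is a no-op on machines without minikube:

```shell
# Wipe all minikube state, then start fresh (destructive by design).
if command -v minikube >/dev/null 2>&1; then
  minikube delete --all --purge
  minikube start
else
  echo "minikube not found on PATH"
fi
```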

Yeah, properly supporting WSL with the docker and none drivers is on our roadmap for this year.

@spowelljr I upgraded minikube from v1.17.1 to v1.19.0 and tried again, but I still have the same error.

A ticket has been opened (https://github.com/kubernetes/website/issues/26998) to update the blog post I am following so it shows the correct commands for installing minikube on WSL2. Once that issue is resolved, I will report back here.

For completeness' sake, the output of minikube start --alsologtostderr -v=4:

Details
I0426 11:22:39.129761    8357 out.go:278] Setting OutFile to fd 1 ...
I0426 11:22:39.129872    8357 out.go:330] isatty.IsTerminal(1) = true
I0426 11:22:39.129904    8357 out.go:291] Setting ErrFile to fd 2...
I0426 11:22:39.129934    8357 out.go:330] isatty.IsTerminal(2) = true
I0426 11:22:39.130059    8357 root.go:317] Updating PATH: /home/jervan/.minikube/bin
I0426 11:22:39.130271    8357 out.go:285] Setting JSON to false
I0426 11:22:39.130836    8357 start.go:108] hostinfo: {"hostname":"KORCLT38593","uptime":9976,"bootTime":1619418983,"procs":24,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.72-microsoft-standard-WSL2","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"7a4cfed6-ab7e-e027-cfca-57f56019136b"}
I0426 11:22:39.130919    8357 start.go:118] virtualization:
I0426 11:22:39.133747    8357 out.go:157] πŸ˜„  minikube v1.19.0 on Ubuntu 20.04
πŸ˜„  minikube v1.19.0 on Ubuntu 20.04
I0426 11:22:39.133996    8357 notify.go:126] Checking for updates...
I0426 11:22:39.134180    8357 driver.go:322] Setting default libvirt URI to qemu:///system
I0426 11:22:39.265049    8357 docker.go:119] docker version: linux-20.10.0
I0426 11:22:39.265162    8357 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0426 11:22:39.467561    8357 info.go:261] docker info: {ID:JL3K:WVKO:J7EF:2FJM:XLYM:2YBF:SHAQ:EZUF:JJ2W:2NPI:O6QM:56ZT Containers:14 ContainersRunning:6 ContainersPaused:0 ContainersStopped:8 Images:17 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:80 SystemTime:2021-04-26 09:22:39.3669773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.4.72-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:26703204352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.4.2-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
I0426 11:22:39.467670    8357 docker.go:225] overlay module found
I0426 11:22:39.470889    8357 out.go:157] ✨  Using the docker driver based on existing profile
✨  Using the docker driver based on existing profile
I0426 11:22:39.470941    8357 start.go:276] selected driver: docker
I0426 11:22:39.470946    8357 start.go:718] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0426 11:22:39.471048    8357 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0426 11:22:39.471353    8357 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I0426 11:22:39.674040    8357 info.go:261] docker info: {ID:JL3K:WVKO:J7EF:2FJM:XLYM:2YBF:SHAQ:EZUF:JJ2W:2NPI:O6QM:56ZT Containers:14 ContainersRunning:6 ContainersPaused:0 ContainersStopped:8 Images:17 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:91 OomKillDisable:true NGoroutines:80 SystemTime:2021-04-26 09:22:39.5715841 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.4.72-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:26703204352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:de40ad0 Expected:de40ad0} 
SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: No blkio weight support WARNING: No blkio weight_device support WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.4.2-docker] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.5.0]] Warnings:<nil>}}
I0426 11:22:39.675795    8357 start_flags.go:270] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0426 11:22:39.683388    8357 out.go:157] πŸ‘  Starting control plane node minikube in cluster minikube
πŸ‘  Starting control plane node minikube in cluster minikube
I0426 11:22:39.683546    8357 image.go:107] Checking for gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon
I0426 11:22:39.845426    8357 image.go:111] Found gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 in local docker daemon, skipping pull
I0426 11:22:39.845482    8357 cache.go:116] gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 exists in daemon, skipping pull
I0426 11:22:39.845524    8357 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0426 11:22:39.845577    8357 preload.go:105] Found local preload: /home/jervan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0426 11:22:39.845618    8357 cache.go:54] Caching tarball of preloaded images
I0426 11:22:39.845668    8357 preload.go:131] Found /home/jervan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0426 11:22:39.845712    8357 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on docker
I0426 11:22:39.845843    8357 profile.go:148] Saving config to /home/jervan/.minikube/profiles/minikube/config.json ...
I0426 11:22:39.846038    8357 cache.go:185] Successfully downloaded all kic artifacts
I0426 11:22:39.846117    8357 start.go:313] acquiring machines lock for minikube: {Name:mk1a3b22b413de49c894457e3f7d0a1c11db5719 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0426 11:22:39.846319    8357 start.go:317] acquired machines lock for "minikube" in 157.4Β΅s
I0426 11:22:39.846364    8357 start.go:93] Skipping create...Using existing machine configuration
I0426 11:22:39.846396    8357 fix.go:55] fixHost starting:
I0426 11:22:39.846567    8357 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0426 11:22:40.000808    8357 fix.go:108] recreateIfNeeded on minikube: state=Running err=<nil>
W0426 11:22:40.000889    8357 fix.go:134] unexpected machine state, will restart: <nil>
I0426 11:22:40.004089    8357 out.go:157] πŸƒ  Updating the running docker "minikube" container ...
πŸƒ  Updating the running docker "minikube" container ...
I0426 11:22:40.004154    8357 machine.go:88] provisioning docker machine ...
I0426 11:22:40.004186    8357 ubuntu.go:169] provisioning hostname "minikube"
I0426 11:22:40.004256    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:40.152548    8357 main.go:126] libmachine: Using SSH client type: native
I0426 11:22:40.152859    8357 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x800360] 0x800320 <nil>  [] 0s} 127.0.0.1 49162 <nil> <nil>}
I0426 11:22:40.152896    8357 main.go:126] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0426 11:22:40.276875    8357 main.go:126] libmachine: SSH cmd err, output: <nil>: minikube

I0426 11:22:40.276950    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:40.425996    8357 main.go:126] libmachine: Using SSH client type: native
I0426 11:22:40.426156    8357 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x800360] 0x800320 <nil>  [] 0s} 127.0.0.1 49162 <nil> <nil>}
I0426 11:22:40.426200    8357 main.go:126] libmachine: About to run SSH command:

                if ! grep -xq '.*\sminikube' /etc/hosts; then
                        if grep -xq '127.0.1.1\s.*' /etc/hosts; then
                                sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
                        else
                                echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
                        fi
                fi
I0426 11:22:40.538452    8357 main.go:126] libmachine: SSH cmd err, output: <nil>:
I0426 11:22:40.538497    8357 ubuntu.go:175] set auth options {CertDir:/home/jervan/.minikube CaCertPath:/home/jervan/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jervan/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jervan/.minikube/machines/server.pem ServerKeyPath:/home/jervan/.minikube/machines/server-key.pem ClientKeyPath:/home/jervan/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jervan/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jervan/.minikube}
I0426 11:22:40.538559    8357 ubuntu.go:177] setting up certificates
I0426 11:22:40.538570    8357 provision.go:83] configureAuth start
I0426 11:22:40.538637    8357 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0426 11:22:40.711607    8357 provision.go:137] copyHostCerts
I0426 11:22:40.711694    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/certs/cert.pem -> /home/jervan/.minikube/cert.pem
I0426 11:22:40.711787    8357 exec_runner.go:145] found /home/jervan/.minikube/cert.pem, removing ...
I0426 11:22:40.711829    8357 exec_runner.go:190] rm: /home/jervan/.minikube/cert.pem
I0426 11:22:40.711933    8357 exec_runner.go:152] cp: /home/jervan/.minikube/certs/cert.pem --> /home/jervan/.minikube/cert.pem (1119 bytes)
I0426 11:22:40.712056    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/certs/key.pem -> /home/jervan/.minikube/key.pem
I0426 11:22:40.712111    8357 exec_runner.go:145] found /home/jervan/.minikube/key.pem, removing ...
I0426 11:22:40.712157    8357 exec_runner.go:190] rm: /home/jervan/.minikube/key.pem
I0426 11:22:40.712226    8357 exec_runner.go:152] cp: /home/jervan/.minikube/certs/key.pem --> /home/jervan/.minikube/key.pem (1675 bytes)
I0426 11:22:40.712314    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/certs/ca.pem -> /home/jervan/.minikube/ca.pem
I0426 11:22:40.712370    8357 exec_runner.go:145] found /home/jervan/.minikube/ca.pem, removing ...
I0426 11:22:40.712457    8357 exec_runner.go:190] rm: /home/jervan/.minikube/ca.pem
I0426 11:22:40.712543    8357 exec_runner.go:152] cp: /home/jervan/.minikube/certs/ca.pem --> /home/jervan/.minikube/ca.pem (1078 bytes)
I0426 11:22:40.712636    8357 provision.go:111] generating server cert: /home/jervan/.minikube/machines/server.pem ca-key=/home/jervan/.minikube/certs/ca.pem private-key=/home/jervan/.minikube/certs/ca-key.pem org=jervan.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0426 11:22:40.788173    8357 provision.go:165] copyRemoteCerts
I0426 11:22:40.788251    8357 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0426 11:22:40.788310    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:40.939617    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:22:41.022345    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/certs/ca.pem -> /etc/docker/ca.pem
I0426 11:22:41.022406    8357 ssh_runner.go:316] scp /home/jervan/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0426 11:22:41.041133    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/machines/server.pem -> /etc/docker/server.pem
I0426 11:22:41.041191    8357 ssh_runner.go:316] scp /home/jervan/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
I0426 11:22:41.059488    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
I0426 11:22:41.059547    8357 ssh_runner.go:316] scp /home/jervan/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0426 11:22:41.077119    8357 provision.go:86] duration metric: configureAuth took 538.5197ms
I0426 11:22:41.077170    8357 ubuntu.go:193] setting minikube options for container-runtime
I0426 11:22:41.077332    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:41.225557    8357 main.go:126] libmachine: Using SSH client type: native
I0426 11:22:41.225712    8357 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x800360] 0x800320 <nil>  [] 0s} 127.0.0.1 49162 <nil> <nil>}
I0426 11:22:41.225751    8357 main.go:126] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0426 11:22:41.349145    8357 main.go:126] libmachine: SSH cmd err, output: <nil>: overlay

I0426 11:22:41.349188    8357 ubuntu.go:71] root file system type: overlay
I0426 11:22:41.349325    8357 provision.go:296] Updating docker unit: /lib/systemd/system/docker.service ...
I0426 11:22:41.349413    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:41.511861    8357 main.go:126] libmachine: Using SSH client type: native
I0426 11:22:41.512032    8357 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x800360] 0x800320 <nil>  [] 0s} 127.0.0.1 49162 <nil> <nil>}
I0426 11:22:41.512116    8357 main.go:126] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0426 11:22:41.647666    8357 main.go:126] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

I0426 11:22:41.647817    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:41.805720    8357 main.go:126] libmachine: Using SSH client type: native
I0426 11:22:41.805941    8357 main.go:126] libmachine: &{{{<nil> 0 [] [] []} docker [0x800360] 0x800320 <nil>  [] 0s} 127.0.0.1 49162 <nil> <nil>}
I0426 11:22:41.805965    8357 main.go:126] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0426 11:22:41.923155    8357 main.go:126] libmachine: SSH cmd err, output: <nil>:
I0426 11:22:41.923213    8357 machine.go:91] provisioned docker machine in 1.9190382s
I0426 11:22:41.923269    8357 start.go:267] post-start starting for "minikube" (driver="docker")
I0426 11:22:41.923302    8357 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0426 11:22:41.923343    8357 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0426 11:22:41.923397    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:42.077411    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:22:42.119277    8357 ssh_runner.go:149] Run: cat /etc/os-release
I0426 11:22:42.122266    8357 main.go:126] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0426 11:22:42.122305    8357 main.go:126] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0426 11:22:42.122314    8357 main.go:126] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0426 11:22:42.122333    8357 info.go:137] Remote host: Ubuntu 20.04.1 LTS
I0426 11:22:42.122367    8357 filesync.go:118] Scanning /home/jervan/.minikube/addons for local assets ...
I0426 11:22:42.122439    8357 filesync.go:118] Scanning /home/jervan/.minikube/files for local assets ...
I0426 11:22:42.122475    8357 start.go:270] post-start completed in 199.1729ms
I0426 11:22:42.122537    8357 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0426 11:22:42.122602    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:42.284753    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:22:42.369002    8357 fix.go:57] fixHost completed within 2.5226041s
I0426 11:22:42.369045    8357 start.go:80] releasing machines lock for "minikube", held for 2.5226902s
I0426 11:22:42.369127    8357 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0426 11:22:42.566216    8357 ssh_runner.go:149] Run: systemctl --version
I0426 11:22:42.566506    8357 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
I0426 11:22:42.566542    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:42.566945    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:22:42.781338    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:22:42.781606    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:22:42.890656    8357 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
I0426 11:22:43.114269    8357 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0426 11:22:43.132277    8357 cruntime.go:219] skipping containerd shutdown because we are bound to it
I0426 11:22:43.132363    8357 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
I0426 11:22:43.142782    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0426 11:22:43.155943    8357 ssh_runner.go:149] Run: sudo systemctl cat docker.service
I0426 11:22:43.164906    8357 ssh_runner.go:149] Run: sudo systemctl daemon-reload
I0426 11:22:43.234750    8357 ssh_runner.go:149] Run: sudo systemctl start docker
I0426 11:22:43.244286    8357 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
I0426 11:22:43.283089    8357 out.go:184] 🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
I0426 11:22:43.283193    8357 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0426 11:22:43.430204    8357 ssh_runner.go:149] Run: grep 192.168.49.1       host.minikube.internal$ /etc/hosts
I0426 11:22:43.433985    8357 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0426 11:22:43.434049    8357 preload.go:105] Found local preload: /home/jervan/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-docker-overlay2-amd64.tar.lz4
I0426 11:22:43.434125    8357 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0426 11:22:43.463640    8357 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0426 11:22:43.463701    8357 docker.go:392] Images already preloaded, skipping extraction
I0426 11:22:43.463786    8357 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
I0426 11:22:43.493932    8357 docker.go:455] Got preloaded images: -- stdout --
gcr.io/k8s-minikube/storage-provisioner:v5
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/kube-scheduler:v1.20.2
kubernetesui/dashboard:v2.1.0
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
kubernetesui/metrics-scraper:v1.0.4
k8s.gcr.io/pause:3.2

-- /stdout --
I0426 11:22:43.493985    8357 cache_images.go:74] Images are preloaded, skipping loading
I0426 11:22:43.494072    8357 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
I0426 11:22:43.559541    8357 cni.go:81] Creating CNI manager for ""
I0426 11:22:43.559581    8357 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0426 11:22:43.559604    8357 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0426 11:22:43.559637    8357 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0426 11:22:43.559765    8357 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.20.2
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249

I0426 11:22:43.559872    8357 kubeadm.go:897] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0426 11:22:43.559945    8357 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
I0426 11:22:43.567138    8357 binaries.go:44] Found k8s binaries, skipping transfer
I0426 11:22:43.567211    8357 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0426 11:22:43.575157    8357 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
I0426 11:22:43.587751    8357 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0426 11:22:43.599987    8357 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1840 bytes)
I0426 11:22:43.612907    8357 ssh_runner.go:149] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0426 11:22:43.615911    8357 certs.go:52] Setting up /home/jervan/.minikube/profiles/minikube for IP: 192.168.49.2
I0426 11:22:43.615968    8357 certs.go:171] skipping minikubeCA CA generation: /home/jervan/.minikube/ca.key
I0426 11:22:43.615993    8357 certs.go:171] skipping proxyClientCA CA generation: /home/jervan/.minikube/proxy-client-ca.key
I0426 11:22:43.616056    8357 certs.go:282] skipping minikube-user signed cert generation: /home/jervan/.minikube/profiles/minikube/client.key
I0426 11:22:43.616097    8357 certs.go:282] skipping minikube signed cert generation: /home/jervan/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0426 11:22:43.616140    8357 certs.go:282] skipping aggregator signed cert generation: /home/jervan/.minikube/profiles/minikube/proxy-client.key
I0426 11:22:43.616171    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/profiles/minikube/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0426 11:22:43.616196    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/profiles/minikube/apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0426 11:22:43.616230    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/profiles/minikube/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0426 11:22:43.616267    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/profiles/minikube/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0426 11:22:43.616301    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
I0426 11:22:43.616335    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
I0426 11:22:43.616357    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0426 11:22:43.616406    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0426 11:22:43.616463    8357 certs.go:361] found cert: /home/jervan/.minikube/certs/home/jervan/.minikube/certs/ca-key.pem (1675 bytes)
I0426 11:22:43.616514    8357 certs.go:361] found cert: /home/jervan/.minikube/certs/home/jervan/.minikube/certs/ca.pem (1078 bytes)
I0426 11:22:43.616560    8357 certs.go:361] found cert: /home/jervan/.minikube/certs/home/jervan/.minikube/certs/cert.pem (1119 bytes)
I0426 11:22:43.616600    8357 certs.go:361] found cert: /home/jervan/.minikube/certs/home/jervan/.minikube/certs/key.pem (1675 bytes)
I0426 11:22:43.616649    8357 vm_assets.go:96] NewFileAsset: /home/jervan/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0426 11:22:43.617343    8357 ssh_runner.go:316] scp /home/jervan/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0426 11:22:43.634408    8357 ssh_runner.go:316] scp /home/jervan/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0426 11:22:43.652746    8357 ssh_runner.go:316] scp /home/jervan/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0426 11:22:43.671523    8357 ssh_runner.go:316] scp /home/jervan/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0426 11:22:43.691708    8357 ssh_runner.go:316] scp /home/jervan/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0426 11:22:43.710162    8357 ssh_runner.go:316] scp /home/jervan/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0426 11:22:43.728239    8357 ssh_runner.go:316] scp /home/jervan/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0426 11:22:43.745670    8357 ssh_runner.go:316] scp /home/jervan/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0426 11:22:43.762568    8357 ssh_runner.go:316] scp /home/jervan/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0426 11:22:43.780158    8357 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (740 bytes)
I0426 11:22:43.792697    8357 ssh_runner.go:149] Run: openssl version
I0426 11:22:43.797122    8357 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0426 11:22:43.805240    8357 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0426 11:22:43.808820    8357 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 Apr 26 08:48 /usr/share/ca-certificates/minikubeCA.pem
I0426 11:22:43.808879    8357 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0426 11:22:43.813247    8357 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0426 11:22:43.821072    8357 kubeadm.go:386] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.20@sha256:0250dab3644403384bd54f566921c6b57138eecffbb861f9392feef9b2ec44f6 Memory:6300 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
I0426 11:22:43.821194    8357 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0426 11:22:43.852241    8357 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0426 11:22:43.859605    8357 kubeadm.go:397] found existing configuration files, will attempt cluster restart
I0426 11:22:43.859667    8357 kubeadm.go:596] restartCluster start
I0426 11:22:43.859701    8357 ssh_runner.go:149] Run: sudo test -d /data/minikube
I0426 11:22:43.866576    8357 kubeadm.go:126] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
stdout:

stderr:
I0426 11:22:43.867344    8357 kubeconfig.go:93] found "minikube" server: "https://192.168.49.2:8443"
I0426 11:22:43.869109    8357 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jervan/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jervan/.minikube/profiles/minikube/client.key", CAFile:"/home/jervan/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a71760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0426 11:22:43.869572    8357 cert_rotation.go:137] Starting client certificate rotation controller
I0426 11:22:43.870160    8357 ssh_runner.go:149] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0426 11:22:43.877827    8357 api_server.go:146] Checking apiserver status ...
I0426 11:22:43.877894    8357 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0426 11:22:43.893532    8357 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1920/cgroup
I0426 11:22:43.901403    8357 api_server.go:162] apiserver freezer: "20:freezer:/docker/6f12f9a37b9e0559740ac958a2d9f28bdc92ab3f9a45cbe3df22add8bd865869/kubepods/burstable/podc767dbeb9ddd2d01964c2fc02c621c4e/26803d9e237c4a6fa043b7dae216934173d01668a8eaaf7da3e6f860b9a9662d"
I0426 11:22:43.901503    8357 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/6f12f9a37b9e0559740ac958a2d9f28bdc92ab3f9a45cbe3df22add8bd865869/kubepods/burstable/podc767dbeb9ddd2d01964c2fc02c621c4e/26803d9e237c4a6fa043b7dae216934173d01668a8eaaf7da3e6f860b9a9662d/freezer.state
I0426 11:22:43.908612    8357 api_server.go:184] freezer state: "THAWED"
I0426 11:22:43.908660    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:24:53.684992    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:24:53.685068    8357 kubeadm.go:575] needs reconfigure: apiserver in state Stopped
I0426 11:24:53.685093    8357 kubeadm.go:1020] stopping kube-system containers ...
I0426 11:24:53.685204    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0426 11:24:53.716427    8357 docker.go:293] Stopping containers: [5d01245c0a14 e540090f8da4 d2e34314d3d7 3948d79290a2 a0436291ae35 84abef2c2cf6 5eb185af9240 66d78a2ce48e f6273bfc6ca9 99f6b32970b2 26803d9e237c c0c33b716438 5a6fdb9838a5 0c86d24542e9 eacb672dbd6d d087acedd2d2]
I0426 11:24:53.716506    8357 ssh_runner.go:149] Run: docker stop 5d01245c0a14 e540090f8da4 d2e34314d3d7 3948d79290a2 a0436291ae35 84abef2c2cf6 5eb185af9240 66d78a2ce48e f6273bfc6ca9 99f6b32970b2 26803d9e237c c0c33b716438 5a6fdb9838a5 0c86d24542e9 eacb672dbd6d d087acedd2d2
I0426 11:24:58.871549    8357 ssh_runner.go:189] Completed: docker stop 5d01245c0a14 e540090f8da4 d2e34314d3d7 3948d79290a2 a0436291ae35 84abef2c2cf6 5eb185af9240 66d78a2ce48e f6273bfc6ca9 99f6b32970b2 26803d9e237c c0c33b716438 5a6fdb9838a5 0c86d24542e9 eacb672dbd6d d087acedd2d2: (5.1549929s)
I0426 11:24:58.871637    8357 ssh_runner.go:149] Run: sudo systemctl stop kubelet
I0426 11:24:58.930858    8357 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0426 11:24:58.939801    8357 kubeadm.go:154] found existing configuration files:
-rw------- 1 root root 5615 Apr 26 09:12 /etc/kubernetes/admin.conf
-rw------- 1 root root 5632 Apr 26 09:12 /etc/kubernetes/controller-manager.conf
-rw------- 1 root root 1971 Apr 26 09:12 /etc/kubernetes/kubelet.conf
-rw------- 1 root root 5576 Apr 26 09:12 /etc/kubernetes/scheduler.conf

I0426 11:24:58.939883    8357 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0426 11:24:58.947744    8357 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0426 11:24:58.955474    8357 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0426 11:24:58.962825    8357 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
stdout:

stderr:
I0426 11:24:58.962919    8357 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0426 11:24:58.970822    8357 ssh_runner.go:149] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0426 11:24:58.978899    8357 kubeadm.go:165] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
stdout:

stderr:
I0426 11:24:58.979006    8357 ssh_runner.go:149] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0426 11:24:58.986628    8357 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0426 11:24:58.993660    8357 kubeadm.go:672] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0426 11:24:58.993715    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0426 11:24:59.171066    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0426 11:25:00.039195    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0426 11:25:00.345706    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0426 11:25:00.536178    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0426 11:25:00.759271    8357 api_server.go:48] waiting for apiserver process to appear ...
I0426 11:25:00.759338    8357 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0426 11:25:00.776086    8357 api_server.go:68] duration metric: took 16.8161ms to wait for apiserver process to appear ...
I0426 11:25:00.776130    8357 api_server.go:84] waiting for apiserver healthz status ...
I0426 11:25:00.776192    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:27:11.915020    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:27:12.416248    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0426 11:27:12.446507    8357 logs.go:256] 2 containers: [6632851be9d3 26803d9e237c]
I0426 11:27:12.446585    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0426 11:27:12.475152    8357 logs.go:256] 2 containers: [3ee27c87a397 c0c33b716438]
I0426 11:27:12.475247    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0426 11:27:12.502664    8357 logs.go:256] 4 containers: [04114f628189 ec01d791d166 d2e34314d3d7 3948d79290a2]
I0426 11:27:12.502748    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0426 11:27:12.531999    8357 logs.go:256] 2 containers: [45a56499ae06 f6273bfc6ca9]
I0426 11:27:12.532071    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0426 11:27:12.561477    8357 logs.go:256] 2 containers: [19c40ee85c46 5eb185af9240]
I0426 11:27:12.561549    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0426 11:27:12.589933    8357 logs.go:256] 0 containers: []
W0426 11:27:12.589983    8357 logs.go:258] No container was found matching "kubernetes-dashboard"
I0426 11:27:12.590029    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0426 11:27:12.617790    8357 logs.go:256] 2 containers: [82a20bf1e446 5d01245c0a14]
I0426 11:27:12.617876    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0426 11:27:12.645717    8357 logs.go:256] 2 containers: [7e8f49b5c2be 99f6b32970b2]
I0426 11:27:12.645770    8357 logs.go:122] Gathering logs for kube-controller-manager [99f6b32970b2] ...
I0426 11:27:12.645780    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 99f6b32970b2"
I0426 11:27:12.689112    8357 logs.go:122] Gathering logs for kube-apiserver [26803d9e237c] ...
I0426 11:27:12.689190    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 26803d9e237c"
I0426 11:27:12.743292    8357 logs.go:122] Gathering logs for etcd [3ee27c87a397] ...
I0426 11:27:12.743349    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3ee27c87a397"
I0426 11:27:12.774270    8357 logs.go:122] Gathering logs for etcd [c0c33b716438] ...
I0426 11:27:12.774297    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c0c33b716438"
I0426 11:27:12.809190    8357 logs.go:122] Gathering logs for coredns [ec01d791d166] ...
I0426 11:27:12.809246    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 ec01d791d166"
I0426 11:27:12.840509    8357 logs.go:122] Gathering logs for coredns [3948d79290a2] ...
I0426 11:27:12.840555    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 3948d79290a2"
I0426 11:27:12.871185    8357 logs.go:122] Gathering logs for Docker ...
I0426 11:27:12.871231    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0426 11:27:12.887335    8357 logs.go:122] Gathering logs for kubelet ...
I0426 11:27:12.887381    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0426 11:27:12.943545    8357 logs.go:122] Gathering logs for dmesg ...
I0426 11:27:12.943606    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0426 11:27:12.957585    8357 logs.go:122] Gathering logs for coredns [d2e34314d3d7] ...
I0426 11:27:12.957633    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 d2e34314d3d7"
I0426 11:27:12.987259    8357 logs.go:122] Gathering logs for kube-scheduler [45a56499ae06] ...
I0426 11:27:12.987307    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 45a56499ae06"
I0426 11:27:13.016815    8357 logs.go:122] Gathering logs for kube-proxy [19c40ee85c46] ...
I0426 11:27:13.016862    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 19c40ee85c46"
I0426 11:27:13.047699    8357 logs.go:122] Gathering logs for kube-proxy [5eb185af9240] ...
I0426 11:27:13.047746    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 5eb185af9240"
I0426 11:27:13.079167    8357 logs.go:122] Gathering logs for storage-provisioner [82a20bf1e446] ...
I0426 11:27:13.079213    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 82a20bf1e446"
I0426 11:27:13.112607    8357 logs.go:122] Gathering logs for storage-provisioner [5d01245c0a14] ...
I0426 11:27:13.112659    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 5d01245c0a14"
I0426 11:27:13.144676    8357 logs.go:122] Gathering logs for describe nodes ...
I0426 11:27:13.144726    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0426 11:27:13.409117    8357 logs.go:122] Gathering logs for kube-apiserver [6632851be9d3] ...
I0426 11:27:13.409175    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 6632851be9d3"
I0426 11:27:13.446986    8357 logs.go:122] Gathering logs for kube-controller-manager [7e8f49b5c2be] ...
I0426 11:27:13.447041    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 7e8f49b5c2be"
I0426 11:27:13.485443    8357 logs.go:122] Gathering logs for container status ...
I0426 11:27:13.485490    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0426 11:27:13.502348    8357 logs.go:122] Gathering logs for coredns [04114f628189] ...
I0426 11:27:13.502394    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 04114f628189"
I0426 11:27:13.532659    8357 logs.go:122] Gathering logs for kube-scheduler [f6273bfc6ca9] ...
I0426 11:27:13.532715    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 f6273bfc6ca9"
I0426 11:27:16.065332    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:29:30.155062    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:29:30.415664    8357 kubeadm.go:600] restartCluster took 6m46.5559862s
W0426 11:29:30.415834    8357 out.go:222] 🀦  Unable to restart cluster, will reset it: apiserver health: apiserver healthz never reported healthy: cluster wait timed out during healthz check
I0426 11:29:30.415908    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0426 11:29:54.108787    8357 ssh_runner.go:189] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (23.6928381s)
I0426 11:29:54.108866    8357 ssh_runner.go:149] Run: sudo systemctl stop -f kubelet
I0426 11:29:54.119269    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0426 11:29:54.149807    8357 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0426 11:29:54.157421    8357 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I0426 11:29:54.157502    8357 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0426 11:29:54.164456    8357 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0426 11:29:54.164510    8357 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0426 11:29:54.748026    8357 out.go:184]     β–ͺ Generating certificates and keys ...
    β–ͺ Generating certificates and keys ...
I0426 11:29:55.521719    8357 out.go:184]     β–ͺ Booting up control plane ...
    β–ͺ Booting up control plane ...
I0426 11:30:09.560065    8357 out.go:184]     β–ͺ Configuring RBAC rules ...
    β–ͺ Configuring RBAC rules ...
I0426 11:30:09.975795    8357 cni.go:81] Creating CNI manager for ""
I0426 11:30:09.975848    8357 cni.go:153] CNI unnecessary in this configuration, recommending no CNI
I0426 11:30:09.975896    8357 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0426 11:30:09.975965    8357 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0426 11:30:09.976009    8357 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.19.0 minikube.k8s.io/commit=15cede53bdc5fe242228853e737333b09d4336b5 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_04_26T11_30_09_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0426 11:30:10.042860    8357 ops.go:34] apiserver oom_adj: -16
I0426 11:30:10.141980    8357 kubeadm.go:973] duration metric: took 166.0645ms to wait for elevateKubeSystemPrivileges.
I0426 11:30:10.142044    8357 kubeadm.go:388] StartCluster complete in 7m26.3209766s
I0426 11:30:10.142090    8357 settings.go:142] acquiring lock: {Name:mkb476309b92a734ca1cd9b20af4df3dde2265fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0426 11:30:10.142293    8357 settings.go:150] Updating kubeconfig:  /home/jervan/.kube/config
I0426 11:30:10.143262    8357 lock.go:36] WriteFile acquiring /home/jervan/.kube/config: {Name:mk1416ebb7846af0da642f12eb2c02964b83dd13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0426 11:30:10.145722    8357 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jervan/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jervan/.minikube/profiles/minikube/client.key", CAFile:"/home/jervan/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a71760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0426 11:30:40.147576    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0426 11:31:10.648151    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0426 11:31:41.149691    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0426 11:32:11.649446    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0426 11:32:42.148494    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
W0426 11:33:12.149767    8357 kapi.go:226] failed getting deployment scale, will retry: Get "https://192.168.49.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale": dial tcp 192.168.49.2:8443: i/o timeout
I0426 11:33:12.149822    8357 kapi.go:241] timed out trying to rescale deployment "coredns" in namespace "kube-system" and context "minikube" to 1: timed out waiting for the condition
E0426 11:33:12.149858    8357 start.go:131] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: timed out waiting for the condition
I0426 11:33:12.149902    8357 start.go:200] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
I0426 11:33:12.149984    8357 addons.go:328] enableAddons start: toEnable=map[], additional=[]
I0426 11:33:12.156831    8357 out.go:157] πŸ”Ž  Verifying Kubernetes components...

πŸ”Ž  Verifying Kubernetes components...
I0426 11:33:12.156928    8357 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
I0426 11:33:12.150046    8357 addons.go:55] Setting storage-provisioner=true in profile "minikube"
I0426 11:33:12.156972    8357 addons.go:131] Setting addon storage-provisioner=true in "minikube"
W0426 11:33:12.156984    8357 addons.go:140] addon storage-provisioner should already be in state true
I0426 11:33:12.156997    8357 host.go:66] Checking if "minikube" exists ...
I0426 11:33:12.150055    8357 addons.go:55] Setting default-storageclass=true in profile "minikube"
I0426 11:33:12.157057    8357 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0426 11:33:12.157252    8357 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0426 11:33:12.157321    8357 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0426 11:33:12.168589    8357 api_server.go:48] waiting for apiserver process to appear ...
I0426 11:33:12.168687    8357 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0426 11:33:12.191680    8357 api_server.go:68] duration metric: took 41.7377ms to wait for apiserver process to appear ...
I0426 11:33:12.191717    8357 api_server.go:84] waiting for apiserver healthz status ...
I0426 11:33:12.191746    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:33:12.322325    8357 kapi.go:59] client config for minikube: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jervan/.minikube/profiles/minikube/client.crt", KeyFile:"/home/jervan/.minikube/profiles/minikube/client.key", CAFile:"/home/jervan/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1a71760), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0426 11:33:12.327782    8357 out.go:157]     β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
    β–ͺ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0426 11:33:12.327995    8357 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0426 11:33:12.328034    8357 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0426 11:33:12.328137    8357 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0426 11:33:12.475498    8357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/jervan/.minikube/machines/minikube/id_rsa Username:docker}
I0426 11:33:12.521853    8357 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
W0426 11:33:42.324584    8357 out.go:222] ❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
❗  Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.168.49.2:8443: i/o timeout]
I0426 11:33:42.331483    8357 out.go:157] 🌟  Enabled addons: storage-provisioner
🌟  Enabled addons: storage-provisioner
I0426 11:33:42.331537    8357 addons.go:330] enableAddons completed in 30.1815626s
I0426 11:35:23.435177    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:35:23.936266    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0426 11:35:23.971534    8357 logs.go:256] 1 containers: [c67507cb2726]
I0426 11:35:23.971619    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0426 11:35:24.000718    8357 logs.go:256] 1 containers: [aac913e93cc2]
I0426 11:35:24.000804    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0426 11:35:24.031372    8357 logs.go:256] 2 containers: [0bbd80e5a1d7 c7f18c2ce192]
I0426 11:35:24.031464    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0426 11:35:24.060987    8357 logs.go:256] 1 containers: [64e90c5f5e85]
I0426 11:35:24.061100    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0426 11:35:24.089985    8357 logs.go:256] 1 containers: [cdeba80a861b]
I0426 11:35:24.090080    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0426 11:35:24.119243    8357 logs.go:256] 0 containers: []
W0426 11:35:24.119285    8357 logs.go:258] No container was found matching "kubernetes-dashboard"
I0426 11:35:24.119354    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0426 11:35:24.148449    8357 logs.go:256] 1 containers: [6fc8e75d2ddf]
I0426 11:35:24.148534    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0426 11:35:24.177108    8357 logs.go:256] 1 containers: [985a83aaea6c]
I0426 11:35:24.177172    8357 logs.go:122] Gathering logs for kubelet ...
I0426 11:35:24.177185    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0426 11:35:24.226960    8357 logs.go:122] Gathering logs for describe nodes ...
I0426 11:35:24.227020    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0426 11:35:24.300255    8357 logs.go:122] Gathering logs for etcd [aac913e93cc2] ...
I0426 11:35:24.300302    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 aac913e93cc2"
I0426 11:35:24.331718    8357 logs.go:122] Gathering logs for kube-scheduler [64e90c5f5e85] ...
I0426 11:35:24.331765    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 64e90c5f5e85"
I0426 11:35:24.363487    8357 logs.go:122] Gathering logs for kube-controller-manager [985a83aaea6c] ...
I0426 11:35:24.363533    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 985a83aaea6c"
I0426 11:35:24.404064    8357 logs.go:122] Gathering logs for container status ...
I0426 11:35:24.404110    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0426 11:35:24.420459    8357 logs.go:122] Gathering logs for dmesg ...
I0426 11:35:24.420500    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0426 11:35:24.435010    8357 logs.go:122] Gathering logs for kube-apiserver [c67507cb2726] ...
I0426 11:35:24.435055    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c67507cb2726"
I0426 11:35:24.475892    8357 logs.go:122] Gathering logs for coredns [0bbd80e5a1d7] ...
I0426 11:35:24.475951    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0bbd80e5a1d7"
I0426 11:35:24.505012    8357 logs.go:122] Gathering logs for coredns [c7f18c2ce192] ...
I0426 11:35:24.505058    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c7f18c2ce192"
I0426 11:35:24.534036    8357 logs.go:122] Gathering logs for kube-proxy [cdeba80a861b] ...
I0426 11:35:24.534081    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cdeba80a861b"
I0426 11:35:24.564577    8357 logs.go:122] Gathering logs for storage-provisioner [6fc8e75d2ddf] ...
I0426 11:35:24.564622    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 6fc8e75d2ddf"
I0426 11:35:24.594645    8357 logs.go:122] Gathering logs for Docker ...
I0426 11:35:24.594691    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0426 11:35:27.112165    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:37:36.555358    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:37:36.936408    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0426 11:37:36.966993    8357 logs.go:256] 1 containers: [c67507cb2726]
I0426 11:37:36.967097    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0426 11:37:36.994045    8357 logs.go:256] 1 containers: [aac913e93cc2]
I0426 11:37:36.994128    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0426 11:37:37.022063    8357 logs.go:256] 2 containers: [0bbd80e5a1d7 c7f18c2ce192]
I0426 11:37:37.022157    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0426 11:37:37.049679    8357 logs.go:256] 1 containers: [64e90c5f5e85]
I0426 11:37:37.049763    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0426 11:37:37.077015    8357 logs.go:256] 1 containers: [cdeba80a861b]
I0426 11:37:37.077121    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0426 11:37:37.106340    8357 logs.go:256] 0 containers: []
W0426 11:37:37.106381    8357 logs.go:258] No container was found matching "kubernetes-dashboard"
I0426 11:37:37.106437    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0426 11:37:37.134375    8357 logs.go:256] 1 containers: [6fc8e75d2ddf]
I0426 11:37:37.134455    8357 ssh_runner.go:149] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0426 11:37:37.162347    8357 logs.go:256] 1 containers: [985a83aaea6c]
I0426 11:37:37.162396    8357 logs.go:122] Gathering logs for coredns [c7f18c2ce192] ...
I0426 11:37:37.162429    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c7f18c2ce192"
I0426 11:37:37.191057    8357 logs.go:122] Gathering logs for kube-scheduler [64e90c5f5e85] ...
I0426 11:37:37.191102    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 64e90c5f5e85"
I0426 11:37:37.222778    8357 logs.go:122] Gathering logs for storage-provisioner [6fc8e75d2ddf] ...
I0426 11:37:37.222830    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 6fc8e75d2ddf"
I0426 11:37:37.251669    8357 logs.go:122] Gathering logs for kube-controller-manager [985a83aaea6c] ...
I0426 11:37:37.251715    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 985a83aaea6c"
I0426 11:37:37.292770    8357 logs.go:122] Gathering logs for Docker ...
I0426 11:37:37.292832    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0426 11:37:37.310257    8357 logs.go:122] Gathering logs for kubelet ...
I0426 11:37:37.310308    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0426 11:37:37.366154    8357 logs.go:122] Gathering logs for kube-apiserver [c67507cb2726] ...
I0426 11:37:37.366216    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 c67507cb2726"
I0426 11:37:37.405380    8357 logs.go:122] Gathering logs for coredns [0bbd80e5a1d7] ...
I0426 11:37:37.405436    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 0bbd80e5a1d7"
I0426 11:37:37.435875    8357 logs.go:122] Gathering logs for container status ...
I0426 11:37:37.435920    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0426 11:37:37.452319    8357 logs.go:122] Gathering logs for kube-proxy [cdeba80a861b] ...
I0426 11:37:37.452365    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 cdeba80a861b"
I0426 11:37:37.482974    8357 logs.go:122] Gathering logs for dmesg ...
I0426 11:37:37.483034    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0426 11:37:37.498058    8357 logs.go:122] Gathering logs for describe nodes ...
I0426 11:37:37.498104    8357 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0426 11:37:37.750455    8357 logs.go:122] Gathering logs for etcd [aac913e93cc2] ...
I0426 11:37:37.750511    8357 ssh_runner.go:149] Run: /bin/bash -c "docker logs --tail 400 aac913e93cc2"
I0426 11:37:40.284275    8357 api_server.go:221] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0426 11:39:49.675191    8357 api_server.go:231] stopped: https://192.168.49.2:8443/healthz: Get "https://192.168.49.2:8443/healthz": dial tcp 192.168.49.2:8443: connect: connection timed out
I0426 11:39:49.681888    8357 out.go:157]

W0426 11:39:49.682079    8357 out.go:222] ❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
❌  Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
W0426 11:39:49.682128    8357 out.go:222]

W0426 11:39:49.682180    8357 out.go:222] 😿  If the above advice does not help, please let us know:
😿  If the above advice does not help, please let us know:
W0426 11:39:49.682230    8357 out.go:222] πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
πŸ‘‰  https://github.com/kubernetes/minikube/issues/new/choose
I0426 11:39:49.684313    8357 out.go:157]
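Every failure in the log above is the same symptom: `dial tcp 192.168.49.2:8443: connect: connection timed out`, i.e. the WSL2 shell cannot reach the docker-driver container's IP, while the cluster bootstrap inside the container (certificates, control plane, RBAC) appears to succeed. To help narrow down whether the apiserver itself is unhealthy or only the WSL2-to-container network path is broken, here is a small diagnostic sketch I can run. It assumes the default profile/container name `minikube` and that the kicbase image ships `curl`; it is a troubleshooting aid, not a fix.

```shell
#!/bin/sh
# Quick reachability checks for the docker-driver control plane.
# Assumes the default profile name "minikube"; run from the WSL2 shell.

if command -v docker >/dev/null 2>&1; then
  # Is the minikube container actually up?
  docker ps --filter name=minikube --format '{{.Names}}: {{.Status}}'

  # Query the apiserver health endpoint from INSIDE the container,
  # bypassing the WSL2 -> 192.168.49.2 network path entirely.
  # "ok" here while the host times out points at WSL2 networking.
  docker exec minikube curl -sk https://localhost:8443/healthz || true
  echo

  # Which subnet did the "minikube" docker network get?
  docker network inspect minikube \
    --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' || true
else
  echo "docker CLI not found; skipping checks"
fi
```

If the in-container `healthz` check returns `ok` but the same URL times out from WSL2, the problem is the route to the `minikube` docker network rather than the apiserver, which would fit the `GUEST_START` timeout reported above.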