minikube: 'minikube start --container-runtime=cri-o --wait=all' fails: timed out waiting for the condition - v1.24.0
Minikube version: v1.24.0. System: Linux (reproduced on Ubuntu 20.04 and Fedora 33).
$ minikube start --container-runtime=cri-o --wait=all
minikube v1.24.0 on Fedora 33
Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
Starting control plane node minikube in cluster minikube
Pulling base image ...
Creating docker container (CPUs=2, Memory=7900MB) ...
Preparing Kubernetes v1.22.3 on CRI-O 1.22.0 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
Configuring CNI (Container Networking Interface) ...
Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
Enabled addons: storage-provisioner, default-storageclass
E1111 00:44:02.969602 904529 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
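Note on the flag: --wait=all makes minikube block until every component in the VerifyComponents map shown in the log below (apiserver, system_pods, default_sa, apps_running, kubelet, node_ready, extra) is healthy, and the "WaitExtra" error above comes from the extra pod checks. As a hedged workaround sketch while the CRI-O problem is debugged, and assuming the broken pods can be tolerated for the moment, start can be told to wait only for the default component set:

# Workaround sketch (assumption: only the extra pod readiness checks block;
# this does not fix the failing pods, it only lets 'minikube start' return):
$ minikube start --container-runtime=cri-o --wait=apiserver,system_pods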
Running kubectl get pod -A from another terminal:
$ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcd69978-rnpww 0/1 ContainerCreating 0 5m55s
kube-system etcd-minikube 1/1 Running 0 6m8s
kube-system kindnet-6848p 1/1 Running 2 (31s ago) 5m55s
kube-system kube-apiserver-minikube 1/1 Running 0 6m1s
kube-system kube-controller-manager-minikube 1/1 Running 0 6m9s
kube-system kube-proxy-vv4gk 0/1 CreateContainerError 0 5m55s
kube-system kube-scheduler-minikube 1/1 Running 0 6m1s
kube-system storage-provisioner 1/1 Running 5 (114s ago) 5m53s
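kube-proxy is stuck in CreateContainerError and coredns never leaves ContainerCreating, which points at the container runtime rather than the control plane (every static pod is Running). A few hedged diagnostic commands that should surface the concrete error, using the pod name from the table above; their output was not captured for this report:

# Events on the failing pod carry the CreateContainerError reason:
$ kubectl -n kube-system describe pod kube-proxy-vv4gk
# Container states and recent daemon logs as CRI-O sees them, inside the node:
$ minikube ssh -- sudo crictl ps -a
$ minikube ssh -- sudo journalctl -u crio --no-pager | tail -n 50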
Output from minikube logs:
*
* ==> Audit <==
* |---------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| delete | | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 20:50:58 PST | Mon, 08 Nov 2021 20:51:00 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 20:51:20 PST | Mon, 08 Nov 2021 20:51:57 PST |
| delete | | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:07:09 PST | Mon, 08 Nov 2021 21:07:11 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:08:34 PST | Mon, 08 Nov 2021 21:09:10 PST |
| delete | | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:11:54 PST | Mon, 08 Nov 2021 21:11:57 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:12:03 PST | Mon, 08 Nov 2021 21:12:44 PST |
| --help | | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:34:06 PST | Mon, 08 Nov 2021 21:34:06 PST |
| start | --help | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:34:17 PST | Mon, 08 Nov 2021 21:34:17 PST |
| delete | | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:40:17 PST | Mon, 08 Nov 2021 21:40:19 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Mon, 08 Nov 2021 21:42:26 PST | Mon, 08 Nov 2021 21:43:05 PST |
| profile | list | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 09:45:46 PST | Tue, 09 Nov 2021 09:45:47 PST |
| delete | | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 12:31:19 PST | Tue, 09 Nov 2021 12:31:21 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 12:31:35 PST | Tue, 09 Nov 2021 12:32:17 PST |
| delete | | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 13:58:24 PST | Tue, 09 Nov 2021 13:58:27 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 13:59:00 PST | Tue, 09 Nov 2021 13:59:42 PST |
| delete | | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 18:29:51 PST | Tue, 09 Nov 2021 18:29:53 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 18:45:34 PST | Tue, 09 Nov 2021 18:46:09 PST |
| delete | | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 20:03:02 PST | Tue, 09 Nov 2021 20:03:05 PST |
| start | container-runtime=containerd | minikube | muth | v1.23.0 | Tue, 09 Nov 2021 20:03:37 PST | Tue, 09 Nov 2021 20:03:58 PST |
| logs | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 00:15:13 PST | Wed, 10 Nov 2021 00:16:16 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 00:17:14 PST | Wed, 10 Nov 2021 00:17:16 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 01:12:54 PST | Wed, 10 Nov 2021 01:12:54 PST |
| start | 00container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 01:15:21 PST | Wed, 10 Nov 2021 01:15:46 PST |
| logs | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 01:17:39 PST | Wed, 10 Nov 2021 01:17:40 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 01:18:51 PST | Wed, 10 Nov 2021 01:18:54 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 01:19:43 PST | Wed, 10 Nov 2021 01:20:19 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 10:03:56 PST | Wed, 10 Nov 2021 10:03:59 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 10:04:21 PST | Wed, 10 Nov 2021 10:04:46 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 10:14:03 PST | Wed, 10 Nov 2021 10:14:05 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 10:14:52 PST | Wed, 10 Nov 2021 10:15:28 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 17:58:37 PST | Wed, 10 Nov 2021 17:58:40 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 17:59:14 PST | Wed, 10 Nov 2021 17:59:52 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 18:02:48 PST | Wed, 10 Nov 2021 18:02:51 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 18:04:22 PST | Wed, 10 Nov 2021 18:04:59 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:07:31 PST | Wed, 10 Nov 2021 19:07:34 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:08:07 PST | Wed, 10 Nov 2021 19:08:43 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:25:29 PST | Wed, 10 Nov 2021 19:25:31 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:25:35 PST | Wed, 10 Nov 2021 19:26:11 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:28:44 PST | Wed, 10 Nov 2021 19:28:47 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:28:55 PST | Wed, 10 Nov 2021 19:29:34 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:34:49 PST | Wed, 10 Nov 2021 19:34:52 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:34:59 PST | Wed, 10 Nov 2021 19:35:36 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:48:33 PST | Wed, 10 Nov 2021 19:48:36 PST |
| start | --container-runtime=cri-o | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:48:54 PST | Wed, 10 Nov 2021 19:49:21 PST |
| logs | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:52:36 PST | Wed, 10 Nov 2021 19:52:55 PST |
| logs | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 19:56:25 PST | Wed, 10 Nov 2021 19:56:30 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:01:16 PST | Wed, 10 Nov 2021 20:01:18 PST |
| start | --container-runtime=cri-o | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:01:26 PST | Wed, 10 Nov 2021 20:01:58 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:03:02 PST | Wed, 10 Nov 2021 20:03:05 PST |
| start | --container-runtime=containerd | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:03:12 PST | Wed, 10 Nov 2021 20:03:53 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:04:15 PST | Wed, 10 Nov 2021 20:04:17 PST |
| start | --container-runtime=cri-o | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:04:23 PST | Wed, 10 Nov 2021 20:04:49 PST |
| delete | | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:08:08 PST | Wed, 10 Nov 2021 20:08:11 PST |
| start | --container-runtime=cri-o | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 20:09:20 PST | Wed, 10 Nov 2021 20:09:49 PST |
| start | --help | minikube | muth | v1.23.0 | Wed, 10 Nov 2021 22:46:44 PST | Wed, 10 Nov 2021 22:46:44 PST |
| delete | | minikube | muth | v1.24.0 | Thu, 11 Nov 2021 00:27:04 PST | Thu, 11 Nov 2021 00:27:07 PST |
| start | --container-runtime=cri-o | minikube | muth | v1.24.0 | Thu, 11 Nov 2021 00:27:31 PST | Thu, 11 Nov 2021 00:29:35 PST |
| logs | | minikube | muth | v1.24.0 | Thu, 11 Nov 2021 00:31:44 PST | Thu, 11 Nov 2021 00:31:44 PST |
| logs | | minikube | muth | v1.24.0 | Thu, 11 Nov 2021 00:33:30 PST | Thu, 11 Nov 2021 00:33:31 PST |
| delete | | minikube | muth | v1.24.0 | Thu, 11 Nov 2021 00:39:11 PST | Thu, 11 Nov 2021 00:39:14 PST |
|---------|--------------------------------|----------|------|---------|-------------------------------|-------------------------------|
*
* ==> Last Start <==
* Log file created at: 2021/11/11 00:39:23
Running on machine: muth-workstation
Binary: Built with gc go1.16.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1111 00:39:23.454896 904529 out.go:297] Setting OutFile to fd 1 ...
I1111 00:39:23.454978 904529 out.go:349] isatty.IsTerminal(1) = true
I1111 00:39:23.454982 904529 out.go:310] Setting ErrFile to fd 2...
I1111 00:39:23.454986 904529 out.go:349] isatty.IsTerminal(2) = true
I1111 00:39:23.455113 904529 root.go:313] Updating PATH: /home/muth/.minikube/bin
I1111 00:39:23.455401 904529 out.go:304] Setting JSON to false
I1111 00:39:23.477816 904529 start.go:112] hostinfo: {"hostname":"muth-workstation","uptime":1835562,"bootTime":1634784401,"procs":526,"os":"linux","platform":"fedora","platformFamily":"fedora","platformVersion":"33","kernelVersion":"5.13.12-100.fc33.x86_64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"833e81a8-3948-4a50-a735-b6a1d661335e"}
I1111 00:39:23.478015 904529 start.go:122] virtualization: kvm host
I1111 00:39:23.480581 904529 out.go:176] minikube v1.24.0 on Fedora 33
I1111 00:39:23.480927 904529 notify.go:174] Checking for updates...
I1111 00:39:23.481124 904529 driver.go:343] Setting default libvirt URI to qemu:///system
I1111 00:39:23.481166 904529 global.go:111] Querying for installed drivers using PATH=/home/muth/.minikube/bin:/home/muth/.local/bin:/home/muth/bin:/usr/share/Modules/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/usr/local/go/bin:/usr/local/kubebuilder/bin:/home/muth/private/go/bin
I1111 00:39:23.481182 904529 global.go:119] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1111 00:39:23.811680 904529 global.go:119] virtualbox default: true priority: 6, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1111 00:39:23.811872 904529 global.go:119] vmware default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Reason: Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I1111 00:39:23.883711 904529 docker.go:132] docker version: linux-19.03.13
I1111 00:39:23.883807 904529 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1111 00:39:23.938152 904529 info.go:263] docker info: {ID:53FF:CEBX:73CK:EYFY:FLGJ:GV7V:7T6Z:PUL4:KHVS:7BCK:SQXN:C7ED Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:391 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-11 00:39:23.914396305 -0800 PST LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:5.13.12-100.fc33.x86_64 OperatingSystem:Fedora 33 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33495408640 GenericResources:<nil> DockerRootDir:/home/docker HTTPProxy: HTTPSProxy: NoProxy: Name:muth-workstation Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:true Isolation: InitBinary:/usr/libexec/docker/docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:448f591 Expected:448f591} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/home/muth/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3]] Warnings:<nil>}}
I1111 00:39:23.938211 904529 docker.go:237] overlay module found
I1111 00:39:23.938216 904529 global.go:119] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1111 00:39:23.995309 904529 global.go:119] kvm2 default: true priority: 8, state: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1111 00:39:24.002237 904529 global.go:119] none default: false priority: 4, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:running the 'none' driver as a regular user requires sudo permissions Reason: Fix: Doc:}
W1111 00:39:24.008470 904529 podman.go:136] podman returned error: exit status 1
I1111 00:39:24.008491 904529 global.go:119] podman default: true priority: 7, state: {Installed:true Healthy:false Running:false NeedsImprovement:false Error:"sudo -k -n podman version --format {{.Version}}" exit status 1: sudo: a password is required Reason: Fix:Add your user to the 'sudoers' file: 'muth ALL=(ALL) NOPASSWD: /usr/bin/podman' Doc:https://podman.io}
I1111 00:39:24.008504 904529 driver.go:278] not recommending "ssh" due to default: false
I1111 00:39:24.008508 904529 driver.go:273] not recommending "podman" due to health: "sudo -k -n podman version --format {{.Version}}" exit status 1: sudo: a password is required
I1111 00:39:24.008516 904529 driver.go:313] Picked: docker
I1111 00:39:24.008519 904529 driver.go:314] Alternatives: [kvm2 virtualbox ssh]
I1111 00:39:24.008526 904529 driver.go:315] Rejects: [vmware none podman]
I1111 00:39:24.009442 904529 out.go:176] Automatically selected the docker driver. Other choices: kvm2, virtualbox, ssh
I1111 00:39:24.009457 904529 start.go:280] selected driver: docker
I1111 00:39:24.009460 904529 start.go:762] validating driver "docker" against <nil>
I1111 00:39:24.009470 904529 start.go:773] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I1111 00:39:24.009518 904529 cli_runner.go:115] Run: docker system info --format "{{json .}}"
I1111 00:39:24.065651 904529 info.go:263] docker info: {ID:53FF:CEBX:73CK:EYFY:FLGJ:GV7V:7T6Z:PUL4:KHVS:7BCK:SQXN:C7ED Containers:5 ContainersRunning:1 ContainersPaused:0 ContainersStopped:4 Images:391 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2021-11-11 00:39:24.040293361 -0800 PST LoggingDriver:journald CgroupDriver:systemd NEventsListener:0 KernelVersion:5.13.12-100.fc33.x86_64 OperatingSystem:Fedora 33 (Workstation Edition) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:12 MemTotal:33495408640 GenericResources:<nil> DockerRootDir:/home/docker HTTPProxy: HTTPSProxy: NoProxy: Name:muth-workstation Labels:[] ExperimentalBuild:false ServerVersion:19.03.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:true Isolation: InitBinary:/usr/libexec/docker/docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:448f591 Expected:448f591} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/home/muth/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3]] Warnings:<nil>}}
I1111 00:39:24.065708 904529 start_flags.go:268] no existing cluster config was found, will generate one from the flags
I1111 00:39:24.078228 904529 start_flags.go:349] Using suggested 7900MB memory alloc based on sys=31943MB, container=31943MB
I1111 00:39:24.078303 904529 start_flags.go:754] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1111 00:39:24.078314 904529 cni.go:93] Creating CNI manager for ""
I1111 00:39:24.078319 904529 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
I1111 00:39:24.078326 904529 start_flags.go:277] Found "CNI" CNI - setting NetworkPlugin=cni
I1111 00:39:24.078333 904529 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/muth:/minikube-host}
I1111 00:39:24.079411 904529 out.go:176] Starting control plane node minikube in cluster minikube
I1111 00:39:24.079432 904529 cache.go:118] Beginning downloading kic base image for docker with crio
I1111 00:39:24.080206 904529 out.go:176] Pulling base image ...
I1111 00:39:24.080225 904529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime crio
I1111 00:39:24.080242 904529 preload.go:148] Found local preload: /home/muth/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-amd64.tar.lz4
I1111 00:39:24.080246 904529 cache.go:57] Caching tarball of preloaded images
I1111 00:39:24.080287 904529 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon
I1111 00:39:24.080377 904529 preload.go:174] Found /home/muth/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
I1111 00:39:24.080387 904529 cache.go:60] Finished verifying existence of preloaded tar for v1.22.3 on crio
I1111 00:39:24.080591 904529 profile.go:147] Saving config to /home/muth/.minikube/profiles/minikube/config.json ...
I1111 00:39:24.080603 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/config.json: {Name:mk80e6e753e531524e92f228b03612930a8af5f8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:24.163698 904529 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local docker daemon, skipping pull
I1111 00:39:24.163707 904529 cache.go:140] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in daemon, skipping load
I1111 00:39:24.163711 904529 cache.go:206] Successfully downloaded all kic artifacts
I1111 00:39:24.163733 904529 start.go:313] acquiring machines lock for minikube: {Name:mkec2f7035e44e8076c14aa815d37c51a7eb0008 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1111 00:39:24.163783 904529 start.go:317] acquired machines lock for "minikube" in 40.638µs
I1111 00:39:24.163792 904529 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/muth:/minikube-host} &{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1111 00:39:24.163844 904529 start.go:126] createHost starting for "" (driver="docker")
I1111 00:39:24.165193 904529 out.go:203] Creating docker container (CPUs=2, Memory=7900MB) ...
I1111 00:39:24.165350 904529 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I1111 00:39:24.165360 904529 client.go:168] LocalClient.Create starting
I1111 00:39:24.165387 904529 main.go:130] libmachine: Reading certificate data from /home/muth/.minikube/certs/ca.pem
I1111 00:39:24.165404 904529 main.go:130] libmachine: Decoding PEM data...
I1111 00:39:24.165412 904529 main.go:130] libmachine: Parsing certificate...
I1111 00:39:24.165476 904529 main.go:130] libmachine: Reading certificate data from /home/muth/.minikube/certs/cert.pem
I1111 00:39:24.165487 904529 main.go:130] libmachine: Decoding PEM data...
I1111 00:39:24.165495 904529 main.go:130] libmachine: Parsing certificate...
I1111 00:39:24.165701 904529 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1111 00:39:24.194828 904529 cli_runner.go:162] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1111 00:39:24.194863 904529 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs...
I1111 00:39:24.194872 904529 cli_runner.go:115] Run: docker network inspect minikube
W1111 00:39:24.223970 904529 cli_runner.go:162] docker network inspect minikube returned with exit code 1
I1111 00:39:24.224270 904529 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]
stderr:
Error: No such network: minikube
I1111 00:39:24.224294 904529 network_create.go:259] output of [docker network inspect minikube]: -- stdout --
[]
-- /stdout --
** stderr **
Error: No such network: minikube
** /stderr **
I1111 00:39:24.224360 904529 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1111 00:39:24.254648 904529 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0006a8090] misses:0}
I1111 00:39:24.254671 904529 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I1111 00:39:24.254680 904529 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1111 00:39:24.254711 904529 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I1111 00:39:24.403143 904529 network_create.go:90] docker network minikube 192.168.49.0/24 created
I1111 00:39:24.403153 904529 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I1111 00:39:24.403190 904529 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
I1111 00:39:24.432859 904529 cli_runner.go:115] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I1111 00:39:24.462023 904529 oci.go:102] Successfully created a docker volume minikube
I1111 00:39:24.462065 904529 cli_runner.go:115] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -d /var/lib
I1111 00:39:24.828514 904529 oci.go:106] Successfully prepared a docker volume minikube
W1111 00:39:24.828562 904529 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
I1111 00:39:24.828571 904529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime crio
W1111 00:39:24.828571 904529 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I1111 00:39:24.828588 904529 kic.go:179] Starting extracting preloaded images to volume ...
I1111 00:39:24.828628 904529 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
I1111 00:39:24.828641 904529 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/muth/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir
I1111 00:39:24.904443 904529 cli_runner.go:115] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c
I1111 00:39:25.268063 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Running}}
I1111 00:39:25.305806 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:25.338052 904529 cli_runner.go:115] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I1111 00:39:25.416581 904529 oci.go:281] the created container "minikube" has a running status.
I1111 00:39:25.416591 904529 kic.go:210] Creating ssh key for kic: /home/muth/.minikube/machines/minikube/id_rsa...
I1111 00:39:25.565849 904529 kic_runner.go:187] docker (temp): /home/muth/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I1111 00:39:25.631411 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:25.665091 904529 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I1111 00:39:25.665098 904529 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I1111 00:39:27.789611 904529 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/muth/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c -I lz4 -xf /preloaded.tar -C /extractDir: (2.960934393s)
I1111 00:39:27.789626 904529 kic.go:188] duration metric: took 2.961038 seconds to extract preloaded images to volume
I1111 00:39:27.789679 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:27.826216 904529 machine.go:88] provisioning docker machine ...
I1111 00:39:27.826234 904529 ubuntu.go:169] provisioning hostname "minikube"
I1111 00:39:27.826272 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:27.861981 904529 main.go:130] libmachine: Using SSH client type: native
I1111 00:39:27.862103 904529 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802cc0] 0x802c80 <nil> [] 0s} 127.0.0.1 32982 <nil> <nil>}
I1111 00:39:27.862110 904529 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1111 00:39:27.999562 904529 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube
I1111 00:39:27.999646 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:28.045878 904529 main.go:130] libmachine: Using SSH client type: native
I1111 00:39:28.045974 904529 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802cc0] 0x802c80 <nil> [] 0s} 127.0.0.1 32982 <nil> <nil>}
I1111 00:39:28.045983 904529 main.go:130] libmachine: About to run SSH command:
if ! grep -xq '.*\sminikube' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
  else
    echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
  fi
fi
I1111 00:39:28.151973 904529 main.go:130] libmachine: SSH cmd err, output: <nil>:
I1111 00:39:28.152015 904529 ubuntu.go:175] set auth options {CertDir:/home/muth/.minikube CaCertPath:/home/muth/.minikube/certs/ca.pem CaPrivateKeyPath:/home/muth/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/muth/.minikube/machines/server.pem ServerKeyPath:/home/muth/.minikube/machines/server-key.pem ClientKeyPath:/home/muth/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/muth/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/muth/.minikube}
I1111 00:39:28.152067 904529 ubuntu.go:177] setting up certificates
I1111 00:39:28.152090 904529 provision.go:83] configureAuth start
I1111 00:39:28.152205 904529 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1111 00:39:28.219165 904529 provision.go:138] copyHostCerts
I1111 00:39:28.219217 904529 exec_runner.go:144] found /home/muth/.minikube/key.pem, removing ...
I1111 00:39:28.219226 904529 exec_runner.go:207] rm: /home/muth/.minikube/key.pem
I1111 00:39:28.219280 904529 exec_runner.go:151] cp: /home/muth/.minikube/certs/key.pem --> /home/muth/.minikube/key.pem (1675 bytes)
I1111 00:39:28.219347 904529 exec_runner.go:144] found /home/muth/.minikube/ca.pem, removing ...
I1111 00:39:28.219350 904529 exec_runner.go:207] rm: /home/muth/.minikube/ca.pem
I1111 00:39:28.219376 904529 exec_runner.go:151] cp: /home/muth/.minikube/certs/ca.pem --> /home/muth/.minikube/ca.pem (1070 bytes)
I1111 00:39:28.219425 904529 exec_runner.go:144] found /home/muth/.minikube/cert.pem, removing ...
I1111 00:39:28.219428 904529 exec_runner.go:207] rm: /home/muth/.minikube/cert.pem
I1111 00:39:28.219451 904529 exec_runner.go:151] cp: /home/muth/.minikube/certs/cert.pem --> /home/muth/.minikube/cert.pem (1115 bytes)
I1111 00:39:28.219495 904529 provision.go:112] generating server cert: /home/muth/.minikube/machines/server.pem ca-key=/home/muth/.minikube/certs/ca.pem private-key=/home/muth/.minikube/certs/ca-key.pem org=muth.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I1111 00:39:28.270227 904529 provision.go:172] copyRemoteCerts
I1111 00:39:28.270255 904529 ssh_runner.go:152] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I1111 00:39:28.270283 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:28.300325 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:28.393345 904529 ssh_runner.go:319] scp /home/muth/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1070 bytes)
I1111 00:39:28.414137 904529 ssh_runner.go:319] scp /home/muth/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
I1111 00:39:28.426161 904529 ssh_runner.go:319] scp /home/muth/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I1111 00:39:28.438010 904529 provision.go:86] duration metric: configureAuth took 285.911903ms
I1111 00:39:28.438018 904529 ubuntu.go:193] setting minikube options for container-runtime
I1111 00:39:28.438109 904529 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.3
I1111 00:39:28.438176 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:28.468419 904529 main.go:130] libmachine: Using SSH client type: native
I1111 00:39:28.468508 904529 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x802cc0] 0x802c80 <nil> [] 0s} 127.0.0.1 32982 <nil> <nil>}
I1111 00:39:28.468517 904529 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
I1111 00:39:28.770663 904529 main.go:130] libmachine: SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
I1111 00:39:28.770675 904529 machine.go:91] provisioned docker machine in 944.45159ms
I1111 00:39:28.770681 904529 client.go:171] LocalClient.Create took 4.605317944s
I1111 00:39:28.770694 904529 start.go:168] duration metric: libmachine.API.Create for "minikube" took 4.605341541s
I1111 00:39:28.770701 904529 start.go:267] post-start starting for "minikube" (driver="docker")
I1111 00:39:28.770705 904529 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I1111 00:39:28.770758 904529 ssh_runner.go:152] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I1111 00:39:28.770805 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:28.812956 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:28.898612 904529 ssh_runner.go:152] Run: cat /etc/os-release
I1111 00:39:28.904557 904529 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I1111 00:39:28.904583 904529 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I1111 00:39:28.904599 904529 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I1111 00:39:28.904606 904529 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I1111 00:39:28.904617 904529 filesync.go:126] Scanning /home/muth/.minikube/addons for local assets ...
I1111 00:39:28.904703 904529 filesync.go:126] Scanning /home/muth/.minikube/files for local assets ...
I1111 00:39:28.904745 904529 start.go:270] post-start completed in 134.037516ms
I1111 00:39:28.905279 904529 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1111 00:39:28.958011 904529 profile.go:147] Saving config to /home/muth/.minikube/profiles/minikube/config.json ...
I1111 00:39:28.958190 904529 ssh_runner.go:152] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I1111 00:39:28.958221 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:28.993068 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:29.071872 904529 start.go:129] duration metric: createHost completed in 4.90801248s
I1111 00:39:29.071897 904529 start.go:80] releasing machines lock for "minikube", held for 4.90810573s
I1111 00:39:29.072040 904529 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1111 00:39:29.135753 904529 ssh_runner.go:152] Run: systemctl --version
I1111 00:39:29.135783 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:29.135844 904529 ssh_runner.go:152] Run: curl -sS -m 2 https://k8s.gcr.io/
I1111 00:39:29.135879 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:29.168471 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:29.168471 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:29.421440 904529 ssh_runner.go:152] Run: sudo systemctl stop -f containerd
I1111 00:39:29.498022 904529 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service containerd
I1111 00:39:29.526513 904529 docker.go:156] disabling docker service ...
I1111 00:39:29.526605 904529 ssh_runner.go:152] Run: sudo systemctl stop -f docker.socket
I1111 00:39:29.553386 904529 ssh_runner.go:152] Run: sudo systemctl stop -f docker.service
I1111 00:39:29.576419 904529 ssh_runner.go:152] Run: sudo systemctl disable docker.socket
I1111 00:39:29.676615 904529 ssh_runner.go:152] Run: sudo systemctl mask docker.service
I1111 00:39:29.732251 904529 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service docker
I1111 00:39:29.739514 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
" | sudo tee /etc/crictl.yaml"
I1111 00:39:29.748540 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo sed -e 's|^pause_image = .*$|pause_image = "k8s.gcr.io/pause:3.5"|' -i /etc/crio/crio.conf"
I1111 00:39:29.754180 904529 crio.go:65] Updating CRIO to use the custom CNI network "kindnet"
I1111 00:39:29.754191 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo sed -e 's|^.*cni_default_network = .*$|cni_default_network = "kindnet"|' -i /etc/crio/crio.conf"
I1111 00:39:29.759708 904529 ssh_runner.go:152] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I1111 00:39:29.764291 904529 ssh_runner.go:152] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I1111 00:39:29.768651 904529 ssh_runner.go:152] Run: sudo systemctl daemon-reload
I1111 00:39:29.815987 904529 ssh_runner.go:152] Run: sudo systemctl start crio
I1111 00:39:29.823217 904529 start.go:403] Will wait 60s for socket path /var/run/crio/crio.sock
I1111 00:39:29.823251 904529 ssh_runner.go:152] Run: stat /var/run/crio/crio.sock
I1111 00:39:29.825363 904529 start.go:424] Will wait 60s for crictl version
I1111 00:39:29.825387 904529 ssh_runner.go:152] Run: sudo crictl version
I1111 00:39:29.840761 904529 start.go:433] Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.22.0
RuntimeApiVersion: v1alpha2
I1111 00:39:29.840804 904529 ssh_runner.go:152] Run: crio --version
I1111 00:39:29.860935 904529 ssh_runner.go:152] Run: crio --version
I1111 00:39:29.882181 904529 out.go:176] Preparing Kubernetes v1.22.3 on CRI-O 1.22.0 ...
I1111 00:39:29.882230 904529 cli_runner.go:115] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1111 00:39:29.911449 904529 ssh_runner.go:152] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I1111 00:39:29.913656 904529 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1111 00:39:29.920044 904529 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime crio
I1111 00:39:29.920077 904529 ssh_runner.go:152] Run: sudo crictl images --output json
I1111 00:39:29.945395 904529 crio.go:461] all images are preloaded for cri-o runtime.
I1111 00:39:29.945401 904529 crio.go:370] Images already preloaded, skipping extraction
I1111 00:39:29.945428 904529 ssh_runner.go:152] Run: sudo crictl images --output json
I1111 00:39:29.960086 904529 crio.go:461] all images are preloaded for cri-o runtime.
I1111 00:39:29.960092 904529 cache_images.go:79] Images are preloaded, skipping loading
I1111 00:39:29.960129 904529 ssh_runner.go:152] Run: crio config
I1111 00:39:29.982214 904529 cni.go:93] Creating CNI manager for ""
I1111 00:39:29.982222 904529 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
I1111 00:39:29.982228 904529 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I1111 00:39:29.982235 904529 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.22.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I1111 00:39:29.982304 904529 kubeadm.go:157] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
nodeRegistration:
  criSocket: /var/run/crio/crio.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.22.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: systemd
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I1111 00:39:29.982352 904529 kubeadm.go:909] kubelet [Unit]
Wants=crio.service
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.22.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=minikube --image-service-endpoint=/var/run/crio/crio.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.49.2 --runtime-request-timeout=15m
[Install]
config:
{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I1111 00:39:29.982383 904529 ssh_runner.go:152] Run: sudo ls /var/lib/minikube/binaries/v1.22.3
I1111 00:39:29.987479 904529 binaries.go:44] Found k8s binaries, skipping transfer
I1111 00:39:29.987512 904529 ssh_runner.go:152] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I1111 00:39:29.992462 904529 ssh_runner.go:319] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (538 bytes)
I1111 00:39:30.001078 904529 ssh_runner.go:319] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I1111 00:39:30.010035 904529 ssh_runner.go:319] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2049 bytes)
I1111 00:39:30.019086 904529 ssh_runner.go:152] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I1111 00:39:30.021097 904529 ssh_runner.go:152] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I1111 00:39:30.027481 904529 certs.go:54] Setting up /home/muth/.minikube/profiles/minikube for IP: 192.168.49.2
I1111 00:39:30.027575 904529 certs.go:182] skipping minikubeCA CA generation: /home/muth/.minikube/ca.key
I1111 00:39:30.027614 904529 certs.go:182] skipping proxyClientCA CA generation: /home/muth/.minikube/proxy-client-ca.key
I1111 00:39:30.027639 904529 certs.go:302] generating minikube-user signed cert: /home/muth/.minikube/profiles/minikube/client.key
I1111 00:39:30.027645 904529 crypto.go:68] Generating cert /home/muth/.minikube/profiles/minikube/client.crt with IP's: []
I1111 00:39:30.195503 904529 crypto.go:156] Writing cert to /home/muth/.minikube/profiles/minikube/client.crt ...
I1111 00:39:30.195512 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/client.crt: {Name:mkb59b1eeb9516aa11e03e0366a24e7d4bc5c97e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.195620 904529 crypto.go:164] Writing key to /home/muth/.minikube/profiles/minikube/client.key ...
I1111 00:39:30.195624 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/client.key: {Name:mk29335d709a47242507f8bc533f1efbec039be2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.195677 904529 certs.go:302] generating minikube signed cert: /home/muth/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I1111 00:39:30.195683 904529 crypto.go:68] Generating cert /home/muth/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I1111 00:39:30.345311 904529 crypto.go:156] Writing cert to /home/muth/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I1111 00:39:30.345322 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkaa8a5cde59bfc6e3190481ce1178cd32f41e63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.345433 904529 crypto.go:164] Writing key to /home/muth/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
I1111 00:39:30.345437 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mka599062516a47c76453d3eba5f54b00e96a0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.345492 904529 certs.go:320] copying /home/muth/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/muth/.minikube/profiles/minikube/apiserver.crt
I1111 00:39:30.345530 904529 certs.go:324] copying /home/muth/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/muth/.minikube/profiles/minikube/apiserver.key
I1111 00:39:30.345569 904529 certs.go:302] generating aggregator signed cert: /home/muth/.minikube/profiles/minikube/proxy-client.key
I1111 00:39:30.345575 904529 crypto.go:68] Generating cert /home/muth/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I1111 00:39:30.488460 904529 crypto.go:156] Writing cert to /home/muth/.minikube/profiles/minikube/proxy-client.crt ...
I1111 00:39:30.488469 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc62983bc4fc32b2eca6953b5a98ee4cd369b06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.488581 904529 crypto.go:164] Writing key to /home/muth/.minikube/profiles/minikube/proxy-client.key ...
I1111 00:39:30.488585 904529 lock.go:35] WriteFile acquiring /home/muth/.minikube/profiles/minikube/proxy-client.key: {Name:mk3971926b38af74047a5eb845f9a19dba77734e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:30.488698 904529 certs.go:388] found cert: /home/muth/.minikube/certs/home/muth/.minikube/certs/ca-key.pem (1679 bytes)
I1111 00:39:30.488718 904529 certs.go:388] found cert: /home/muth/.minikube/certs/home/muth/.minikube/certs/ca.pem (1070 bytes)
I1111 00:39:30.488748 904529 certs.go:388] found cert: /home/muth/.minikube/certs/home/muth/.minikube/certs/cert.pem (1115 bytes)
I1111 00:39:30.488761 904529 certs.go:388] found cert: /home/muth/.minikube/certs/home/muth/.minikube/certs/key.pem (1675 bytes)
I1111 00:39:30.489313 904529 ssh_runner.go:319] scp /home/muth/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I1111 00:39:30.502054 904529 ssh_runner.go:319] scp /home/muth/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I1111 00:39:30.514205 904529 ssh_runner.go:319] scp /home/muth/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I1111 00:39:30.526555 904529 ssh_runner.go:319] scp /home/muth/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I1111 00:39:30.538794 904529 ssh_runner.go:319] scp /home/muth/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I1111 00:39:30.552765 904529 ssh_runner.go:319] scp /home/muth/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I1111 00:39:30.569945 904529 ssh_runner.go:319] scp /home/muth/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I1111 00:39:30.591891 904529 ssh_runner.go:319] scp /home/muth/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
I1111 00:39:30.618570 904529 ssh_runner.go:319] scp /home/muth/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I1111 00:39:30.645474 904529 ssh_runner.go:319] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I1111 00:39:30.664849 904529 ssh_runner.go:152] Run: openssl version
I1111 00:39:30.671867 904529 ssh_runner.go:152] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I1111 00:39:30.683058 904529 ssh_runner.go:152] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I1111 00:39:30.687689 904529 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Feb 11 2021 /usr/share/ca-certificates/minikubeCA.pem
I1111 00:39:30.687736 904529 ssh_runner.go:152] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I1111 00:39:30.694632 904529 ssh_runner.go:152] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I1111 00:39:30.705702 904529 kubeadm.go:390] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/muth:/minikube-host}
I1111 00:39:30.705780 904529 cri.go:41] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
I1111 00:39:30.705832 904529 ssh_runner.go:152] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
I1111 00:39:30.738722 904529 cri.go:76] found id: ""
I1111 00:39:30.738778 904529 ssh_runner.go:152] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I1111 00:39:30.750651 904529 ssh_runner.go:152] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I1111 00:39:30.761865 904529 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
I1111 00:39:30.761916 904529 ssh_runner.go:152] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I1111 00:39:30.772557 904529 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:
stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I1111 00:39:30.772593 904529 ssh_runner.go:243] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.22.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
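Note: at this point minikube execs into the "minikube" docker container and drives kubeadm directly with the config it generated above. If a run gets stuck here, the generated file can be inspected from the host with, for example:
$ minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml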
I1111 00:39:46.760409  904529 out.go:203]   ▪ Generating certificates and keys ...
I1111 00:39:46.767413  904529 out.go:203]   ▪ Booting up control plane ...
I1111 00:39:46.772878  904529 out.go:203]   ▪ Configuring RBAC rules ...
I1111 00:39:46.781159 904529 cni.go:93] Creating CNI manager for ""
I1111 00:39:46.781177 904529 cni.go:160] "docker" driver + crio runtime found, recommending kindnet
I1111 00:39:46.782450  904529 out.go:176] 🔗 Configuring CNI (Container Networking Interface) ...
I1111 00:39:46.782675 904529 ssh_runner.go:152] Run: stat /opt/cni/bin/portmap
I1111 00:39:46.794747 904529 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.22.3/kubectl ...
I1111 00:39:46.794767 904529 ssh_runner.go:319] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
I1111 00:39:46.836018 904529 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
I1111 00:39:47.046627 904529 ssh_runner.go:152] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I1111 00:39:47.046711 904529 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl label nodes minikube.k8s.io/version=v1.24.0 minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_11_11T00_39_47_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I1111 00:39:47.046716 904529 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I1111 00:39:47.152543 904529 ops.go:34] apiserver oom_adj: -16
I1111 00:39:47.152615 904529 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
[... the same "kubectl get sa default" check repeats every ~0.5s from 00:39:47 through 00:39:57 ...]
I1111 00:39:58.205883 904529 ssh_runner.go:152] Run: sudo /var/lib/minikube/binaries/v1.22.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
I1111 00:39:58.297623 904529 kubeadm.go:985] duration metric: took 11.250994895s to wait for elevateKubeSystemPrivileges.
I1111 00:39:58.297634 904529 kubeadm.go:392] StartCluster complete in 27.591940364s
I1111 00:39:58.297645 904529 settings.go:142] acquiring lock: {Name:mkfcabfaf26c8f9dcb120b9c671fc9a94e9ab0dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:58.297713 904529 settings.go:150] Updating kubeconfig: /home/muth/.kube/config
I1111 00:39:58.307730 904529 lock.go:35] WriteFile acquiring /home/muth/.kube/config: {Name:mkd191758c249050aed554be891d0292e8c028ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1111 00:39:58.829032 904529 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I1111 00:39:58.829111 904529 start.go:229] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}
I1111 00:39:58.829153 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I1111 00:39:58.830216  904529 out.go:176] 🔎 Verifying Kubernetes components...
I1111 00:39:58.829231 904529 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I1111 00:39:58.829516 904529 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.22.3
I1111 00:39:58.830360 904529 ssh_runner.go:152] Run: sudo systemctl is-active --quiet service kubelet
I1111 00:39:58.830383 904529 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I1111 00:39:58.830430 904529 addons.go:65] Setting default-storageclass=true in profile "minikube"
I1111 00:39:58.830445 904529 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W1111 00:39:58.830460 904529 addons.go:165] addon storage-provisioner should already be in state true
I1111 00:39:58.830463 904529 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I1111 00:39:58.830503 904529 host.go:66] Checking if "minikube" exists ...
I1111 00:39:58.831160 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:58.831473 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:58.881858  904529 out.go:176]   ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I1111 00:39:58.881911 904529 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I1111 00:39:58.881917 904529 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I1111 00:39:58.881966 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:58.887220 904529 addons.go:153] Setting addon default-storageclass=true in "minikube"
W1111 00:39:58.887226 904529 addons.go:165] addon default-storageclass should already be in state true
I1111 00:39:58.887240 904529 host.go:66] Checking if "minikube" exists ...
I1111 00:39:58.887487 904529 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I1111 00:39:58.887917 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.22.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I1111 00:39:58.893724 904529 node_ready.go:35] waiting up to 6m0s for node "minikube" to be "Ready" ...
I1111 00:39:58.896400 904529 node_ready.go:49] node "minikube" has status "Ready":"True"
I1111 00:39:58.896405 904529 node_ready.go:38] duration metric: took 2.657816ms waiting for node "minikube" to be "Ready" ...
I1111 00:39:58.896409 904529 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1111 00:39:58.903263 904529 pod_ready.go:78] waiting up to 6m0s for pod "etcd-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:39:58.909368 904529 pod_ready.go:92] pod "etcd-minikube" in "kube-system" namespace has status "Ready":"True"
I1111 00:39:58.909373 904529 pod_ready.go:81] duration metric: took 6.100965ms waiting for pod "etcd-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:39:58.909381 904529 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:39:58.926036 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:58.930029 904529 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I1111 00:39:58.930035 904529 ssh_runner.go:319] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I1111 00:39:58.930071 904529 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1111 00:39:58.964881 904529 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32982 SSHKeyPath:/home/muth/.minikube/machines/minikube/id_rsa Username:docker}
I1111 00:39:59.063767 904529 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I1111 00:39:59.173777 904529 start.go:739] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
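Note: the replace pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to 192.168.49.1 inside the cluster; the injected hosts block can be checked with:
$ kubectl -n kube-system get configmap coredns -o yaml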
I1111 00:39:59.274840 904529 ssh_runner.go:152] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I1111 00:40:00.968296 904529 pod_ready.go:102] pod "kube-apiserver-minikube" in "kube-system" namespace has status "Ready":"False"
I1111 00:40:00.968687 904529 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904877884s)
I1111 00:40:01.061630 904529 ssh_runner.go:192] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.22.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.786734239s)
I1111 00:40:01.063256  904529 out.go:176] 🌟 Enabled addons: storage-provisioner, default-storageclass
I1111 00:40:01.063304 904529 addons.go:417] enableAddons completed in 2.234095862s
I1111 00:40:02.925933 904529 pod_ready.go:92] pod "kube-apiserver-minikube" in "kube-system" namespace has status "Ready":"True"
I1111 00:40:02.925963 904529 pod_ready.go:81] duration metric: took 4.016572517s waiting for pod "kube-apiserver-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:40:02.925996 904529 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:40:02.936676 904529 pod_ready.go:92] pod "kube-controller-manager-minikube" in "kube-system" namespace has status "Ready":"True"
I1111 00:40:02.936692 904529 pod_ready.go:81] duration metric: took 10.676779ms waiting for pod "kube-controller-manager-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:40:02.936712 904529 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vv4gk" in "kube-system" namespace to be "Ready" ...
I1111 00:40:04.960676 904529 pod_ready.go:102] pod "kube-proxy-vv4gk" in "kube-system" namespace has status "Ready":"False"
[... the same kube-proxy-vv4gk "Ready":"False" check repeats roughly every 2 to 2.5s from 00:40:06 through 00:43:59 ...]
I1111 00:44:01.959329 904529 pod_ready.go:102] pod "kube-proxy-vv4gk" in "kube-system" namespace has status "Ready":"False"
I1111 00:44:02.969579 904529 pod_ready.go:81] duration metric: took 4m0.032846614s waiting for pod "kube-proxy-vv4gk" in "kube-system" namespace to be "Ready" ...
E1111 00:44:02.969602 904529 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
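Note: this is where the start fails. The extra wait polled kube-proxy-vv4gk for the full 4m0s without it ever reporting Ready (the pod listings below show it stuck Pending with ContainersNotReady). To dig into why the container never starts, something like the following from another terminal should surface the error:
$ kubectl -n kube-system describe pod kube-proxy-vv4gk
$ minikube ssh -- sudo crictl ps -a --name=kube-proxy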
I1111 00:44:02.969659 904529 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:44:02.979952 904529 pod_ready.go:92] pod "kube-scheduler-minikube" in "kube-system" namespace has status "Ready":"True"
I1111 00:44:02.979966 904529 pod_ready.go:81] duration metric: took 10.287292ms waiting for pod "kube-scheduler-minikube" in "kube-system" namespace to be "Ready" ...
I1111 00:44:02.979981 904529 pod_ready.go:38] duration metric: took 4m4.083561846s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I1111 00:44:02.980009 904529 api_server.go:51] waiting for apiserver process to appear ...
I1111 00:44:02.980040 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1111 00:44:02.980145 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1111 00:44:03.040641 904529 cri.go:76] found id: "5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:03.040650 904529 cri.go:76] found id: ""
I1111 00:44:03.040656 904529 logs.go:270] 1 containers: [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b]
I1111 00:44:03.040697 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:03.043804 904529 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1111 00:44:03.043854 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=etcd
I1111 00:44:03.069437 904529 cri.go:76] found id: "dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:03.069451 904529 cri.go:76] found id: ""
I1111 00:44:03.069457 904529 logs.go:270] 1 containers: [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82]
I1111 00:44:03.069510 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:03.072732 904529 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1111 00:44:03.072784 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=coredns
I1111 00:44:03.098277 904529 cri.go:76] found id: ""
I1111 00:44:03.098286 904529 logs.go:270] 0 containers: []
W1111 00:44:03.098292 904529 logs.go:272] No container was found matching "coredns"
I1111 00:44:03.098296 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1111 00:44:03.098342 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1111 00:44:03.123360 904529 cri.go:76] found id: "7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:03.123370 904529 cri.go:76] found id: ""
I1111 00:44:03.123374 904529 logs.go:270] 1 containers: [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1]
I1111 00:44:03.123417 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:03.126617 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1111 00:44:03.126659 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1111 00:44:03.151738 904529 cri.go:76] found id: ""
I1111 00:44:03.151748 904529 logs.go:270] 0 containers: []
W1111 00:44:03.151753 904529 logs.go:272] No container was found matching "kube-proxy"
I1111 00:44:03.151758 904529 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I1111 00:44:03.151801 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1111 00:44:03.177756 904529 cri.go:76] found id: ""
I1111 00:44:03.177766 904529 logs.go:270] 0 containers: []
W1111 00:44:03.177772 904529 logs.go:272] No container was found matching "kubernetes-dashboard"
I1111 00:44:03.177777 904529 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I1111 00:44:03.177827 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1111 00:44:03.203518 904529 cri.go:76] found id: "91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:03.203528 904529 cri.go:76] found id: ""
I1111 00:44:03.203540 904529 logs.go:270] 1 containers: [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d]
I1111 00:44:03.203584 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:03.206950 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1111 00:44:03.206992 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1111 00:44:03.232264 904529 cri.go:76] found id: "bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:03.232274 904529 cri.go:76] found id: ""
I1111 00:44:03.232278 904529 logs.go:270] 1 containers: [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415]
I1111 00:44:03.232320 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:03.235614 904529 logs.go:123] Gathering logs for kubelet ...
I1111 00:44:03.235622 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1111 00:44:03.329426 904529 logs.go:123] Gathering logs for kube-apiserver [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b] ...
I1111 00:44:03.329437 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:03.348420 904529 logs.go:123] Gathering logs for etcd [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82] ...
I1111 00:44:03.348430 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:03.367229 904529 logs.go:123] Gathering logs for CRI-O ...
I1111 00:44:03.367239 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1111 00:44:03.419069 904529 logs.go:123] Gathering logs for container status ...
I1111 00:44:03.419080 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1111 00:44:03.436498 904529 logs.go:123] Gathering logs for dmesg ...
I1111 00:44:03.436508 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1111 00:44:03.446344 904529 logs.go:123] Gathering logs for describe nodes ...
I1111 00:44:03.446355 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1111 00:44:03.496695 904529 logs.go:123] Gathering logs for kube-scheduler [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1] ...
I1111 00:44:03.496706 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:03.515400 904529 logs.go:123] Gathering logs for storage-provisioner [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d] ...
I1111 00:44:03.515410 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:03.530799 904529 logs.go:123] Gathering logs for kube-controller-manager [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415] ...
I1111 00:44:03.530808 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:06.053778 904529 ssh_runner.go:152] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1111 00:44:06.107761 904529 api_server.go:71] duration metric: took 4m7.278601729s to wait for apiserver process to appear ...
I1111 00:44:06.107780 904529 api_server.go:87] waiting for apiserver healthz status ...
I1111 00:44:06.107805 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1111 00:44:06.107884 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1111 00:44:06.138880 904529 cri.go:76] found id: "5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:06.138887 904529 cri.go:76] found id: ""
I1111 00:44:06.138890 904529 logs.go:270] 1 containers: [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b]
I1111 00:44:06.138921 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:06.140890 904529 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1111 00:44:06.140914 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=etcd
I1111 00:44:06.155561 904529 cri.go:76] found id: "dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:06.155568 904529 cri.go:76] found id: ""
I1111 00:44:06.155570 904529 logs.go:270] 1 containers: [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82]
I1111 00:44:06.155598 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:06.157482 904529 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1111 00:44:06.157509 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=coredns
I1111 00:44:06.172002 904529 cri.go:76] found id: ""
I1111 00:44:06.172010 904529 logs.go:270] 0 containers: []
W1111 00:44:06.172015 904529 logs.go:272] No container was found matching "coredns"
I1111 00:44:06.172020 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1111 00:44:06.172052 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1111 00:44:06.186882 904529 cri.go:76] found id: "7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:06.186889 904529 cri.go:76] found id: ""
I1111 00:44:06.186892 904529 logs.go:270] 1 containers: [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1]
I1111 00:44:06.186920 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:06.188908 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1111 00:44:06.188931 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1111 00:44:06.203885 904529 cri.go:76] found id: ""
I1111 00:44:06.203892 904529 logs.go:270] 0 containers: []
W1111 00:44:06.203895 904529 logs.go:272] No container was found matching "kube-proxy"
I1111 00:44:06.203898 904529 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I1111 00:44:06.203928 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1111 00:44:06.219010 904529 cri.go:76] found id: ""
I1111 00:44:06.219016 904529 logs.go:270] 0 containers: []
W1111 00:44:06.219020 904529 logs.go:272] No container was found matching "kubernetes-dashboard"
I1111 00:44:06.219023 904529 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I1111 00:44:06.219055 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1111 00:44:06.233705 904529 cri.go:76] found id: "91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:06.233712 904529 cri.go:76] found id: ""
I1111 00:44:06.233714 904529 logs.go:270] 1 containers: [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d]
I1111 00:44:06.233743 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:06.235675 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1111 00:44:06.235701 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1111 00:44:06.250883 904529 cri.go:76] found id: "bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:06.250890 904529 cri.go:76] found id: ""
I1111 00:44:06.250893 904529 logs.go:270] 1 containers: [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415]
I1111 00:44:06.250920 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:06.252808 904529 logs.go:123] Gathering logs for kubelet ...
I1111 00:44:06.252813 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1111 00:44:06.324365 904529 logs.go:123] Gathering logs for dmesg ...
I1111 00:44:06.324375 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1111 00:44:06.334159 904529 logs.go:123] Gathering logs for kube-controller-manager [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415] ...
I1111 00:44:06.334169 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:06.355620 904529 logs.go:123] Gathering logs for CRI-O ...
I1111 00:44:06.355631 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1111 00:44:06.401420 904529 logs.go:123] Gathering logs for container status ...
I1111 00:44:06.401430 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1111 00:44:06.418750 904529 logs.go:123] Gathering logs for describe nodes ...
I1111 00:44:06.418760 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1111 00:44:06.469607 904529 logs.go:123] Gathering logs for kube-apiserver [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b] ...
I1111 00:44:06.469615 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:06.488519 904529 logs.go:123] Gathering logs for etcd [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82] ...
I1111 00:44:06.488528 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:06.506637 904529 logs.go:123] Gathering logs for kube-scheduler [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1] ...
I1111 00:44:06.506646 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:06.525846 904529 logs.go:123] Gathering logs for storage-provisioner [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d] ...
I1111 00:44:06.525855 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:09.041644 904529 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I1111 00:44:09.057981 904529 api_server.go:266] https://192.168.49.2:8443/healthz returned 200:
ok
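Note: the apiserver itself is healthy (healthz returned 200), so the failure is isolated to the kube-proxy pod rather than the control plane. The same probe can be made from the host, assuming the kubectl context points at this cluster:
$ kubectl get --raw=/healthz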
I1111 00:44:09.060374 904529 api_server.go:140] control plane version: v1.22.3
I1111 00:44:09.060400 904529 api_server.go:130] duration metric: took 2.952608159s to wait for apiserver health ...
I1111 00:44:09.060414 904529 system_pods.go:43] waiting for kube-system pods to appear ...
I1111 00:44:09.060448 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
I1111 00:44:09.060575 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-apiserver
I1111 00:44:09.120983 904529 cri.go:76] found id: "5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:09.120993 904529 cri.go:76] found id: ""
I1111 00:44:09.120998 904529 logs.go:270] 1 containers: [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b]
I1111 00:44:09.121053 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:09.124316 904529 cri.go:41] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
I1111 00:44:09.124358 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=etcd
I1111 00:44:09.146888 904529 cri.go:76] found id: "dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:09.146896 904529 cri.go:76] found id: ""
I1111 00:44:09.146900 904529 logs.go:270] 1 containers: [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82]
I1111 00:44:09.146944 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:09.149793 904529 cri.go:41] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
I1111 00:44:09.149827 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=coredns
I1111 00:44:09.174425 904529 cri.go:76] found id: ""
I1111 00:44:09.174433 904529 logs.go:270] 0 containers: []
W1111 00:44:09.174438 904529 logs.go:272] No container was found matching "coredns"
I1111 00:44:09.174442 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
I1111 00:44:09.174483 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-scheduler
I1111 00:44:09.196785 904529 cri.go:76] found id: "7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:09.196795 904529 cri.go:76] found id: ""
I1111 00:44:09.196798 904529 logs.go:270] 1 containers: [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1]
I1111 00:44:09.196837 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:09.199603 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
I1111 00:44:09.199636 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-proxy
I1111 00:44:09.221321 904529 cri.go:76] found id: ""
I1111 00:44:09.221330 904529 logs.go:270] 0 containers: []
W1111 00:44:09.221334 904529 logs.go:272] No container was found matching "kube-proxy"
I1111 00:44:09.221339 904529 cri.go:41] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
I1111 00:44:09.221378 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
I1111 00:44:09.243508 904529 cri.go:76] found id: ""
I1111 00:44:09.243519 904529 logs.go:270] 0 containers: []
W1111 00:44:09.243527 904529 logs.go:272] No container was found matching "kubernetes-dashboard"
I1111 00:44:09.243549 904529 cri.go:41] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
I1111 00:44:09.243591 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=storage-provisioner
I1111 00:44:09.265640 904529 cri.go:76] found id: "91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:09.265650 904529 cri.go:76] found id: ""
I1111 00:44:09.265655 904529 logs.go:270] 1 containers: [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d]
I1111 00:44:09.265693 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:09.268403 904529 cri.go:41] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
I1111 00:44:09.268438 904529 ssh_runner.go:152] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
I1111 00:44:09.290054 904529 cri.go:76] found id: "bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:09.290063 904529 cri.go:76] found id: ""
I1111 00:44:09.290067 904529 logs.go:270] 1 containers: [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415]
I1111 00:44:09.290107 904529 ssh_runner.go:152] Run: which crictl
I1111 00:44:09.292867 904529 logs.go:123] Gathering logs for kube-controller-manager [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415] ...
I1111 00:44:09.292876 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415"
I1111 00:44:09.325273 904529 logs.go:123] Gathering logs for container status ...
I1111 00:44:09.325284 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I1111 00:44:09.350855 904529 logs.go:123] Gathering logs for kubelet ...
I1111 00:44:09.350867 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I1111 00:44:09.448370 904529 logs.go:123] Gathering logs for dmesg ...
I1111 00:44:09.448379 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I1111 00:44:09.458315 904529 logs.go:123] Gathering logs for storage-provisioner [91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d] ...
I1111 00:44:09.458325 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 91dd96cadcccbbf4514cf0390f37525ff84437f7cda3082da43eb0a899aa7c0d"
I1111 00:44:09.473649 904529 logs.go:123] Gathering logs for kube-scheduler [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1] ...
I1111 00:44:09.473659 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1"
I1111 00:44:09.492664 904529 logs.go:123] Gathering logs for CRI-O ...
I1111 00:44:09.492673 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
I1111 00:44:09.537278 904529 logs.go:123] Gathering logs for describe nodes ...
I1111 00:44:09.537289 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.22.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I1111 00:44:09.588205 904529 logs.go:123] Gathering logs for kube-apiserver [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b] ...
I1111 00:44:09.588214 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b"
I1111 00:44:09.607282 904529 logs.go:123] Gathering logs for etcd [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82] ...
I1111 00:44:09.607293 904529 ssh_runner.go:152] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82"
I1111 00:44:12.142982 904529 system_pods.go:59] 8 kube-system pods found
I1111 00:44:12.143030 904529 system_pods.go:61] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:12.143044 904529 system_pods.go:61] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:12.143057 904529 system_pods.go:61] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:12.143067 904529 system_pods.go:61] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:12.143077 904529 system_pods.go:61] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:12.143091 904529 system_pods.go:61] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:12.143102 904529 system_pods.go:61] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:12.143116 904529 system_pods.go:61] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:12.143129 904529 system_pods.go:74] duration metric: took 3.082706659s to wait for pod list to return data ...
I1111 00:44:12.143143 904529 default_sa.go:34] waiting for default service account to be created ...
I1111 00:44:12.149519 904529 default_sa.go:45] found service account: "default"
I1111 00:44:12.149577 904529 default_sa.go:55] duration metric: took 6.382064ms for default service account to be created ...
I1111 00:44:12.149596 904529 system_pods.go:116] waiting for k8s-apps to be running ...
I1111 00:44:12.162992 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:12.163029 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:12.163043 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:12.163057 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:12.163068 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:12.163079 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:12.163094 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:12.163105 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:12.163120 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:12.163150 904529 retry.go:31] will retry after 263.082536ms: missing components: kube-dns, kube-proxy
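Note: from here the waiter re-lists kube-system pods with a short backoff between attempts; kube-dns (coredns) and kube-proxy are the two components that never leave Pending. Recent scheduling and container events often show the underlying error:
$ kubectl -n kube-system get events --sort-by=.lastTimestamp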
[... two more identical 8-pod listings follow, with retries after 381.329545ms and 422.765636ms, still reporting kube-dns and kube-proxy as missing ...]
I1111 00:44:13.253292 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:13.253303 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:13.253307 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:13.253311 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:13.253314 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:13.253317 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:13.253320 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:13.253323 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:13.253329 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:13.253337 904529 retry.go:31] will retry after 473.074753ms: missing components: kube-dns, kube-proxy
I1111 00:44:13.742394 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:13.742430 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:13.742444 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:13.742458 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:13.742469 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:13.742480 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:13.742495 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:13.742505 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:13.742520 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:13.742579 904529 retry.go:31] will retry after 587.352751ms: missing components: kube-dns, kube-proxy
I1111 00:44:14.337179 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:14.337196 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:14.337201 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:14.337207 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:14.337212 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:14.337216 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:14.337222 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:14.337227 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:14.337233 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:14.337246 904529 retry.go:31] will retry after 834.206799ms: missing components: kube-dns, kube-proxy
I1111 00:44:15.186945 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:15.186982 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:15.187000 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:15.187013 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:15.187024 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:15.187035 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:15.187050 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:15.187061 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:15.187077 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:15.187103 904529 retry.go:31] will retry after 746.553905ms: missing components: kube-dns, kube-proxy
I1111 00:44:15.949019 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:15.949054 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:15.949068 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:15.949081 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:15.949092 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:15.949103 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:15.949117 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:15.949128 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:15.949146 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:15.949173 904529 retry.go:31] will retry after 987.362415ms: missing components: kube-dns, kube-proxy
I1111 00:44:16.952174 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:16.952211 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:16.952225 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:16.952238 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:16.952249 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:16.952261 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:16.952275 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:16.952286 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:16.952300 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:16.952327 904529 retry.go:31] will retry after 1.189835008s: missing components: kube-dns, kube-proxy
I1111 00:44:18.157234 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:18.157264 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:18.157276 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:18.157287 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:18.157296 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:18.157305 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:18.157318 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:18.157327 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:18.157341 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:18.157364 904529 retry.go:31] will retry after 1.677229867s: missing components: kube-dns, kube-proxy
I1111 00:44:19.850716 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:19.850755 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:19.850768 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:19.850782 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:19.850793 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:19.850804 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:19.850819 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:19.850829 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:19.850844 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:19.850871 904529 retry.go:31] will retry after 2.346016261s: missing components: kube-dns, kube-proxy
I1111 00:44:22.212869 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:22.212903 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:22.212918 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:22.212932 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:22.212943 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:22.212955 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:22.212970 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:22.212980 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:22.212995 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:22.213021 904529 retry.go:31] will retry after 3.36678925s: missing components: kube-dns, kube-proxy
I1111 00:44:25.583831 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:25.583841 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:25.583844 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:25.583848 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:25.583850 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:25.583853 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:25.583855 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:25.583858 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:25.583861 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:25.583869 904529 retry.go:31] will retry after 3.11822781s: missing components: kube-dns, kube-proxy
I1111 00:44:28.715759 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:28.715796 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:28.715809 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:28.715822 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:28.715833 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:28.715844 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:28.715859 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:28.715870 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:28.715884 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:28.715911 904529 retry.go:31] will retry after 4.276119362s: missing components: kube-dns, kube-proxy
I1111 00:44:33.010666 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:33.010702 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:33.010714 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:33.010728 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:33.010739 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:33.010749 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:33.010766 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:33.010776 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:33.010792 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:33.010818 904529 retry.go:31] will retry after 5.167232101s: missing components: kube-dns, kube-proxy
I1111 00:44:38.201064 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:38.201107 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:38.201120 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:38.201135 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:38.201147 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:38.201159 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:38.201175 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:38.201185 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:38.201202 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:38.201229 904529 retry.go:31] will retry after 6.994901864s: missing components: kube-dns, kube-proxy
I1111 00:44:45.212312 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:45.212355 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:45.212371 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:45.212386 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:45.212397 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:45.212407 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:45.212422 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:45.212433 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:45.212448 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:45.212476 904529 retry.go:31] will retry after 7.91826225s: missing components: kube-dns, kube-proxy
I1111 00:44:53.139750 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:44:53.139765 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:44:53.139770 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:44:53.139775 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:44:53.139778 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:44:53.139782 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:44:53.139787 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:44:53.139790 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:44:53.139795 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:44:53.139809 904529 retry.go:31] will retry after 9.953714808s: missing components: kube-dns, kube-proxy
I1111 00:45:03.111560 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:45:03.111596 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:45:03.111609 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:45:03.111623 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:45:03.111634 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:45:03.111645 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:45:03.111660 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:45:03.111670 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:45:03.111685 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:45:03.111716 904529 retry.go:31] will retry after 15.120437328s: missing components: kube-dns, kube-proxy
I1111 00:45:18.252089 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:45:18.252126 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:45:18.252140 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:45:18.252153 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:45:18.252164 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:45:18.252176 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:45:18.252190 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:45:18.252201 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:45:18.252216 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:45:18.252243 904529 retry.go:31] will retry after 14.90607158s: missing components: kube-dns, kube-proxy
I1111 00:45:33.169820 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:45:33.169844 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:45:33.169851 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:45:33.169861 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1111 00:45:33.169868 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:45:33.169875 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:45:33.169882 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:45:33.169888 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:45:33.169896 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:45:33.169912 904529 retry.go:31] will retry after 18.465989061s: missing components: kube-dns, kube-proxy
I1111 00:45:51.654108 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:45:51.654150 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:45:51.654164 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:45:51.654178 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:45:51.654189 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:45:51.654201 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:45:51.654216 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:45:51.654228 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:45:51.654240 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running
I1111 00:45:51.654268 904529 retry.go:31] will retry after 25.219510332s: missing components: kube-dns, kube-proxy
I1111 00:46:16.888868 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:46:16.888903 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:46:16.888916 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:46:16.888930 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:46:16.888941 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:46:16.888952 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:46:16.888967 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:46:16.888978 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:46:16.888993 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:46:16.889018 904529 retry.go:31] will retry after 35.078569648s: missing components: kube-dns, kube-proxy
I1111 00:46:51.984865 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:46:51.984904 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:46:51.984919 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:46:51.984934 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:46:51.984945 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:46:51.984956 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:46:51.984971 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:46:51.984982 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:46:51.984997 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:46:51.985024 904529 retry.go:31] will retry after 50.027701973s: missing components: kube-dns, kube-proxy
I1111 00:47:42.029920 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:47:42.029956 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:47:42.029970 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:47:42.029983 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running
I1111 00:47:42.029995 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:47:42.030006 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:47:42.030021 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:47:42.030031 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:47:42.030046 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:47:42.030072 904529 retry.go:31] will retry after 47.463338706s: missing components: kube-dns, kube-proxy
I1111 00:48:29.508170 904529 system_pods.go:86] 8 kube-system pods found
I1111 00:48:29.508207 904529 system_pods.go:89] "coredns-78fcd69978-rnpww" [4446202f-a6a4-466b-bfda-b6cfbc909d43] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I1111 00:48:29.508222 904529 system_pods.go:89] "etcd-minikube" [bc1ce200-3f09-4153-bded-b4e21cb33b36] Running
I1111 00:48:29.508241 904529 system_pods.go:89] "kindnet-6848p" [8310e0ee-e45a-46ad-bb59-cdea08b0b146] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
I1111 00:48:29.508255 904529 system_pods.go:89] "kube-apiserver-minikube" [d4a95c3f-153e-45a7-8bdf-0dd6856f5bf4] Running
I1111 00:48:29.508267 904529 system_pods.go:89] "kube-controller-manager-minikube" [791a6e72-5ec5-44dc-b216-17fe6ddd0f3a] Running
I1111 00:48:29.508282 904529 system_pods.go:89] "kube-proxy-vv4gk" [b25d1a55-5cf6-452f-bcf2-f4c499ceb767] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I1111 00:48:29.508292 904529 system_pods.go:89] "kube-scheduler-minikube" [c5bfae7b-6680-4cea-8b97-971fd52302f2] Running
I1111 00:48:29.508306 904529 system_pods.go:89] "storage-provisioner" [b883d1e7-308f-4407-addf-816d3107afce] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
I1111 00:48:29.508331 904529 retry.go:31] will retry after 53.912476906s: missing components: kube-dns, kube-proxy
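Note: over this whole window the readiness loop keeps retrying with growing backoff (from ~0.26s up to ~54s) and always reports the same two missing components, kube-dns and kube-proxy. The kube-proxy failure can be inspected from another terminal via the pod events (a minimal check, assuming kubectl is pointed at this minikube profile; kubeadm labels the DaemonSet pods with k8s-app=kube-proxy):
$ kubectl -n kube-system describe pod -l k8s-app=kube-proxy
The Events section should surface the same container-creation failure that shows up in the CRI-O log below.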
*
* ==> CRI-O <==
* -- Logs begin at Thu 2021-11-11 08:39:25 UTC, end at Thu 2021-11-11 08:49:15 UTC. --
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.867726942Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=0406d9a8-6f8c-42be-a8c1-9c06ee78c78f name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.872601252Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/65bd04a7e83ca0fddb6c6c67f7db9d13357cfd41ce10277e03cb0b78e25ccf33/merged/etc/passwd: no such file or directory"
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.872623397Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/65bd04a7e83ca0fddb6c6c67f7db9d13357cfd41ce10277e03cb0b78e25ccf33/merged/etc/group: no such file or directory"
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.939389004Z" level=info msg="Created container 21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5: kube-system/storage-provisioner/storage-provisioner" id=0406d9a8-6f8c-42be-a8c1-9c06ee78c78f name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.939732514Z" level=info msg="Starting container: 21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5" id=3af62e68-c833-45bd-ae89-cc5373ba80aa name=/runtime.v1alpha2.RuntimeService/StartContainer
Nov 11 08:48:47 minikube crio[379]: time="2021-11-11 08:48:47.962478693Z" level=info msg="Started container" PID=5570 containerID=21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5 description=kube-system/storage-provisioner/storage-provisioner id=3af62e68-c833-45bd-ae89-cc5373ba80aa name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=745784fbeedaae52520cc2d90500a00428fc356068c24bf0266144b969e50f69
Nov 11 08:48:52 minikube crio[379]: time="2021-11-11 08:48:52.866372356Z" level=info msg="Running pod sandbox: kube-system/coredns-78fcd69978-rnpww/POD" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:52 minikube crio[379]: time="2021-11-11 08:48:52.881310069Z" level=info msg="Got pod network &{Name:coredns-78fcd69978-rnpww Namespace:kube-system ID:ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e UID:4446202f-a6a4-466b-bfda-b6cfbc909d43 NetNS:/var/run/netns/f8d2ebdd-e152-4db1-a010-4be9f5f058ac Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Nov 11 08:48:52 minikube crio[379]: time="2021-11-11 08:48:52.881383628Z" level=info msg="Adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\" (type=bridge)"
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.246833851Z" level=info msg="NetworkStart: stopping network for sandbox ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.248171398Z" level=info msg="Got pod network &{Name:coredns-78fcd69978-rnpww Namespace:kube-system ID:ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e UID:4446202f-a6a4-466b-bfda-b6cfbc909d43 NetNS:/var/run/netns/f8d2ebdd-e152-4db1-a010-4be9f5f058ac Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.248323791Z" level=error msg="error loading cached network config: network \"crio\" not found in CNI cache"
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.248380306Z" level=warning msg="falling back to loading from existing plugins on disk"
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.248425276Z" level=info msg="Deleting pod kube-system_coredns-78fcd69978-rnpww from CNI network \"crio\" (type=bridge)"
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.305591882Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e): error removing pod kube-system_coredns-78fcd69978-rnpww from CNI network \"crio\": running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.38 -j CNI-dd05262e2744fa9c19c995a4 -m comment --comment name: \"crio\" id: \"ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-dd05262e2744fa9c19c995a4':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.305750114Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.306288550Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.306379226Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.306435243Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.306491021Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.306833498Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.312953352Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.313042301Z" level=info msg="runSandbox: deleting pod ID ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e from idIndex" id=a45e00d2-f989-487b-a4f7-251d37d826c7 name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.867229618Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.3" id=315f2f5d-3d0f-4da2-b843-d69370146dfd name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.870285511Z" level=info msg="Image status: &{0xc000521ea0 map[]}" id=315f2f5d-3d0f-4da2-b843-d69370146dfd name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.872686039Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.3" id=aacb6d6f-d91b-403b-bd11-3bfa1aac4d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.874675874Z" level=info msg="Image status: &{0xc0003289a0 map[]}" id=aacb6d6f-d91b-403b-bd11-3bfa1aac4d8a name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.876474552Z" level=info msg="Creating container: kube-system/kube-proxy-vv4gk/kube-proxy" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.974259111Z" level=error msg="Container creation error: time=\"2021-11-11T08:48:54Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.993122637Z" level=info msg="createCtr: deleting container ID e28bedb9960df38c2a468af070ca935178a23b9dba5aaa37c9780860e923e90b from idIndex" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.993168064Z" level=info msg="createCtr: deleting container ID e28bedb9960df38c2a468af070ca935178a23b9dba5aaa37c9780860e923e90b from idIndex" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.993183932Z" level=info msg="createCtr: deleting container ID e28bedb9960df38c2a468af070ca935178a23b9dba5aaa37c9780860e923e90b from idIndex" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:48:54 minikube crio[379]: time="2021-11-11 08:48:54.996082543Z" level=info msg="createCtr: deleting container ID e28bedb9960df38c2a468af070ca935178a23b9dba5aaa37c9780860e923e90b from idIndex" id=32997d47-d639-4ee6-a1bd-4eb449893c63 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.867824169Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.3" id=29a510ca-9eb7-4d47-b1a2-641b916a1c7c name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.871201629Z" level=info msg="Image status: &{0xc00050c930 map[]}" id=29a510ca-9eb7-4d47-b1a2-641b916a1c7c name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.873178173Z" level=info msg="Checking image status: k8s.gcr.io/kube-proxy:v1.22.3" id=da1afa29-c17b-4d3a-9c56-d353db95db87 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.874821405Z" level=info msg="Image status: &{0xc00050d500 map[]}" id=da1afa29-c17b-4d3a-9c56-d353db95db87 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.876760574Z" level=info msg="Creating container: kube-system/kube-proxy-vv4gk/kube-proxy" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.970143609Z" level=error msg="Container creation error: time=\"2021-11-11T08:49:05Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.977989203Z" level=info msg="createCtr: deleting container ID 7deba96e0ed4395710ddd29978cb2ab6d4991441e93fa91bd3c3679e7aa5ae3b from idIndex" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.978029230Z" level=info msg="createCtr: deleting container ID 7deba96e0ed4395710ddd29978cb2ab6d4991441e93fa91bd3c3679e7aa5ae3b from idIndex" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.978044225Z" level=info msg="createCtr: deleting container ID 7deba96e0ed4395710ddd29978cb2ab6d4991441e93fa91bd3c3679e7aa5ae3b from idIndex" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:05 minikube crio[379]: time="2021-11-11 08:49:05.980870770Z" level=info msg="createCtr: deleting container ID 7deba96e0ed4395710ddd29978cb2ab6d4991441e93fa91bd3c3679e7aa5ae3b from idIndex" id=f3e2992d-0091-4e15-a4c3-a712eb668b21 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 11 08:49:08 minikube crio[379]: time="2021-11-11 08:49:08.867398197Z" level=info msg="Running pod sandbox: kube-system/coredns-78fcd69978-rnpww/POD" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:08 minikube crio[379]: time="2021-11-11 08:49:08.888460145Z" level=info msg="Got pod network &{Name:coredns-78fcd69978-rnpww Namespace:kube-system ID:fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 UID:4446202f-a6a4-466b-bfda-b6cfbc909d43 NetNS:/var/run/netns/8f13ecf2-d64f-4f18-a466-24cc4914f24b Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Nov 11 08:49:08 minikube crio[379]: time="2021-11-11 08:49:08.888590641Z" level=info msg="Adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\" (type=bridge)"
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.916840829Z" level=info msg="NetworkStart: stopping network for sandbox fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.917835476Z" level=info msg="Got pod network &{Name:coredns-78fcd69978-rnpww Namespace:kube-system ID:fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 UID:4446202f-a6a4-466b-bfda-b6cfbc909d43 NetNS:/var/run/netns/8f13ecf2-d64f-4f18-a466-24cc4914f24b Networks:[] RuntimeConfig:map[crio:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.917929012Z" level=error msg="error loading cached network config: network \"crio\" not found in CNI cache"
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.917958669Z" level=warning msg="falling back to loading from existing plugins on disk"
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.917985314Z" level=info msg="Deleting pod kube-system_coredns-78fcd69978-rnpww from CNI network \"crio\" (type=bridge)"
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.976695408Z" level=error msg="Error stopping network on cleanup: failed to destroy network for pod sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55): error removing pod kube-system_coredns-78fcd69978-rnpww from CNI network \"crio\": running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.85.0.39 -j CNI-0bafe5b2ceba2bb71ac8317b -m comment --comment name: \"crio\" id: \"fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55\" --wait]: exit status 2: iptables v1.8.4 (legacy): Couldn't load target `CNI-0bafe5b2ceba2bb71ac8317b':No such file or directory\n\nTry `iptables -h' or 'iptables --help' for more information.\n" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.976828076Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.977337267Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.977413618Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.977460584Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.977501480Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.977853123Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.983562666Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
Nov 11 08:49:10 minikube crio[379]: time="2021-11-11 08:49:10.983651487Z" level=info msg="runSandbox: deleting pod ID fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55 from idIndex" id=89af166a-3880-4aaa-ae41-78734e06577c name=/runtime.v1alpha2.RuntimeService/RunPodSandbox
*
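The iptables failure above ("Couldn't load target `CNI-0bafe5b2ceba2bb71ac8317b'") suggests CRI-O is tearing down a CNI NAT chain that no longer exists. To see which CNI-* chains are actually present inside the node — a diagnostic sketch, assuming the docker driver so the node runs as a container named minikube:
$ docker exec minikube iptables -t nat -L -n | grep CNI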
* ==> container status <==
* CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
21e6a470cefee 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 27 seconds ago Running storage-provisioner 6 745784fbeedaa
fb1b24913a642 6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb 30 seconds ago Running kindnet-cni 3 10c653e75d0c4
f573b52514955 6de166512aa223315ff9cfd49bd4f13aab1591cd8fc57e31270f0e4aa34129cb 3 minutes ago Exited kindnet-cni 2 10c653e75d0c4
69cf82bf72d4a 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562 3 minutes ago Exited storage-provisioner 5 745784fbeedaa
7c14e2730cf75 0aa9c7e31d307d1012fb9e63c274f1110868709a2c39f770dd82120cd2b8fe0f 9 minutes ago Running kube-scheduler 0 fa64f8a204344
dbc9db510eb0f 0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba 9 minutes ago Running etcd 0 38b5470ce2f59
bd5dd9a845f69 05c905cef780c060cdaad6bdb2be2d71a03c0b9cb8b7cc10c2f68a6d36abd30d 9 minutes ago Running kube-controller-manager 0 6b6af3c86b2bc
5b5b1fdaf5362 53224b502ea4de7925ca5ed3d8a43dd4b500b2e8e4872bf9daea1fc3fec05edc 9 minutes ago Running kube-apiserver 0 30a0e41f011c9
*
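Neither kube-proxy nor coredns appears in the container list: kube-proxy never gets past CreateContainerError, and the coredns sandbox is recreated in a loop before any container starts. To list every container attempt (including exited ones) directly against CRI-O, something like this should work from the host:
$ minikube ssh -- sudo crictl ps -a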
* ==> describe nodes <==
* Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=76b94fb3c4e8ac5062daf70d60cf03ddcc0a741b
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_11_11T00_39_47_0700
minikube.k8s.io/version=v1.24.0
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
node.kubernetes.io/exclude-from-external-load-balancers=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Thu, 11 Nov 2021 08:39:43 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Thu, 11 Nov 2021 08:49:13 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Thu, 11 Nov 2021 08:45:02 +0000 Thu, 11 Nov 2021 08:39:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 11 Nov 2021 08:45:02 +0000 Thu, 11 Nov 2021 08:39:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 11 Nov 2021 08:45:02 +0000 Thu, 11 Nov 2021 08:39:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 11 Nov 2021 08:45:02 +0000 Thu, 11 Nov 2021 08:39:58 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
Capacity:
cpu: 12
ephemeral-storage: 389987396Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32710360Ki
pods: 110
Allocatable:
cpu: 12
ephemeral-storage: 389987396Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 32710360Ki
pods: 110
System Info:
Machine ID: bba0be70c47c400ea3cf7733f1c0b4c1
System UUID: 8e811e93-7aff-49dc-9832-a4860778eb33
Boot ID: 478aea25-bf20-4671-906e-9f335f049794
Kernel Version: 5.13.12-100.fc33.x86_64
OS Image: Ubuntu 20.04.2 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: cri-o://1.22.0
Kubelet Version: v1.22.3
Kube-Proxy Version: v1.22.3
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-78fcd69978-rnpww 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 9m17s
kube-system etcd-minikube 100m (0%) 0 (0%) 100Mi (0%) 0 (0%) 9m30s
kube-system kindnet-6848p 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 9m17s
kube-system kube-apiserver-minikube 250m (2%) 0 (0%) 0 (0%) 0 (0%) 9m23s
kube-system kube-controller-manager-minikube 200m (1%) 0 (0%) 0 (0%) 0 (0%) 9m31s
kube-system kube-proxy-vv4gk 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m17s
kube-system kube-scheduler-minikube 100m (0%) 0 (0%) 0 (0%) 0 (0%) 9m23s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9m15s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (7%) 100m (0%)
memory 220Mi (0%) 220Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 9m24s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 9m24s kubelet Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 9m24s kubelet Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 9m24s kubelet Node minikube status is now: NodeHasSufficientPID
Normal NodeReady 9m17s kubelet Node minikube status is now: NodeReady
*
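Note that the node itself reports Ready, so the start timeout appears to come from the pods that never become healthy rather than from the node condition. The same view can be reproduced at any time with:
$ kubectl describe node minikube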
* ==> dmesg <==
*
*
* ==> etcd [dbc9db510eb0f8d6a1364a8d9bcc6aaa51d92f0f25a5f240851a1f320dff5d82] <==
* {"level":"info","ts":"2021-11-11T08:39:39.254Z","caller":"etcdmain/etcd.go:72","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"info","ts":"2021-11-11T08:39:39.255Z","caller":"embed/etcd.go:131","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2021-11-11T08:39:39.255Z","caller":"embed/etcd.go:478","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-11-11T08:39:39.256Z","caller":"embed/etcd.go:139","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2021-11-11T08:39:39.257Z","caller":"embed/etcd.go:307","msg":"starting an etcd server","etcd-version":"3.5.0","git-sha":"946a5a6f2","go-version":"go1.16.3","go-os":"linux","go-arch":"amd64","max-cpu-set":12,"max-cpu-available":12,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-size-bytes":2147483648,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2021-11-11T08:39:39.260Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"1.766805ms"}
{"level":"info","ts":"2021-11-11T08:39:39.267Z","caller":"etcdserver/raft.go:448","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2021-11-11T08:39:39.268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2021-11-11T08:39:39.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2021-11-11T08:39:39.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2021-11-11T08:39:39.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2021-11-11T08:39:39.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2021-11-11T08:39:39.271Z","caller":"auth/store.go:1220","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2021-11-11T08:39:39.348Z","caller":"mvcc/kvstore.go:415","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2021-11-11T08:39:39.349Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2021-11-11T08:39:39.352Z","caller":"etcdserver/server.go:843","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.0","cluster-version":"to_be_decided"}
{"level":"info","ts":"2021-11-11T08:39:39.352Z","caller":"etcdserver/server.go:728","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2021-11-11T08:39:39.358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2021-11-11T08:39:39.359Z","caller":"membership/cluster.go:393","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2021-11-11T08:39:39.366Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2021-11-11T08:39:39.366Z","caller":"embed/etcd.go:580","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-11-11T08:39:39.366Z","caller":"embed/etcd.go:552","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2021-11-11T08:39:39.367Z","caller":"embed/etcd.go:276","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2021-11-11T08:39:39.367Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","caller":"etcdserver/server.go:2027","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","caller":"etcdserver/server.go:2476","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
{"level":"info","ts":"2021-11-11T08:39:40.270Z","caller":"etcdmain/main.go:47","msg":"notifying init daemon"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"etcdmain/main.go:53","msg":"successfully notified init daemon"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"membership/cluster.go:531","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"etcdserver/server.go:2500","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
{"level":"info","ts":"2021-11-11T08:39:40.271Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.49.2:2379"}
*
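etcd itself comes up cleanly and wins its single-node election. Logs for any individual component can be pulled the same way minikube does it — for example, for the etcd container listed above (crictl accepts abbreviated container IDs):
$ minikube ssh -- sudo crictl logs dbc9db510eb0f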
* ==> kernel <==
* 08:49:15 up 21 days, 6:02, 0 users, load average: 2.64, 2.01, 1.81
Linux minikube 5.13.12-100.fc33.x86_64 #1 SMP Wed Aug 18 20:12:01 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 20.04.2 LTS"
*
* ==> kube-apiserver [5b5b1fdaf536248d33c9a192f3558deea1324e36a16dcaaa42f8db37afdf195b] <==
* W1111 08:39:41.791014 1 genericapiserver.go:455] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1111 08:39:41.792624 1 genericapiserver.go:455] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W1111 08:39:41.796004 1 genericapiserver.go:455] Skipping API apps/v1beta2 because it has no resources.
W1111 08:39:41.796013 1 genericapiserver.go:455] Skipping API apps/v1beta1 because it has no resources.
W1111 08:39:41.797513 1 genericapiserver.go:455] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I1111 08:39:41.800476 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I1111 08:39:41.800484 1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W1111 08:39:41.815229 1 genericapiserver.go:455] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I1111 08:39:43.110757 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1111 08:39:43.110772 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1111 08:39:43.110964 1 dynamic_serving_content.go:129] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
I1111 08:39:43.111125 1 secure_serving.go:266] Serving securely on [::]:8443
I1111 08:39:43.111158 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1111 08:39:43.111183 1 dynamic_serving_content.go:129] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I1111 08:39:43.111213 1 autoregister_controller.go:141] Starting autoregister controller
I1111 08:39:43.111221 1 cache.go:32] Waiting for caches to sync for autoregister controller
I1111 08:39:43.111240 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I1111 08:39:43.111244 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I1111 08:39:43.111277 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I1111 08:39:43.111283 1 establishing_controller.go:76] Starting EstablishingController
I1111 08:39:43.111297 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I1111 08:39:43.111308 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1111 08:39:43.111320 1 crd_finalizer.go:266] Starting CRDFinalizer
I1111 08:39:43.111329 1 naming_controller.go:291] Starting NamingConditionController
I1111 08:39:43.111334 1 controller.go:85] Starting OpenAPI controller
I1111 08:39:43.111385 1 controller.go:83] Starting OpenAPI AggregationController
I1111 08:39:43.111405 1 apf_controller.go:312] Starting API Priority and Fairness config controller
I1111 08:39:43.111474 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1111 08:39:43.111481 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I1111 08:39:43.111503 1 dynamic_cafile_content.go:155] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I1111 08:39:43.111542 1 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I1111 08:39:43.111549 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I1111 08:39:43.111655 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1111 08:39:43.111557 1 available_controller.go:491] Starting AvailableConditionController
I1111 08:39:43.111748 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E1111 08:39:43.112262 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.49.2, ResourceVersion: 0, AdditionalErrorMsg:
I1111 08:39:43.150832 1 shared_informer.go:247] Caches are synced for node_authorizer
I1111 08:39:43.153903 1 controller.go:611] quota admission added evaluator for: namespaces
I1111 08:39:43.246604 1 cache.go:39] Caches are synced for AvailableConditionController controller
I1111 08:39:43.246671 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1111 08:39:43.247038 1 apf_controller.go:317] Running API Priority and Fairness config worker
I1111 08:39:43.247168 1 cache.go:39] Caches are synced for autoregister controller
I1111 08:39:43.247226 1 shared_informer.go:247] Caches are synced for crd-autoregister
I1111 08:39:43.247269 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I1111 08:39:44.111845 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1111 08:39:44.112059 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1111 08:39:44.122356 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I1111 08:39:44.132996 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I1111 08:39:44.133049 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1111 08:39:44.691967 1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1111 08:39:44.800385 1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1111 08:39:44.992308 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I1111 08:39:44.994976 1 controller.go:611] quota admission added evaluator for: endpoints
I1111 08:39:45.004358 1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
I1111 08:39:45.193093 1 controller.go:611] quota admission added evaluator for: serviceaccounts
I1111 08:39:46.568161 1 controller.go:611] quota admission added evaluator for: deployments.apps
I1111 08:39:46.586328 1 controller.go:611] quota admission added evaluator for: daemonsets.apps
I1111 08:39:51.854099 1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
I1111 08:39:58.692152 1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
I1111 08:39:58.806406 1 controller.go:611] quota admission added evaluator for: replicasets.apps
*
* ==> kube-controller-manager [bd5dd9a845f699ff29b07057ce76506ccb0b09a8db5b633e3b5d5aa854441415] <==
* I1111 08:39:57.950496 1 shared_informer.go:240] Waiting for caches to sync for endpoint
I1111 08:39:57.952056 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I1111 08:39:57.974411 1 shared_informer.go:247] Caches are synced for cronjob
I1111 08:39:57.974720 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1111 08:39:58.046840 1 shared_informer.go:247] Caches are synced for crt configmap
I1111 08:39:58.046981 1 shared_informer.go:247] Caches are synced for namespace
I1111 08:39:58.046966 1 shared_informer.go:247] Caches are synced for expand
I1111 08:39:58.046907 1 shared_informer.go:247] Caches are synced for TTL after finished
I1111 08:39:58.046985 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I1111 08:39:58.046884 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I1111 08:39:58.047071 1 shared_informer.go:247] Caches are synced for PV protection
I1111 08:39:58.047773 1 shared_informer.go:247] Caches are synced for service account
I1111 08:39:58.072003 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
W1111 08:39:58.153870 1 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I1111 08:39:58.188963 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I1111 08:39:58.190935 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I1111 08:39:58.192260 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I1111 08:39:58.193377 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I1111 08:39:58.200042 1 shared_informer.go:247] Caches are synced for node
I1111 08:39:58.200099 1 range_allocator.go:172] Starting range CIDR allocator
I1111 08:39:58.200113 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I1111 08:39:58.200130 1 shared_informer.go:247] Caches are synced for cidrallocator
I1111 08:39:58.202434 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I1111 08:39:58.211384 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I1111 08:39:58.217060 1 shared_informer.go:247] Caches are synced for GC
I1111 08:39:58.225990 1 shared_informer.go:247] Caches are synced for daemon sets
I1111 08:39:58.230081 1 shared_informer.go:247] Caches are synced for PVC protection
I1111 08:39:58.238449 1 shared_informer.go:247] Caches are synced for TTL
I1111 08:39:58.245102 1 shared_informer.go:247] Caches are synced for persistent volume
I1111 08:39:58.246553 1 shared_informer.go:247] Caches are synced for ReplicationController
I1111 08:39:58.251053 1 shared_informer.go:247] Caches are synced for endpoint
I1111 08:39:58.252369 1 shared_informer.go:247] Caches are synced for resource quota
I1111 08:39:58.257609 1 shared_informer.go:247] Caches are synced for taint
I1111 08:39:58.257676 1 node_lifecycle_controller.go:1398] Initializing eviction metric for zone:
I1111 08:39:58.257694 1 taint_manager.go:187] "Starting NoExecuteTaintManager"
W1111 08:39:58.257728 1 node_lifecycle_controller.go:1013] Missing timestamp for Node minikube. Assuming now as a timestamp.
I1111 08:39:58.257760 1 node_lifecycle_controller.go:1164] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1111 08:39:58.257806 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I1111 08:39:58.259704 1 shared_informer.go:247] Caches are synced for ReplicaSet
I1111 08:39:58.263680 1 shared_informer.go:247] Caches are synced for attach detach
I1111 08:39:58.266321 1 shared_informer.go:247] Caches are synced for disruption
I1111 08:39:58.266346 1 disruption.go:371] Sending events to api server.
I1111 08:39:58.269197 1 shared_informer.go:247] Caches are synced for resource quota
I1111 08:39:58.273512 1 shared_informer.go:247] Caches are synced for stateful set
I1111 08:39:58.278750 1 shared_informer.go:247] Caches are synced for ephemeral
I1111 08:39:58.287039 1 shared_informer.go:247] Caches are synced for deployment
I1111 08:39:58.287099 1 shared_informer.go:247] Caches are synced for HPA
I1111 08:39:58.291523 1 shared_informer.go:247] Caches are synced for endpoint_slice
I1111 08:39:58.297873 1 shared_informer.go:247] Caches are synced for job
I1111 08:39:58.675463 1 shared_informer.go:247] Caches are synced for garbage collector
I1111 08:39:58.694975 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vv4gk"
I1111 08:39:58.695683 1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6848p"
I1111 08:39:58.722083 1 shared_informer.go:247] Caches are synced for garbage collector
I1111 08:39:58.722094 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I1111 08:39:58.810712 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-78fcd69978 to 2"
I1111 08:39:58.845733 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-78fcd69978 to 1"
I1111 08:39:58.953239 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-v5djx"
I1111 08:39:58.957107 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-78fcd69978-rnpww"
I1111 08:39:58.968079 1 event.go:291] "Event occurred" object="kube-system/coredns-78fcd69978" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-78fcd69978-v5djx"
I1111 08:40:03.257924 1 node_lifecycle_controller.go:1191] Controller detected that some Nodes are Ready. Exiting master disruption mode.
*
* ==> kube-scheduler [7c14e2730cf7579304c0a044b5b8c35d7c70db76956ff91e9c6bd753eb43b3b1] <==
* I1111 08:39:40.387189 1 serving.go:347] Generated self-signed cert in-memory
W1111 08:39:43.149965 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W1111 08:39:43.149990 1 authentication.go:345] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W1111 08:39:43.149999 1 authentication.go:346] Continuing without authentication configuration. This may treat all requests as anonymous.
W1111 08:39:43.150004 1 authentication.go:347] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I1111 08:39:43.158945 1 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I1111 08:39:43.158973 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I1111 08:39:43.159119 1 secure_serving.go:200] Serving securely on 127.0.0.1:10259
I1111 08:39:43.159152 1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
E1111 08:39:43.159841 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1111 08:39:43.160065 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:43.162944 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1111 08:39:43.163412 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1111 08:39:43.163491 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1111 08:39:43.163546 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:43.163692 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:43.163746 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1111 08:39:43.163785 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1111 08:39:43.163796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1111 08:39:43.163828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1111 08:39:43.163890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1111 08:39:43.163896 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:43.163923 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1111 08:39:43.164065 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1111 08:39:43.995419 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E1111 08:39:44.043464 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:44.088958 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E1111 08:39:44.161270 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E1111 08:39:44.162803 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E1111 08:39:44.185443 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E1111 08:39:44.190693 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E1111 08:39:44.349467 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:44.395168 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:44.410757 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E1111 08:39:44.448400 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E1111 08:39:44.455311 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E1111 08:39:44.489620 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E1111 08:39:44.559758 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E1111 08:39:44.570601 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E1111 08:39:45.782969 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
E1111 08:39:45.783086 1 plugin.go:138] "getting namespace, assuming empty set of namespace labels" err="namespace \"kube-system\" not found" namespace="kube-system"
I1111 08:39:47.159958 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
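The kubelet log below shows the two errors that keep the cluster from converging: kube-proxy fails with "apply caps: operation not permitted", and the coredns sandbox fails with could not add IP address to "cni0": permission denied. Both happen inside the node container, which suggests a host-side restriction may be involved rather than a Kubernetes misconfiguration. One quick host check is whether the machine runs cgroup v2 (the Fedora 33 default) — a generic check, not minikube-specific:
$ stat -fc %T /sys/fs/cgroup/
This prints cgroup2fs on a cgroup v2 host and tmpfs on v1.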
* ==> kubelet <==
* -- Logs begin at Thu 2021-11-11 08:39:25 UTC, end at Thu 2021-11-11 08:49:15 UTC. --
Nov 11 08:48:06 minikube kubelet[1175]: I1111 08:48:06.866086 1175 scope.go:110] "RemoveContainer" containerID="69cf82bf72d4a6468564d7b3090770f805d21a64d980871007e8350df879ef5b"
Nov 11 08:48:06 minikube kubelet[1175]: E1111 08:48:06.866670 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b883d1e7-308f-4407-addf-816d3107afce)\"" pod="kube-system/storage-provisioner" podUID=b883d1e7-308f-4407-addf-816d3107afce
Nov 11 08:48:08 minikube kubelet[1175]: E1111 08:48:08.839653 1175 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(45420ce1a1e3bd8bf416b168908ce3ec61b8448a7aa4fa3dd0be64fd0a408fd6): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Nov 11 08:48:08 minikube kubelet[1175]: E1111 08:48:08.839755 1175 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(45420ce1a1e3bd8bf416b168908ce3ec61b8448a7aa4fa3dd0be64fd0a408fd6): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:08 minikube kubelet[1175]: E1111 08:48:08.839820 1175 kuberuntime_manager.go:818] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(45420ce1a1e3bd8bf416b168908ce3ec61b8448a7aa4fa3dd0be64fd0a408fd6): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:08 minikube kubelet[1175]: E1111 08:48:08.839919 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(45420ce1a1e3bd8bf416b168908ce3ec61b8448a7aa4fa3dd0be64fd0a408fd6): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \\\"crio\\\": failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-78fcd69978-rnpww" podUID=4446202f-a6a4-466b-bfda-b6cfbc909d43
Nov 11 08:48:13 minikube kubelet[1175]: E1111 08:48:13.973456 1175 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: time=\"2021-11-11T08:48:13Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" podSandboxID="0869e5ba1e4df1957eb874355af1028009ed064be4e2580447fd36d36c209d23"
Nov 11 08:48:13 minikube kubelet[1175]: E1111 08:48:13.973547 1175 kuberuntime_manager.go:898] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.22.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vv4gk_kube-system(b25d1a55-5cf6-452f-bcf2-f4c499ceb767): CreateContainerError: container create failed: time="2021-11-11T08:48:13Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Nov 11 08:48:13 minikube kubelet[1175]: E1111 08:48:13.973579 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2021-11-11T08:48:13Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-vv4gk" podUID=b25d1a55-5cf6-452f-bcf2-f4c499ceb767
Nov 11 08:48:14 minikube kubelet[1175]: E1111 08:48:14.381043 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:48:16 minikube kubelet[1175]: I1111 08:48:16.328886 1175 scope.go:110] "RemoveContainer" containerID="61f2a6753334aa7fe6f300f13683c541fb95b374d7fd9dd5842c9f569dde186d"
Nov 11 08:48:16 minikube kubelet[1175]: I1111 08:48:16.329245 1175 scope.go:110] "RemoveContainer" containerID="f573b525149556b57e8182a5d8ae2fea25a232b2cc9a6621119778ae1f3c07ce"
Nov 11 08:48:16 minikube kubelet[1175]: E1111 08:48:16.330365 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-6848p_kube-system(8310e0ee-e45a-46ad-bb59-cdea08b0b146)\"" pod="kube-system/kindnet-6848p" podUID=8310e0ee-e45a-46ad-bb59-cdea08b0b146
Nov 11 08:48:17 minikube kubelet[1175]: I1111 08:48:17.866214 1175 scope.go:110] "RemoveContainer" containerID="69cf82bf72d4a6468564d7b3090770f805d21a64d980871007e8350df879ef5b"
Nov 11 08:48:17 minikube kubelet[1175]: E1111 08:48:17.866807 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b883d1e7-308f-4407-addf-816d3107afce)\"" pod="kube-system/storage-provisioner" podUID=b883d1e7-308f-4407-addf-816d3107afce
Nov 11 08:48:24 minikube kubelet[1175]: W1111 08:48:24.421868 1175 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
Nov 11 08:48:24 minikube kubelet[1175]: W1111 08:48:24.430903 1175 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
Nov 11 08:48:24 minikube kubelet[1175]: E1111 08:48:24.434098 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:48:24 minikube kubelet[1175]: E1111 08:48:24.881386 1175 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(b8ebf337abeca7de02a82a6d1a886584e463d4ac6a08c9f5f4297613195c3839): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Nov 11 08:48:24 minikube kubelet[1175]: E1111 08:48:24.881571 1175 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(b8ebf337abeca7de02a82a6d1a886584e463d4ac6a08c9f5f4297613195c3839): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:24 minikube kubelet[1175]: E1111 08:48:24.881675 1175 kuberuntime_manager.go:818] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(b8ebf337abeca7de02a82a6d1a886584e463d4ac6a08c9f5f4297613195c3839): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:24 minikube kubelet[1175]: E1111 08:48:24.881901 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(b8ebf337abeca7de02a82a6d1a886584e463d4ac6a08c9f5f4297613195c3839): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \\\"crio\\\": failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-78fcd69978-rnpww" podUID=4446202f-a6a4-466b-bfda-b6cfbc909d43
Nov 11 08:48:29 minikube kubelet[1175]: E1111 08:48:29.073869 1175 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: time=\"2021-11-11T08:48:29Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" podSandboxID="0869e5ba1e4df1957eb874355af1028009ed064be4e2580447fd36d36c209d23"
Nov 11 08:48:29 minikube kubelet[1175]: E1111 08:48:29.073967 1175 kuberuntime_manager.go:898] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.22.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vv4gk_kube-system(b25d1a55-5cf6-452f-bcf2-f4c499ceb767): CreateContainerError: container create failed: time="2021-11-11T08:48:29Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Nov 11 08:48:29 minikube kubelet[1175]: E1111 08:48:29.074000 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2021-11-11T08:48:29Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-vv4gk" podUID=b25d1a55-5cf6-452f-bcf2-f4c499ceb767
Nov 11 08:48:30 minikube kubelet[1175]: I1111 08:48:30.866474 1175 scope.go:110] "RemoveContainer" containerID="f573b525149556b57e8182a5d8ae2fea25a232b2cc9a6621119778ae1f3c07ce"
Nov 11 08:48:30 minikube kubelet[1175]: E1111 08:48:30.867306 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kindnet-cni pod=kindnet-6848p_kube-system(8310e0ee-e45a-46ad-bb59-cdea08b0b146)\"" pod="kube-system/kindnet-6848p" podUID=8310e0ee-e45a-46ad-bb59-cdea08b0b146
Nov 11 08:48:32 minikube kubelet[1175]: I1111 08:48:32.866306 1175 scope.go:110] "RemoveContainer" containerID="69cf82bf72d4a6468564d7b3090770f805d21a64d980871007e8350df879ef5b"
Nov 11 08:48:32 minikube kubelet[1175]: E1111 08:48:32.866862 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(b883d1e7-308f-4407-addf-816d3107afce)\"" pod="kube-system/storage-provisioner" podUID=b883d1e7-308f-4407-addf-816d3107afce
Nov 11 08:48:34 minikube kubelet[1175]: W1111 08:48:34.479559 1175 conversion.go:111] Could not get instant cpu stats: cumulative stats decrease
Nov 11 08:48:34 minikube kubelet[1175]: E1111 08:48:34.481167 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:48:37 minikube kubelet[1175]: E1111 08:48:37.987725 1175 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(cc40974613b03c9a3fe32ceb24c22f212678e63a4986d4eeeb4d61c93beed28e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Nov 11 08:48:37 minikube kubelet[1175]: E1111 08:48:37.987858 1175 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(cc40974613b03c9a3fe32ceb24c22f212678e63a4986d4eeeb4d61c93beed28e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:37 minikube kubelet[1175]: E1111 08:48:37.987918 1175 kuberuntime_manager.go:818] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(cc40974613b03c9a3fe32ceb24c22f212678e63a4986d4eeeb4d61c93beed28e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:37 minikube kubelet[1175]: E1111 08:48:37.988046 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(cc40974613b03c9a3fe32ceb24c22f212678e63a4986d4eeeb4d61c93beed28e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \\\"crio\\\": failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-78fcd69978-rnpww" podUID=4446202f-a6a4-466b-bfda-b6cfbc909d43
Nov 11 08:48:43 minikube kubelet[1175]: E1111 08:48:42.999989 1175 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: time=\"2021-11-11T08:48:42Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" podSandboxID="0869e5ba1e4df1957eb874355af1028009ed064be4e2580447fd36d36c209d23"
Nov 11 08:48:43 minikube kubelet[1175]: E1111 08:48:43.000087 1175 kuberuntime_manager.go:898] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.22.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vv4gk_kube-system(b25d1a55-5cf6-452f-bcf2-f4c499ceb767): CreateContainerError: container create failed: time="2021-11-11T08:48:42Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Nov 11 08:48:43 minikube kubelet[1175]: E1111 08:48:43.000121 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2021-11-11T08:48:42Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-vv4gk" podUID=b25d1a55-5cf6-452f-bcf2-f4c499ceb767
Nov 11 08:48:44 minikube kubelet[1175]: E1111 08:48:44.534274 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:48:44 minikube kubelet[1175]: I1111 08:48:44.866200 1175 scope.go:110] "RemoveContainer" containerID="f573b525149556b57e8182a5d8ae2fea25a232b2cc9a6621119778ae1f3c07ce"
Nov 11 08:48:47 minikube kubelet[1175]: I1111 08:48:47.866133 1175 scope.go:110] "RemoveContainer" containerID="69cf82bf72d4a6468564d7b3090770f805d21a64d980871007e8350df879ef5b"
Nov 11 08:48:47 minikube kubelet[1175]: W1111 08:48:47.905773 1175 container.go:586] Failed to update stats for container "/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/crio-21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5.scope": /sys/fs/cgroup/cpuset/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/crio-21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5.scope/cpuset.cpus found to be empty, continuing to push stats
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.313637 1175 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.313794 1175 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.313890 1175 kuberuntime_manager.go:818] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.314105 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(ca121c4331baf760f5af1149bf377a55626a4585610411cf454ab133564aea4e): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \\\"crio\\\": failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-78fcd69978-rnpww" podUID=4446202f-a6a4-466b-bfda-b6cfbc909d43
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.585861 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/crio-21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.996274 1175 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: time=\"2021-11-11T08:48:54Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" podSandboxID="0869e5ba1e4df1957eb874355af1028009ed064be4e2580447fd36d36c209d23"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.996360 1175 kuberuntime_manager.go:898] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.22.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vv4gk_kube-system(b25d1a55-5cf6-452f-bcf2-f4c499ceb767): CreateContainerError: container create failed: time="2021-11-11T08:48:54Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Nov 11 08:48:54 minikube kubelet[1175]: E1111 08:48:54.996395 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2021-11-11T08:48:54Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-vv4gk" podUID=b25d1a55-5cf6-452f-bcf2-f4c499ceb767
Nov 11 08:49:04 minikube kubelet[1175]: E1111 08:49:04.642291 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
Nov 11 08:49:05 minikube kubelet[1175]: W1111 08:49:05.438874 1175 container.go:586] Failed to update stats for container "/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope": /sys/fs/cgroup/cpuset/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/cpuset.cpus found to be empty, continuing to push stats
Nov 11 08:49:05 minikube kubelet[1175]: E1111 08:49:05.981089 1175 remote_runtime.go:228] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = container create failed: time=\"2021-11-11T08:49:05Z\" level=error msg=\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\"\n" podSandboxID="0869e5ba1e4df1957eb874355af1028009ed064be4e2580447fd36d36c209d23"
Nov 11 08:49:05 minikube kubelet[1175]: E1111 08:49:05.981183 1175 kuberuntime_manager.go:898] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.22.3,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ckblp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vv4gk_kube-system(b25d1a55-5cf6-452f-bcf2-f4c499ceb767): CreateContainerError: container create failed: time="2021-11-11T08:49:05Z" level=error msg="container_linux.go:380: starting container process caused: apply caps: operation not permitted"
Nov 11 08:49:05 minikube kubelet[1175]: E1111 08:49:05.981218 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerError: \"container create failed: time=\\\"2021-11-11T08:49:05Z\\\" level=error msg=\\\"container_linux.go:380: starting container process caused: apply caps: operation not permitted\\\"\\n\"" pod="kube-system/kube-proxy-vv4gk" podUID=b25d1a55-5cf6-452f-bcf2-f4c499ceb767
Nov 11 08:49:10 minikube kubelet[1175]: E1111 08:49:10.984108 1175 remote_runtime.go:116] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied"
Nov 11 08:49:10 minikube kubelet[1175]: E1111 08:49:10.984235 1175 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:49:10 minikube kubelet[1175]: E1111 08:49:10.984309 1175 kuberuntime_manager.go:818] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \"crio\": failed to set bridge addr: could not add IP address to \"cni0\": permission denied" pod="kube-system/coredns-78fcd69978-rnpww"
Nov 11 08:49:10 minikube kubelet[1175]: E1111 08:49:10.984471 1175 pod_workers.go:836] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-78fcd69978-rnpww_kube-system(4446202f-a6a4-466b-bfda-b6cfbc909d43)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_coredns-78fcd69978-rnpww_kube-system_4446202f-a6a4-466b-bfda-b6cfbc909d43_0(fd2916aaebf7654f741e42faa6c710e113a2c5ef6d03da3c7da2f71cde95ff55): error adding pod kube-system_coredns-78fcd69978-rnpww to CNI network \\\"crio\\\": failed to set bridge addr: could not add IP address to \\\"cni0\\\": permission denied\"" pod="kube-system/coredns-78fcd69978-rnpww" podUID=4446202f-a6a4-466b-bfda-b6cfbc909d43
Nov 11 08:49:14 minikube kubelet[1175]: E1111 08:49:14.695155 1175 cadvisor_stats_provider.go:415] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope/system.slice/docker-9b158304af40ab10ec7f5c7ce442b0615eb6de2aac9acfbf553be5355039a49e.scope\": RecentStats: unable to find data in memory cache]"
*
* ==> storage-provisioner [21e6a470cefeee54c9e2e5cf3ddef00e522f5208e8ae5e3649c242f6f5b19ba5] <==
* I1111 08:48:47.977472 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
*
* ==> storage-provisioner [69cf82bf72d4a6468564d7b3090770f805d21a64d980871007e8350df879ef5b] <==
* I1111 08:45:32.987869 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F1111 08:46:02.990169 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
About this issue
- State: closed
- Created 3 years ago
- Comments: 15 (7 by maintainers)
Commits related to this issue
- ci: Remove workaround to test 'cri-o' in GH actions There was an issue running minikube with --container-runtime=cri-o in GitHub actions due to docker version older than v0.23.0 https://github.com/ku... — committed to inspektor-gadget/inspektor-gadget by mqasimsarfraz a year ago (the same commit also appears in the mqasimsarfraz and matthyx forks)
Hi all, I deep dived into the issue and found out the root cause is missing capabilities in the minikube docker container. cri-o updated its capability list, and if the container in which cri-o will eventually be launched does not have those capabilities in its CapBnd set, we are expected to get "operation not permitted" while applying those additional capabilities. You can do a quick test by running the following and, if it works fine, you are good. Or you can check the capabilities in the container directly, as in the sketch below:
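(The exact commands were lost in this copy of the comment; the following is a hedged reconstruction of both checks. The alpine image and the use of capsh from the libcap package are my choices, not necessarily what the commenter ran.)

# Quick test: start a throwaway privileged container and read its bounding
# capability set (CapBnd) from its own /proc entry.
$ docker run --rm --privileged alpine grep CapBnd /proc/1/status

# Or inspect the already-running minikube container the same way, then
# decode the hex mask into capability names with capsh (from libcap):
$ docker exec minikube grep CapBnd /proc/1/status
$ capsh --decode=$(docker exec minikube grep CapBnd /proc/1/status | awk '{print $2}')
# An affected docker shows a mask of 0000003fffffffff, i.e. capabilities
# 0-37 only, so cap_perfmon, cap_bpf and cap_checkpoint_restore are absent
# from the decoded list.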
we can see CAP_PERFMON, CAP_BPF, and CAP_CHECKPOINT_RESTORE are missing from the list. This was on a host running docker 20.10.18, and I figured out it was missing the fix. The fix is already merged to master, so upgrading to docker 22.06.0-beta.0 solves the issue. Currently, I see running a test docker release as the only workaround. I will add a comment to the fix that it should be backported to a stable release.
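(The comment does not show an install command for the test release; one possibility, my assumption rather than the commenter's instruction, is Docker's pre-release convenience script at test.docker.com:)

# Install a test-channel docker build, then confirm the server version.
$ curl -fsSL https://test.docker.com -o test-docker.sh
$ sudo sh test-docker.sh
$ docker version --format '{{.Server.Version}}'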
Sorry for the delay, and thanks for the deep dive on the issue; it makes complete sense why it's not working. We should warn users about this when they try to start minikube with the docker driver and cri-o, and link them to this issue. Unfortunately, it doesn't seem like there's anything else we can do from our side. Once Docker makes the version with the fix GA, we can update the message to suggest users update to the latest Docker version.
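(To make the proposed warning concrete, here is a minimal sketch of such a preflight check in shell. This is not minikube's actual code, and the fixed-version constant 22.06.0 is an assumption taken from the comment above.)

# Warn when the host docker server predates the capability fix.
FIXED=22.06.0
CUR=$(docker version --format '{{.Server.Version}}')
# sort -V orders versions; if the oldest of the pair is not FIXED,
# the current server is older than the fix.
if [ "$(printf '%s\n%s\n' "$FIXED" "$CUR" | sort -V | head -n1)" != "$FIXED" ]; then
  echo "WARNING: docker $CUR may lack the CapBnd fix required for --container-runtime=cri-o"
fi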