minikube: Exiting due to K8S_KUBELET_NOT_RUNNING

Steps to reproduce the issue:

  1. minikube start

Full output of minikube logs command:

  • ==> Docker <==

  • – Logs begin at Thu 2021-07-08 09:23:44 UTC, end at Thu 2021-07-08 09:33:41 UTC. –
    Jul 08 09:23:44 minikube systemd[1]: Starting Docker Application Container Engine...
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.998760636Z" level=info msg="Starting up"
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999619832Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999639457Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999659502Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Jul 08 09:23:44 minikube dockerd[148]: time="2021-07-08T09:23:44.999668581Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000894168Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000921896Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000941940Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.000951439Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.033814395Z" level=info msg="Loading containers: start."
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.110522049Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.185317850Z" level=info msg="Loading containers: done."
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.204623430Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=btrfs version=20.10.7
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.204710383Z" level=info msg="Daemon has completed initialization"
    Jul 08 09:23:45 minikube systemd[1]: Started Docker Application Container Engine.
    Jul 08 09:23:45 minikube dockerd[148]: time="2021-07-08T09:23:45.240065021Z" level=info msg="API listen on /run/docker.sock"
    Jul 08 09:23:47 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
    Jul 08 09:23:48 minikube systemd[1]: Stopping Docker Application Container Engine...
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.056899649Z" level=info msg="Processing signal 'terminated'"
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.058097719Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
    Jul 08 09:23:48 minikube dockerd[148]: time="2021-07-08T09:23:48.058785872Z" level=info msg="Daemon shutdown complete"
    Jul 08 09:23:48 minikube systemd[1]: docker.service: Succeeded.
    Jul 08 09:23:48 minikube systemd[1]: Stopped Docker Application Container Engine.
    Jul 08 09:23:48 minikube systemd[1]: Starting Docker Application Container Engine...
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.107522303Z" level=info msg="Starting up"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108825275Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108855796Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108892952Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.108922495Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109826390Z" level=info msg="parsed scheme: \"unix\"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109852650Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109872765Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.109884638Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.123476795Z" level=info msg="Loading containers: start."
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.303878277Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.364591769Z" level=info msg="Loading containers: done."
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.372909600Z" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=btrfs version=20.10.7
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.373007379Z" level=info msg="Daemon has completed initialization"
    Jul 08 09:23:48 minikube systemd[1]: Started Docker Application Container Engine.
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.397703432Z" level=info msg="API listen on [::]:2376"
    Jul 08 09:23:48 minikube dockerd[392]: time="2021-07-08T09:23:48.400915954Z" level=info msg="API listen on /var/run/docker.sock"

  • ==> container status <==

  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID

  • ==> describe nodes <==

  • ==> dmesg <==

  • [ +0.016734] FAT-fs (sda1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.
    [Jul 4 19:47] NFSD: Using UMH upcall client tracking operations.
    [Jul 5 16:27] Chrome_ChildIOT invoked oom-killer: gfp_mask=0x100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=300
    [ +0.000004] CPU: 1 PID: 25301 Comm: Chrome_ChildIOT Not tainted 5.10.0-7-amd64 #1 Debian 5.10.40-1
    [ +0.000001] Hardware name: HUAWEI KLVL-WXX9/KLVL-WXX9-PCB, BIOS 1.06 09/14/2020
    [ +0.000000] Call Trace:
    [ +0.000007] dump_stack+0x6b/0x83
    [ +0.000002] dump_header+0x4a/0x1f0
    [ +0.000002] oom_kill_process.cold+0xb/0x10
    [ +0.000002] out_of_memory+0x1bd/0x500
    [ +0.000002] __alloc_pages_slowpath.constprop.0+0xb8c/0xc60
    [ +0.000002] __alloc_pages_nodemask+0x2da/0x310
    [ +0.000001] pagecache_get_page+0x16d/0x380
    [ +0.000002] filemap_fault+0x69e/0x900
    [ +0.000002] ? filemap_map_pages+0x223/0x410
    [ +0.000001] __do_fault+0x36/0x120
    [ +0.000002] handle_mm_fault+0x118e/0x1b80
    [ +0.000003] do_user_addr_fault+0x1bb/0x3f0
    [ +0.000002] ? _copy_to_user+0x1c/0x30
    [ +0.000002] exc_page_fault+0x7b/0x160
    [ +0.000002] ? asm_exc_page_fault+0x8/0x30
    [ +0.000001] asm_exc_page_fault+0x1e/0x30
    [ +0.000001] RIP: 0033:0x564776cdafff
    [ +0.000004] Code: Unable to access opcode bytes at RIP 0x564776cdafd5.
    [ +0.000001] RSP: 002b:00007f6de45b80d0 EFLAGS: 00010246
    [ +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000149f8a85dd78
    [ +0.000000] RDX: 00007f6de45b8180 RSI: 0000000000000000 RDI: 0000000000000000
    [ +0.000001] RBP: 00007f6de45b8170 R08: 0000149f8a85dd00 R09: 0000149f8a85dd00
    [ +0.000001] R10: 00007ffdff5df000 R11: 0000000000000286 R12: 0000149f8a85da80
    [ +0.000000] R13: 0000149f8bdc2628 R14: 0000000000000000 R15: 0000149f8a85dd68
    [ +0.000002] Mem-Info:
    [ +0.000004] active_anon:1581 inactive_anon:3639196 isolated_anon:0 active_file:279 inactive_file:4213 isolated_file:376 unevictable:3018 dirty:48 writeback:0 slab_reclaimable:18688 slab_unreclaimable:52330 mapped:116625 shmem:122041 pagetables:21825 bounce:0 free:32819 free_pcp:6283 free_cma:0
    [ +0.000002] Node 0 active_anon:6324kB inactive_anon:14556784kB active_file:1116kB inactive_file:16852kB unevictable:12072kB isolated(anon):0kB isolated(file):1504kB mapped:466500kB dirty:192kB writeback:0kB shmem:488164kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 2007040kB writeback_tmp:0kB kernel_stack:25536kB all_unreclaimable? no
    [ +0.000001] Node 0 DMA free:15904kB min:68kB low:84kB high:100kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15904kB mlocked:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 3052 15274 15274 15274
    [ +0.000002] Node 0 DMA32 free:58848kB min:13492kB low:16864kB high:20236kB reserved_highatomic:0KB active_anon:0kB inactive_anon:3014072kB active_file:800kB inactive_file:2032kB unevictable:0kB writepending:20kB present:3271116kB managed:3270552kB mlocked:0kB pagetables:1964kB bounce:0kB free_pcp:9572kB local_pcp:696kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 0 12221 12221 12221
    [ +0.000002] Node 0 Normal free:56524kB min:175560kB low:189064kB high:202568kB reserved_highatomic:2048KB active_anon:6324kB inactive_anon:11542712kB active_file:1764kB inactive_file:14948kB unevictable:12072kB writepending:172kB present:12832000kB managed:12519996kB mlocked:12072kB pagetables:85336kB bounce:0kB free_pcp:15652kB local_pcp:1312kB free_cma:0kB
    [ +0.000002] lowmem_reserve[]: 0 0 0 0 0
    [ +0.000002] Node 0 DMA: 2*4kB (U) 1*8kB (U) 1*16kB (U) 2*32kB (U) 1*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15904kB
    [ +0.000007] Node 0 DMA32: 35*4kB (UME) 38*8kB (UME) 41*16kB (UE) 183*32kB (UE) 190*64kB (UME) 134*128kB (UE) 82*256kB (UE) 2*512kB (M) 1*1024kB (M) 0*2048kB 0*4096kB = 59308kB
    [ +0.000007] Node 0 Normal: 798*4kB (UMEH) 1746*8kB (UMEH) 1489*16kB (UEH) 503*32kB (UEH) 4*64kB (UH) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 57336kB
    [ +0.000008] 129421 total pagecache pages
    [ +0.000000] 0 pages in swap cache
    [ +0.000001] Swap cache stats: add 0, delete 0, find 0/0
    [ +0.000000] Free swap = 0kB
    [ +0.000001] Total swap = 0kB
    [ +0.000000] 4029777 pages RAM
    [ +0.000001] 0 pages HighMem/MovableOnly
    [ +0.000000] 78164 pages reserved
    [ +0.000000] 0 pages hwpoisoned
    [ +0.000271] Out of memory: Killed process 4917 (Web Content) total-vm:39932412kB, anon-rss:8848848kB, file-rss:0kB, shmem-rss:62100kB, UID:1000 pgtables:31076kB oom_score_adj:0
    [Jul 6 15:49] kauditd_printk_skb: 14 callbacks suppressed
    [Jul 6 15:51] kauditd_printk_skb: 8 callbacks suppressed
    [Jul 7 06:15] psi: inconsistent task state! task=610341:gnome-control-c cpu=8 psi_flags=1 clear=1 set=0

  • ==> kernel <==

  • 09:33:41 up 5 days, 22:43, 0 users, load average: 1.38, 0.65, 0.48
    Linux minikube 5.10.0-7-amd64 #1 SMP Debian 5.10.40-1 (2021-05-28) x86_64 x86_64 x86_64 GNU/Linux
    PRETTY_NAME="Ubuntu 20.04.2 LTS"

  • ==> kubelet <==

  • – Logs begin at Thu 2021-07-08 09:23:44 UTC, end at Thu 2021-07-08 09:33:41 UTC. –
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.344583 33857 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354613 33857 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354644 33857 status_manager.go:157] "Starting to sync pod status with apiserver"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.354668 33857 kubelet.go:1846] "Starting kubelet main sync loop"
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.354719 33857 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.355255 33857 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356369 33857 client.go:86] parsed scheme: "unix"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356388 33857 client.go:86] scheme "unix" not registered, fallback to default scheme
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356414 33857 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>}
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.356425 33857 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403071 33857 cpu_manager.go:199] "Starting CPU manager" policy="none"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403088 33857 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403104 33857 state_mem.go:36] "Initialized new in-memory state store"
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403196 33857 state_mem.go:88] "Updated default CPUSet" cpuSet=""
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403207 33857 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
    Jul 08 09:33:38 minikube kubelet[33857]: I0708 09:33:38.403214 33857 policy_none.go:44] "None policy: Start"
    Jul 08 09:33:38 minikube kubelet[33857]: W0708 09:33:38.403234 33857 fs.go:588] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
    Jul 08 09:33:38 minikube kubelet[33857]: E0708 09:33:38.403252 33857 kubelet.go:1384] "Failed to start ContainerManager" err="failed to get rootfs info: failed to get device for dir \"/var/lib/kubelet\": could not find device with major: 0, minor: 27 in cached partitions map"
    Jul 08 09:33:38 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
    Jul 08 09:33:38 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
    Jul 08 09:33:39 minikube systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 120.
    Jul 08 09:33:39 minikube systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
    Jul 08 09:33:39 minikube systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.391576 34059 server.go:440] "Kubelet version" kubeletVersion="v1.21.2"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.392133 34059 server.go:851] "Client rotation is on, will bootstrap in background"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.396240 34059 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.398294 34059 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
    Jul 08 09:33:39 minikube kubelet[34059]: W0708 09:33:39.398449 34059 manager.go:159] Cannot detect current cgroup on cgroup v2
    Jul 08 09:33:39 minikube kubelet[34059]: W0708 09:33:39.469453 34059 fs.go:214] stat failed on /dev/mapper/nvme0n1p3_crypt with error: no such file or directory
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516652 34059 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516881 34059 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516936 34059 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516951 34059 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516961 34059 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.516969 34059 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517027 34059 kubelet.go:307] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517054 34059 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.517066 34059 client.go:97] "Start docker client with request timeout" timeout="2m0s"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.524936 34059 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.524978 34059 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.531424 34059 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="kubernetes.io/no-op"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.538875 34059 docker_service.go:264] "Docker Info" dockerInfo=&{ID:MFPM:VZAV:XQDQ:VCBD:DLJJ:VCKN:KBP5:4XYA:VUKJ:CNKR:Y6E4:7XEY Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:10 Driver:btrfs DriverStatus:[[Build Version Btrfs v5.4.1 ] [Library Version 102]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:25 OomKillDisable:false NGoroutines:35 SystemTime:2021-07-08T09:33:39.531799523Z LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:2 NEventsListener:0 KernelVersion:5.10.0-7-amd64 OperatingSystem:Ubuntu 20.04.2 LTS OSVersion:20.04 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00062e150 NCPU:12 MemTotal:16185806848 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy:control-plane.minikube.internal Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:20.10.7 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:d71fcd7d8303cbf684402823e425e9dd2e99285d Expected:d71fcd7d8303cbf684402823e425e9dd2e99285d} RuncCommit:{ID:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7 Expected:b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: DefaultAddressPools:[] Warnings:[]}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.538897 34059 docker_service.go:277] "Setting cgroupDriver" cgroupDriver="systemd"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.548983 34059 remote_runtime.go:62] parsed scheme: ""
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549002 34059 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549028 34059 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549038 34059 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549069 34059 remote_image.go:50] parsed scheme: ""
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549075 34059 remote_image.go:50] scheme "" not registered, fallback to default scheme
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549083 34059 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549089 34059 clientconn.go:948] ClientConn switching balancer to "pick_first"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549152 34059 kubelet.go:404] "Attempting to sync node with API server"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549165 34059 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549186 34059 kubelet.go:283] "Adding apiserver pod source"
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.549198 34059 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
    Jul 08 09:33:39 minikube kubelet[34059]: E0708 09:33:39.549796 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:39 minikube kubelet[34059]: E0708 09:33:39.549830 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:39 minikube kubelet[34059]: I0708 09:33:39.557065 34059 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="20.10.7" apiVersion="1.41.0"
    Jul 08 09:33:40 minikube kubelet[34059]: E0708 09:33:40.643187 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
    Jul 08 09:33:40 minikube kubelet[34059]: E0708 09:33:40.782927 34059 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
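The decisive entry in this log is the repeated ContainerManager failure ("could not find device with major: 0, minor: 27 in cached partitions map"), which together with graphdriver(s)=btrfs in the Docker log points at the kubelet being unable to resolve the btrfs device backing /var/lib/kubelet. One quick way to confirm btrfs is in play, using only standard commands (the paths here are the defaults, assumed rather than taken from the report):

# Filesystem type backing the kubelet dir inside the minikube node:
minikube ssh -- stat -f -c %T /var/lib/kubelet

# Storage driver the Docker daemon selected:
docker info --format '{{.Driver}}'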

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 17 (9 by maintainers)

Most upvoted comments

@rafacouto I am curious: does adding this option fix the issue?

minikube delete --all
minikube start --feature-gates="LocalStorageCapacityIsolation=false"
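If that workaround takes effect, the cluster should come up cleanly; a quick sanity check with standard commands (not part of the original comment) would be:

minikube status
kubectl get nodes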

There is a PR, https://github.com/kubernetes/minikube/pull/12990, that could fix this.

You can always pass --preload=false to minikube start to skip downloading and using the preloaded images tarball.
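For reference, the full invocation would be the line below; note that skipping the preload only avoids the tarball, and does not by itself address the btrfs rootfs detection failure in the kubelet log:

minikube start --preload=false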

@rafacouto You are correct, we should probably exit the program if the user is using btrfs. What I'll probably do is add the fatal error, but also add a --force-btrfs flag in case the user really wants to try btrfs, and advertise the flag in the fatal message.
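For illustration, the proposed guard boils down to logic like the shell sketch below. The --force-btrfs name comes from the comment above; the FORCE_BTRFS variable, the checked path, and the message wording are assumptions for the sketch, not actual minikube code:

# Sketch only: approximates the proposed btrfs guard outside minikube.
# FORCE_BTRFS stands in for the proposed --force-btrfs flag.
if [ "$(findmnt -no FSTYPE -T /var/lib/docker)" = "btrfs" ] && [ "${FORCE_BTRFS:-false}" != "true" ]; then
  echo "Exiting: btrfs storage is known to break the kubelet's ContainerManager." >&2
  echo "Pass --force-btrfs to attempt to run on btrfs anyway." >&2
  exit 1
fi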