minikube: docker macOS: This control plane is not running! (state=Stopped)
I tried to open a tunnel in order to use a LoadBalancer service, as described in the documentation (https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/), but the command currently fails on my MacBook.
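For reference, this is the workflow I am following (a minimal sketch; the hello-minikube deployment name, image, and port are placeholders, not my actual workload):

$ kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
$ kubectl expose deployment hello-minikube --type=LoadBalancer --port=8080
$ minikube tunnel                 # in a separate terminal, left running
$ kubectl get svc hello-minikube  # EXTERNAL-IP should change from <pending> to an address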
The exact command to reproduce the issue:
$ minikube start --container-runtime=docker --driver=docker
$ minikube tunnel
The full output of the command that failed:
$ minikube start --container-runtime=docker --driver=docker
minikube v1.9.0 on Darwin 10.14.6
Using the docker driver based on user configuration
Pulling base image ...
Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=4000MB (5948MB available) ...
Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
  - kubeadm.pod-network-cidr=10.244.0.0/16
Enabling addons: default-storageclass, storage-provisioner
Done! kubectl is now configured to use "minikube"
$ minikube tunnel
This control plane is not running! (state=Stopped)
This is unusual - you may want to investigate using 'minikube logs'
To fix this, run: minikube start
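The cluster itself appears healthy (see the logs below), so the state check done by the tunnel command seems to be wrong. These are the generic checks I would use to confirm the control plane is actually up (commands only, output omitted):

$ minikube status                   # host, kubelet and apiserver should be Running
$ kubectl get nodes                 # the minikube node should be Ready
$ docker ps --filter name=minikube  # the driver container should be Up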
The output of the minikube logs command:
==> Docker <==
-- Logs begin at Sat 2020-03-28 09:38:06 UTC, end at Sat 2020-03-28 09:41:14 UTC. --
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465616600Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465744804Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465889093Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465992204Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466068011Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466140591Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466213423Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466291643Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466364839Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466441731Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466514690Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466665718Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466793300Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466887456Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466961366Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467242373Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467388174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467619476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467860326Z" level=info msg="containerd successfully booted in 0.042684s"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.473027087Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001300d0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.475899757Z" level=info msg="parsed scheme: "unix"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476008023Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476090212Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] }" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476165162Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476339363Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006665b0, CONNECTING" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476780598Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006665b0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476416046Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477595125Z" level=info msg="parsed scheme: "unix"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477636157Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477653036Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] }" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477661956Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477701728Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000130ab0, CONNECTING" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.478118907Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000130ab0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.481828312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.491798318Z" level=info msg="Loading containers: start."
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.625413967Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.684470479Z" level=info msg="Loading containers: done."
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.706302751Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.706395685Z" level=info msg="Daemon has completed initialization"
Mar 28 09:38:18 minikube systemd[1]: Started Docker Application Container Engine.
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.732315007Z" level=info msg="API listen on /var/run/docker.sock"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.733326173Z" level=info msg="API listen on [::]:2376"
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.399033579Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3c4f3aac345dcafd53d2f0b47a22c028f032e4b5018dd7be2bba090579788683.sock debug=false pid=1643
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.429207998Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a80ced283495b5a9fe979978727190584ccbff8c00b8df5ab97944598a386cd1.sock debug=false pid=1662
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.438550687Z" level=info msg="shim containerd-shim started" address=/containerd-shim/01925aeed00689a217f4b5ed448385a2f34c904e9b02e0378618eeb328f44972.sock debug=false pid=1669
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.439814411Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a6834529cace5a9d99dce23b26f6e416d1c0903bab0ef97febfa3c27cf820c89.sock debug=false pid=1678
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.703189263Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1980b36e17f2fa019851afe74fc5aa9eb09794046ffd7b50958c362d97e84789.sock debug=false pid=1792
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.737713517Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ed31bcc2b81462de1b4cb4beac02f3523b139381e5108722a4542d08cd65af74.sock debug=false pid=1812
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.744742173Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7caced97bdb36a1e1b3ce62578f116e80e67ddf42b972319e4eca704895954e7.sock debug=false pid=1816
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.807439251Z" level=info msg="shim containerd-shim started" address=/containerd-shim/5728ee147335539cf9df33e3b00b12e483f1509ed05a304c74678fc716f5577b.sock debug=false pid=1857
Mar 28 09:39:02 minikube dockerd[504]: time="2020-03-28T09:39:02.916872491Z" level=info msg="shim containerd-shim started" address=/containerd-shim/24a986dcd63f3e37c38c2c107317fbdf1c8ce7e07bcb585f612e2cf01aebd65a.sock debug=false pid=2789
Mar 28 09:39:02 minikube dockerd[504]: time="2020-03-28T09:39:02.931445705Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b2b3e6902b8faa44b57ddfa15cc1d9f3d0ce1291dcb0ef81f4b2070e892bec4f.sock debug=false pid=2801
Mar 28 09:39:03 minikube dockerd[504]: time="2020-03-28T09:39:03.216035019Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e4c32ac895b091f0ccce28c9858af808357da9f0b3fadc45e52027e9ca6879fe.sock debug=false pid=2870
Mar 28 09:39:03 minikube dockerd[504]: time="2020-03-28T09:39:03.222698158Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8914f15eee61ec2de24cfdfaedc3030d73f03c0ac25e0312ccc2a32ad4c33c94.sock debug=false pid=2877
Mar 28 09:39:04 minikube dockerd[504]: time="2020-03-28T09:39:04.482886617Z" level=info msg="shim containerd-shim started" address=/containerd-shim/db9e6b306529b7ca4eb500ec6526f1ab873573fa4ffb479b9e0f996a0317c350.sock debug=false pid=3028
Mar 28 09:39:04 minikube dockerd[504]: time="2020-03-28T09:39:04.496947832Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4e07753b3bfdf494c4716a9dee58ce1d977097b62ebeb156cd53472627303828.sock debug=false pid=3042
Mar 28 09:39:05 minikube dockerd[504]: time="2020-03-28T09:39:05.230774083Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0a5c77d6ccd1e294eaf68bd0b2d099e3dbdc6deed40557272a4da59d4d91153.sock debug=false pid=3168
Mar 28 09:39:05 minikube dockerd[504]: time="2020-03-28T09:39:05.247032243Z" level=info msg="shim containerd-shim started" address=/containerd-shim/612e0f178c637f33c8b4ba422e1c47b1259dc110cf28d04653ed18c31bf77777.sock debug=false pid=3183
Mar 28 09:39:10 minikube dockerd[504]: time="2020-03-28T09:39:10.798701611Z" level=info msg="shim containerd-shim started" address=/containerd-shim/472c518db7516c39702637337526ca18e652db0d56f5d5ec434794da2c84f2c7.sock debug=false pid=3400
Mar 28 09:39:10 minikube dockerd[504]: time="2020-03-28T09:39:10.968559127Z" level=info msg="shim containerd-shim started" address=/containerd-shim/322dc68c84414a30ec572d1dbb12cd7a7a10dba45c9a5a3fae59239daf0dfae9.sock debug=false pid=3430
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0ef5a4adebde8 4689081edb103 2 minutes ago Running storage-provisioner 0 f3b145f1a0d95
d93a9441a43ca 67da37a9a360e 2 minutes ago Running coredns 0 0198de6532fe9
77658c66a2efd 67da37a9a360e 2 minutes ago Running coredns 0 7ddc004dc5f41
86eb97648b187 aa67fec7d7ef7 2 minutes ago Running kindnet-cni 0 1b95dcb8b2d6c
2586b1911767a 43940c34f24f3 2 minutes ago Running kube-proxy 0 b1b4292126068
222d9cc57a5f9 303ce5db0e90d 2 minutes ago Running etcd 0 a6fb8318314b0
b5f8ff08f2df6 d3e55153f52fb 2 minutes ago Running kube-controller-manager 0 22ff38cb17bf3
ed551651020b2 a31f78c7c8ce1 2 minutes ago Running kube-scheduler 0 11ca70aa0d5cc
9f597719254a9 74060cea7f704 2 minutes ago Running kube-apiserver 0 bbf9f4c1f7304
==> coredns [77658c66a2ef] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
==> coredns [d93a9441a43c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b
==> describe nodes <== Name: minikube Roles: master Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2020_03_28T10_38_44_0700 minikube.k8s.io/version=v1.9.0 node-role.kubernetes.io/master= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Sat, 28 Mar 2020 09:38:40 +0000 Taints: <none> Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: <unset> RenewTime: Sat, 28 Mar 2020 09:41:14 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message
MemoryPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:54 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 172.17.0.2 Hostname: minikube Capacity: cpu: 4 ephemeral-storage: 61255492Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 6091056Ki pods: 110 Allocatable: cpu: 4 ephemeral-storage: 56453061334 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 5988656Ki pods: 110 System Info: Machine ID: ea7877d5ddb44eccbd34b60333376efb System UUID: bbadecaf-4cdf-4bdd-984c-81ac84fa3b6f Boot ID: ae5d8a5d-3015-4e39-9ee4-7d2967aed3a7 Kernel Version: 4.19.76-linuxkit OS Image: Ubuntu 19.10 Operating System: linux Architecture: amd64 Container Runtime Version: docker://19.3.2 Kubelet Version: v1.18.0 Kube-Proxy Version: v1.18.0 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (9 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
kube-system coredns-66bff467f8-txjxd 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 2m14s kube-system coredns-66bff467f8-zs5ms 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 2m14s kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m31s kube-system kindnet-zbng4 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 2m14s kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 2m31s kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 2m31s kube-system kube-proxy-t5bhd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m14s kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 2m31s kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m5s Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits
cpu 850m (21%) 100m (2%) memory 190Mi (3%) 390Mi (6%) ephemeral-storage 0 (0%) 0 (0%) hugepages-1Gi 0 (0%) 0 (0%) hugepages-2Mi 0 (0%) 0 (0%) Events: Type Reason Age From Message
Normal Starting 2m41s kubelet, minikube Starting kubelet. Warning ImageGCFailed 2m41s kubelet, minikube failed to get imageFs info: unable to find data in memory cache Normal NodeHasSufficientMemory 2m41s (x3 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 2m41s (x3 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m41s (x2 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 2m41s kubelet, minikube Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 2m31s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory Normal Starting 2m31s kubelet, minikube Starting kubelet. Normal NodeHasNoDiskPressure 2m31s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 2m31s kubelet, minikube Node minikube status is now: NodeHasSufficientPID Normal NodeNotReady 2m31s kubelet, minikube Node minikube status is now: NodeNotReady Normal NodeAllocatableEnforced 2m31s kubelet, minikube Updated Node Allocatable limit across pods Normal NodeReady 2m21s kubelet, minikube Node minikube status is now: NodeReady Normal Starting 2m12s kube-proxy, minikube Starting kube-proxy.
==> dmesg <==
[Mar28 08:30] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.001161] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.003167] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.001123] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.051801] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.014755] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.000851] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.612184] i8042: Can't read CTR while initializing i8042
[ +0.000833] i8042: probe of i8042 failed with error -5
[ +0.001747] ata1.00: ATA Identify Device Log not supported
[ +0.000001] ata1.00: Security Log not supported
[ +0.003449] ata1.00: ATA Identify Device Log not supported
[ +0.001311] ata1.00: Security Log not supported
[ +0.002588] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.001368] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.210797] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.019778] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +3.721949] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.074232] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
==> etcd [222d9cc57a5f] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-28 09:38:36.208919 I | etcdmain: etcd Version: 3.4.3
2020-03-28 09:38:36.209078 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-28 09:38:36.209195 I | etcdmain: Go Version: go1.12.12
2020-03-28 09:38:36.209255 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-28 09:38:36.209393 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-28 09:38:36.209975 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-28 09:38:36.211017 I | embed: name = minikube
2020-03-28 09:38:36.211148 I | embed: data dir = /var/lib/minikube/etcd
2020-03-28 09:38:36.211228 I | embed: member dir = /var/lib/minikube/etcd/member
2020-03-28 09:38:36.211300 I | embed: heartbeat = 100ms
2020-03-28 09:38:36.211370 I | embed: election = 1000ms
2020-03-28 09:38:36.211439 I | embed: snapshot count = 10000
2020-03-28 09:38:36.211515 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-03-28 09:38:36.220648 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/03/28 09:38:36 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-03-28 09:38:36.234756 W | auth: simple token is not cryptographically signed
2020-03-28 09:38:36.238881 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-03-28 09:38:36.241727 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-28 09:38:36.242053 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-28 09:38:36.242451 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-28 09:38:36.243368 I | embed: listening for peers on 172.17.0.2:2380
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-03-28 09:38:36.244290 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/03/28 09:38:36 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-03-28 09:38:36.826449 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-03-28 09:38:36.826758 I | embed: ready to serve client requests
2020-03-28 09:38:36.827032 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-28 09:38:36.831215 I | embed: ready to serve client requests
2020-03-28 09:38:36.832483 I | embed: serving client requests on 127.0.0.1:2379
2020-03-28 09:38:36.833318 I | embed: serving client requests on 172.17.0.2:2379
2020-03-28 09:38:36.841334 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-28 09:38:36.897047 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-28 09:38:57.492383 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:506" took too long (218.835202ms) to execute
2020-03-28 09:38:57.492418 W | etcdserver: read-only range request "key:"/registry/minions/" range_end:"/registry/minions0" " with result "range_response_count:1 size:5167" took too long (140.980503ms) to execute
2020-03-28 09:39:00.090050 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/replicaset-controller" " with result "range_response_count:1 size:210" took too long (189.603111ms) to execute
2020-03-28 09:39:02.522064 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:480" took too long (389.983443ms) to execute
2020-03-28 09:39:02.522122 W | etcdserver: read-only range request "key:"/registry/minions/minikube" " with result "range_response_count:1 size:5394" took too long (373.291648ms) to execute
2020-03-28 09:39:02.522417 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (189.432827ms) to execute
2020-03-28 09:39:02.726565 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:286" took too long (146.232111ms) to execute
2020-03-28 09:39:10.751531 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:480" took too long (148.581724ms) to execute
==> kernel <==
09:41:17 up 1:10, 0 users, load average: 0.36, 0.68, 0.55
Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kube-apiserver [9f597719254a] <==
W0328 09:38:38.881376 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.892702 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.910286 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.920234 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.944454 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.958766 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0328 09:38:38.958808 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0328 09:38:38.965492 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0328 09:38:38.965531 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0328 09:38:38.966920 1 client.go:361] parsed scheme: "endpoint"
I0328 09:38:38.966967 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0328 09:38:38.975166 1 client.go:361] parsed scheme: "endpoint"
I0328 09:38:38.975215 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 <nil> 0 <nil>}]
I0328 09:38:40.781098 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0328 09:38:40.781162 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0328 09:38:40.781400 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0328 09:38:40.781806 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0328 09:38:40.781798 1 secure_serving.go:178] Serving securely on [::]:8443
I0328 09:38:40.782100 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0328 09:38:40.782196 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0328 09:38:40.782108 1 available_controller.go:387] Starting AvailableConditionController
I0328 09:38:40.782341 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0328 09:38:40.782624 1 controller.go:81] Starting OpenAPI AggregationController
I0328 09:38:40.785075 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0328 09:38:40.785105 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0328 09:38:40.785133 1 autoregister_controller.go:141] Starting autoregister controller
I0328 09:38:40.785137 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0328 09:38:40.785171 1 crd_finalizer.go:266] Starting CRDFinalizer
E0328 09:38:40.789020 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0328 09:38:40.819119 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0328 09:38:40.819191 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0328 09:38:40.819472 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0328 09:38:40.819500 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0328 09:38:40.822145 1 controller.go:86] Starting OpenAPI controller
I0328 09:38:40.822188 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0328 09:38:40.822208 1 naming_controller.go:291] Starting NamingConditionController
I0328 09:38:40.822219 1 establishing_controller.go:76] Starting EstablishingController
I0328 09:38:40.822227 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0328 09:38:40.822263 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0328 09:38:40.894207 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0328 09:38:40.894972 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0328 09:38:40.895687 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0328 09:38:40.895713 1 cache.go:39] Caches are synced for autoregister controller
I0328 09:38:40.922665 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0328 09:38:41.782050 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0328 09:38:41.782432 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0328 09:38:41.792862 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0328 09:38:41.800677 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0328 09:38:41.800800 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0328 09:38:42.189109 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0328 09:38:42.231574 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0328 09:38:42.351592 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0328 09:38:42.352935 1 controller.go:606] quota admission added evaluator for: endpoints
I0328 09:38:42.357931 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0328 09:38:43.617131 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0328 09:38:43.632632 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0328 09:38:43.811242 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0328 09:38:43.853944 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0328 09:39:01.274275 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0328 09:39:01.557048 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
==> kube-controller-manager [b5f8ff08f2df] <== I0328 09:39:01.150476 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps I0328 09:39:01.150488 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges I0328 09:39:01.150555 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling I0328 09:39:01.150597 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io I0328 09:39:01.150639 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch I0328 09:39:01.150706 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io I0328 09:39:01.150768 1 controllermanager.go:533] Started âresourcequotaâ I0328 09:39:01.150841 1 resource_quota_controller.go:272] Starting resource quota controller I0328 09:39:01.150851 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0328 09:39:01.150865 1 resource_quota_monitor.go:303] QuotaMonitor running I0328 09:39:01.151290 1 shared_informer.go:223] Waiting for caches to sync for garbage collector I0328 09:39:01.191963 1 shared_informer.go:230] Caches are synced for ReplicationController I0328 09:39:01.196572 1 shared_informer.go:230] Caches are synced for HPA I0328 09:39:01.199685 1 shared_informer.go:230] Caches are synced for expand I0328 09:39:01.199812 1 shared_informer.go:230] Caches are synced for job I0328 09:39:01.203085 1 shared_informer.go:230] Caches are synced for stateful set I0328 09:39:01.215034 1 shared_informer.go:230] Caches are synced for namespace I0328 09:39:01.231468 1 shared_informer.go:230] Caches are synced for bootstrap_signer I0328 09:39:01.238290 1 shared_informer.go:230] Caches are synced for PVC protection I0328 09:39:01.266152 1 shared_informer.go:230] Caches are synced for PV protection I0328 09:39:01.271528 1 shared_informer.go:230] Caches are synced for deployment I0328 09:39:01.276712 1 event.go:278] Event(v1.ObjectReference{Kind:âDeploymentâ, Namespace:âkube-systemâ, Name:âcorednsâ, UID:â6123c7f4-a62d-47b1-b8cd-a3effade9814â, APIVersion:âapps/v1â, ResourceVersion:â190â, FieldPath:ââ}): type: âNormalâ reason: âScalingReplicaSetâ Scaled up replica set coredns-66bff467f8 to 2 I0328 09:39:01.296555 1 shared_informer.go:230] Caches are synced for service account I0328 09:39:01.307552 1 shared_informer.go:230] Caches are synced for ReplicaSet I0328 09:39:01.315671 1 event.go:278] Event(v1.ObjectReference{Kind:âReplicaSetâ, Namespace:âkube-systemâ, Name:âcoredns-66bff467f8â, UID:â10138dac-8a85-4fbb-bd97-5c96bcc1d5f4â, APIVersion:âapps/v1â, ResourceVersion:â359â, FieldPath:ââ}): type: âNormalâ reason: âSuccessfulCreateâ Created pod: coredns-66bff467f8-zs5ms I0328 09:39:01.327367 1 event.go:278] Event(v1.ObjectReference{Kind:âReplicaSetâ, Namespace:âkube-systemâ, Name:âcoredns-66bff467f8â, UID:â10138dac-8a85-4fbb-bd97-5c96bcc1d5f4â, APIVersion:âapps/v1â, ResourceVersion:â359â, FieldPath:ââ}): type: âNormalâ reason: âSuccessfulCreateâ Created pod: coredns-66bff467f8-txjxd W0328 09:39:01.504571 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName=âminikubeâ does not exist I0328 09:39:01.506102 1 shared_informer.go:230] Caches are synced for persistent volume I0328 09:39:01.512313 1 shared_informer.go:230] Caches are synced for attach 
detach I0328 09:39:01.521994 1 shared_informer.go:230] Caches are synced for GC I0328 09:39:01.546440 1 shared_informer.go:230] Caches are synced for daemon sets I0328 09:39:01.547305 1 shared_informer.go:230] Caches are synced for taint I0328 09:39:01.547346 1 taint_manager.go:187] Starting NoExecuteTaintManager I0328 09:39:01.547372 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: W0328 09:39:01.547585 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp. I0328 09:39:01.547636 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal. I0328 09:39:01.547664 1 event.go:278] Event(v1.ObjectReference{Kind:âNodeâ, Namespace:ââ, Name:âminikubeâ, UID:â2671bd9f-a9f6-40b1-b197-00a5c4bc9afaâ, APIVersion:âv1â, ResourceVersion:ââ, FieldPath:ââ}): type: âNormalâ reason: âRegisteredNodeâ Node minikube event: Registered Node minikube in Controller I0328 09:39:01.573408 1 event.go:278] Event(v1.ObjectReference{Kind:âDaemonSetâ, Namespace:âkube-systemâ, Name:âkindnetâ, UID:âa6e8ed72-f57f-4d84-ae0d-9bb4902e06c1â, APIVersion:âapps/v1â, ResourceVersion:â216â, FieldPath:ââ}): type: âNormalâ reason: âSuccessfulCreateâ Created pod: kindnet-zbng4 I0328 09:39:01.595325 1 shared_informer.go:230] Caches are synced for node I0328 09:39:01.595362 1 range_allocator.go:172] Starting range CIDR allocator I0328 09:39:01.595368 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator I0328 09:39:01.595374 1 shared_informer.go:230] Caches are synced for cidrallocator I0328 09:39:01.599304 1 shared_informer.go:230] Caches are synced for certificate-csrapproving I0328 09:39:01.612334 1 event.go:278] Event(v1.ObjectReference{Kind:âDaemonSetâ, Namespace:âkube-systemâ, Name:âkube-proxyâ, UID:â12557541-b7da-4b49-aca4-116336c7452aâ, APIVersion:âapps/v1â, ResourceVersion:â200â, FieldPath:ââ}): type: âNormalâ reason: âSuccessfulCreateâ Created pod: kube-proxy-t5bhd I0328 09:39:01.612403 1 shared_informer.go:230] Caches are synced for TTL I0328 09:39:01.634595 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24] I0328 09:39:01.703091 1 shared_informer.go:230] Caches are synced for certificate-csrsigning I0328 09:39:01.704942 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator I0328 09:39:01.795124 1 shared_informer.go:230] Caches are synced for disruption I0328 09:39:01.795180 1 disruption.go:339] Sending events to api server. 
E0328 09:39:01.801453 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:ââ, APIVersion:ââ}, ObjectMeta:v1.ObjectMeta{Name:âkube-proxyâ, GenerateName:ââ, Namespace:âkube-systemâ, SelfLink:â/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxyâ, UID:â12557541-b7da-4b49-aca4-116336c7452aâ, ResourceVersion:â200â, Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720985123, loc:(*time.Location)(0x6d021e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{âk8s-appâ:âkube-proxyâ}, Annotations:map[string]string{âdeprecated.daemonset.template.generationâ:â1â}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:ââ, ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:âkubeadmâ, Operation:âUpdateâ, APIVersion:âapps/v1â, Time:(*v1.Time)(0xc00012d660), FieldsType:âFieldsV1â, FieldsV1:(*v1.FieldsV1)(0xc00012d680)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00012d6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:ââ, GenerateName:ââ, Namespace:ââ, SelfLink:ââ, UID:ââ, ResourceVersion:ââ, Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{âk8s-appâ:âkube-proxyâ}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:ââ, ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:âkube-proxyâ, VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00092fb80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:âxtables-lockâ, VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00012d6e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:âlib-modulesâ, VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00012d720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:âkube-proxyâ, Image:âk8s.gcr.io/kube-proxy:v1.18.0â, Command:[]string{â/usr/local/bin/kube-proxyâ, ââconfig=/var/lib/kube-proxy/config.confâ, ââhostname-override=$(NODE_NAME)â}, Args:[]string(nil), WorkingDir:ââ, Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:âNODE_NAMEâ, Value:ââ, ValueFrom:(*v1.EnvVarSource)(0xc00012d7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:âkube-proxyâ, ReadOnly:false, MountPath:â/var/lib/kube-proxyâ, SubPath:ââ, MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:ââ}, v1.VolumeMount{Name:âxtables-lockâ, ReadOnly:false, MountPath:â/run/xtables.lockâ, SubPath:ââ, MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:ââ}, v1.VolumeMount{Name:âlib-modulesâ, ReadOnly:true, MountPath:â/lib/modulesâ, SubPath:ââ, MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:ââ}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:â/dev/termination-logâ, TerminationMessagePolicy:âFileâ, ImagePullPolicy:âIfNotPresentâ, SecurityContext:(*v1.SecurityContext)(0xc000d82f50), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), 
RestartPolicy:âAlwaysâ, TerminationGracePeriodSeconds:(*int64)(0xc0002c7608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:âClusterFirstâ, NodeSelector:map[string]string{âkubernetes.io/osâ:âlinuxâ}, ServiceAccountName:âkube-proxyâ, DeprecatedServiceAccount:âkube-proxyâ, AutomountServiceAccountToken:(*bool)(nil), NodeName:ââ, HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000380fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:ââ, Subdomain:ââ, Affinity:(*v1.Affinity)(nil), SchedulerName:âdefault-schedulerâ, Tolerations:[]v1.Toleration{v1.Toleration{Key:âCriticalAddonsOnlyâ, Operator:âExistsâ, Value:ââ, Effect:ââ, TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:ââ, Operator:âExistsâ, Value:ââ, Effect:ââ, TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:âsystem-node-criticalâ, Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:âRollingUpdateâ, RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000130de8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0002c7688)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps âkube-proxyâ: the object has been modified; please apply your changes to the latest version and try again I0328 09:39:01.845728 1 shared_informer.go:230] Caches are synced for endpoint_slice I0328 09:39:01.846387 1 shared_informer.go:230] Caches are synced for endpoint I0328 09:39:01.851925 1 shared_informer.go:230] Caches are synced for garbage collector I0328 09:39:01.893238 1 shared_informer.go:230] Caches are synced for resource quota I0328 09:39:01.893785 1 shared_informer.go:230] Caches are synced for garbage collector I0328 09:39:01.893819 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0328 09:39:02.298773 1 request.go:621] Throttling request took 1.049085992s, request: GET:https://172.17.0.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s I0328 09:39:02.900126 1 shared_informer.go:223] Waiting for caches to sync for resource quota I0328 09:39:02.900160 1 shared_informer.go:230] Caches are synced for resource quota
==> kube-proxy [2586b1911767] <==
W0328 09:39:03.448258 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0328 09:39:03.503576 1 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0328 09:39:03.503631 1 server_others.go:186] Using iptables Proxier.
I0328 09:39:03.504727 1 server.go:583] Version: v1.18.0
I0328 09:39:03.505230 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0328 09:39:03.505319 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0328 09:39:03.505370 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0328 09:39:03.513398 1 config.go:133] Starting endpoints config controller
I0328 09:39:03.513417 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0328 09:39:03.514926 1 config.go:315] Starting service config controller
I0328 09:39:03.514943 1 shared_informer.go:223] Waiting for caches to sync for service config
I0328 09:39:03.614774 1 shared_informer.go:230] Caches are synced for endpoints config
I0328 09:39:03.615772 1 shared_informer.go:230] Caches are synced for service config
==> kube-scheduler [ed551651020b] <==
I0328 09:38:36.243873 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:36.244008 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:37.231503 1 serving.go:313] Generated self-signed cert in-memory
W0328 09:38:40.828334 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0328 09:38:40.828417 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0328 09:38:40.828635 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0328 09:38:40.828696 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0328 09:38:40.845392 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:40.845895 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0328 09:38:40.894162 1 authorization.go:47] Authorization is disabled
W0328 09:38:40.894676 1 authentication.go:40] Authentication is disabled
I0328 09:38:40.894756 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0328 09:38:40.897686 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0328 09:38:40.898054 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:40.898235 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:40.898395 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0328 09:38:40.903863 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.904204 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0328 09:38:40.907931 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 09:38:40.911011 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 09:38:40.911580 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0328 09:38:40.911702 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.911592 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.912164 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0328 09:38:40.914913 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0328 09:38:40.915280 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0328 09:38:40.915816 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0328 09:38:40.916147 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0328 09:38:40.916721 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0328 09:38:40.917885 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0328 09:38:40.922104 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0328 09:38:40.925482 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0328 09:38:40.928720 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.929954 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0328 09:38:42.898728 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:43.801875 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0328 09:38:43.821643 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Sat 2020-03-28 09:38:06 UTC, end at Sat 2020-03-28 09:41:19 UTC. --
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.430356 2272 kubelet_node_status.go:73] Successfully registered node minikube
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.529587 2272 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-03-28 09:38:44.529562942 +0000 UTC m=+0.925323260 LastTransitionTime:2020-03-28 09:38:44.529562942 +0000 UTC m=+0.925323260 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Mar 28 09:38:44 minikube kubelet[2272]: E0328 09:38:44.555198 2272 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697420 2272 cpu_manager.go:184] [cpumanager] starting with none policy
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697465 2272 cpu_manager.go:185] [cpumanager] reconciling every 10s
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697491 2272 state_mem.go:36] [cpumanager] initializing new in-memory state store
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697718 2272 state_mem.go:88] [cpumanager] updated default cpuset: ""
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697728 2272 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697738 2272 policy_none.go:43] [cpumanager] none policy: Start
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.700460 2272 plugin_manager.go:114] Starting Kubelet Plugin Manager
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.955720 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.958117 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.960885 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.962897 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.997813 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998150 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-ca-certs") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998373 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998649 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-ca-certs") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998813 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998973 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999215 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-data") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999363 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-k8s-certs") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999570 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999727 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-certs") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999812 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-kubeconfig") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999844 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5795d0c442cb997ff93c49feeb9f6386-kubeconfig") pod "kube-scheduler-minikube" (UID: "5795d0c442cb997ff93c49feeb9f6386")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999874 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999909 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999935 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-k8s-certs") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999950 2272 reconciler.go:157] Reconciler: start to sync state
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.597134 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:01 minikube kubelet[2272]: E0328 09:39:01.605430 2272 reflector.go:178] object-"kube-system"/"kindnet-token-w6gbc": Failed to list *v1.Secret: secrets "kindnet-token-w6gbc" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.641195 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.693902 2272 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.694922 2272 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695410 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-cni-cfg") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695474 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-xtables-lock") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695568 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b1bfa933-8399-4f53-be86-778ea4871a4a-xtables-lock") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695597 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zpx5k" (UniqueName: "kubernetes.io/secret/b1bfa933-8399-4f53-be86-778ea4871a4a-kube-proxy-token-zpx5k") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695618 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-lib-modules") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695636 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-w6gbc" (UniqueName: "kubernetes.io/secret/df0ed0d2-969e-4c12-bb72-404a6ae006ee-kindnet-token-w6gbc") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695718 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b1bfa933-8399-4f53-be86-778ea4871a4a-lib-modules") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695738 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âkube-proxyâ (UniqueName: âkubernetes.io/configmap/b1bfa933-8399-4f53-be86-778ea4871a4a-kube-proxyâ) pod âkube-proxy-t5bhdâ (UID: âb1bfa933-8399-4f53-be86-778ea4871a4aâ) Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.696398 2272 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24 Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.836266 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.843999 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915611 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âcoredns-token-lmhfzâ (UniqueName: âkubernetes.io/secret/106b0f2b-83c2-4405-8fef-aba1656f343e-coredns-token-lmhfzâ) pod âcoredns-66bff467f8-zs5msâ (UID: â106b0f2b-83c2-4405-8fef-aba1656f343eâ) Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915696 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âconfig-volumeâ (UniqueName: âkubernetes.io/configmap/e22968ac-8af5-4353-afcb-d00802f3155b-config-volumeâ) pod âcoredns-66bff467f8-txjxdâ (UID: âe22968ac-8af5-4353-afcb-d00802f3155bâ) Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915716 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âcoredns-token-lmhfzâ (UniqueName: âkubernetes.io/secret/e22968ac-8af5-4353-afcb-d00802f3155b-coredns-token-lmhfzâ) pod âcoredns-66bff467f8-txjxdâ (UID: âe22968ac-8af5-4353-afcb-d00802f3155bâ) Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915732 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âconfig-volumeâ (UniqueName: âkubernetes.io/configmap/106b0f2b-83c2-4405-8fef-aba1656f343e-config-volumeâ) pod âcoredns-66bff467f8-zs5msâ (UID: â106b0f2b-83c2-4405-8fef-aba1656f343eâ) Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.027343 2272 pod_container_deletor.go:77] Container â7ddc004dc5f41d3e1f1203e4e7c605885ef6e16f7d9bb604d4a9eeab84256611â not found in podâs containers Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.033256 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldnât find network status for kube-system/coredns-66bff467f8-txjxd through plugin: invalid network status for Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.118346 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldnât find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.120140 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldnât find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.121366 2272 pod_container_deletor.go:77] Container â0198de6532fe9fdd52523dbd5c64272b5a9e72adbd45424c05fad27584b74b5bâ not found in podâs containers Mar 28 09:39:06 minikube kubelet[2272]: W0328 09:39:06.127617 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldnât find network status for kube-system/coredns-66bff467f8-txjxd through plugin: invalid network status for Mar 28 09:39:06 minikube kubelet[2272]: W0328 09:39:06.131999 2272 
docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldnât find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.065362 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.240134 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âtmpâ (UniqueName: âkubernetes.io/host-path/9dfbd932-f851-468e-8db0-b89b0a244254-tmpâ) pod âstorage-provisionerâ (UID: â9dfbd932-f851-468e-8db0-b89b0a244254â) Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.240563 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume âstorage-provisioner-token-4qdt2â (UniqueName: âkubernetes.io/secret/9dfbd932-f851-468e-8db0-b89b0a244254-storage-provisioner-token-4qdt2â) pod âstorage-provisionerâ (UID: â9dfbd932-f851-468e-8db0-b89b0a244254â)
==> storage-provisioner [0ef5a4adebde] <==
The operating system version:
macOS 10.14.6
To make things easier for Windows users, see this link for the Windows version of the patch @tstromberg mentioned.
http://storage.googleapis.com/minikube-builds/7310/minikube-windows-amd64.exe
To apply the patch, replace your existing minikube.exe with the patched binary linked above.
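If it helps, one way to do the swap from a terminal (just a sketch; it assumes curl is available, as on recent Windows 10, and that your minikube.exe lives in C:\minikube, so adjust the path to your install):
> curl.exe -Lo C:\minikube\minikube.exe http://storage.googleapis.com/minikube-builds/7310/minikube-windows-amd64.exe
> minikube version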
@cwansart - Sorry, I started working on a PR before I saw your reply. You could really help me by confirming whether this binary fixes your issue:
http://storage.googleapis.com/minikube-builds/7310/minikube-darwin-amd64
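For anyone trying this, a minimal way to test the build without touching an existing install (a sketch only; the minikube-test filename is arbitrary, and it reuses the cluster you already started):
$ curl -Lo minikube-test http://storage.googleapis.com/minikube-builds/7310/minikube-darwin-amd64
$ chmod +x minikube-test
$ ./minikube-test tunnel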
My apologies for the bug. I wrote the PR without considering its implications on Docker for macOS, and our macOS integration testing hosts are broken and require physical intervention, something not easily possible due to the quarantine.