minikube: Failed to start minikube - failed to acquire bootstrap client lock: bad file descriptor
Steps to reproduce the issue:
- minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress
$ minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress --alsologtostderr
I0408 07:05:54.554411 4173 out.go:239] Setting OutFile to fd 1 …
I0408 07:05:54.554717 4173 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 07:05:54.554732 4173 out.go:252] Setting ErrFile to fd 2…
I0408 07:05:54.554739 4173 out.go:286] TERM=vt100,COLORTERM=, which probably does not support color
I0408 07:05:54.554900 4173 root.go:308] Updating PATH: /rhome/dadmmason/.minikube/bin
W0408 07:05:54.555235 4173 root.go:283] Error reading config file at /rhome/dadmmason/.minikube/config/config.json: open /rhome/dadmmason/.minikube/config/config.json: no such file or directory
I0408 07:05:54.568864 4173 out.go:246] Setting JSON to false
I0408 07:05:54.570570 4173 start.go:108] hostinfo: {"hostname":"ohdlawx0001.dev.mig.corp","uptime":1056,"bootTime":1617878898,"procs":247,"os":"linux","platform":"redhat","platformFamily":"rhel","platformVersion":"8.3","kernelVersion":"4.18.0-240.15.1.el8_3.x86_64","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"cee83a86-c71e-4c49-8bbe-e9a356a31865"}
I0408 07:05:54.570639 4173 start.go:118] virtualization:
I0408 07:05:54.574648 4173 out.go:129] * minikube v1.18.1 on Redhat 8.3
- minikube v1.18.1 on Redhat 8.3 I0408 07:05:54.575046 4173 notify.go:126] Checking for updates… I0408 07:05:54.575230 4173 driver.go:323] Setting default libvirt URI to qemu:///system I0408 07:05:54.631667 4173 docker.go:118] docker version: linux-20.10.5 I0408 07:05:54.631730 4173 cli_runner.go:115] Run: docker system info --format “{{json .}}” I0408 07:05:54.720395 4173 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 07:05:54.667907792 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.5.1-docker]] Warnings:<nil>}} I0408 07:05:54.720485 4173 docker.go:215] overlay module found I0408 07:05:54.725267 4173 out.go:129] * Using the docker driver based on user configuration
- Using the docker driver based on user configuration
I0408 07:05:54.725292 4173 start.go:276] selected driver: docker
I0408 07:05:54.725302 4173 start.go:718] validating driver “docker” against <nil>
I0408 07:05:54.725329 4173 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
W0408 07:05:54.725449 4173 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
W0408 07:05:54.725501 4173 out.go:191] ! Your cgroup does not allow setting memory.
! Your cgroup does not allow setting memory.
I0408 07:05:54.728750 4173 out.go:129] - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
- More information: https://docs.doInfo.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities I0408 07:05:54.729049 4173 cli_runner.go:115] Run: docker system info --format “{{json .}}” I0408 07:05:54.806350 4173 info.go:253] docker info: {ID:FBFI:BBUD:H4M2:Y6U4:SY7R:GT7R:RLC5:7733:BL4B:VBBP:LENX:NONN Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:33 SystemTime:2021-04-08 07:05:54.763151806 -0400 EDT LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.18.0-240.15.1.el8_3.x86_64 OperatingSystem:Red Hat Enterprise Linux 8.3 (Ootpa) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:12347408384 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ohdlawx0001.dev.mig.corp Labels:[] ExperimentalBuild:false ServerVersion:20.10.5 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05f951a3781f4f2c1911b05e61c160e9c30eaa8e Expected:05f951a3781f4f2c1911b05e61c160e9c30eaa8e} RuncCommit:{ID:12644e614e25b05da6fd08a38ffa0cfe1903fdec Expected:12644e614e25b05da6fd08a38ffa0cfe1903fdec} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. 
Version:v0.5.1-docker]] Warnings:<nil>}} I0408 07:05:54.806476 4173 start_flags.go:251] no existing cluster config was found, will generate one from the flags I0408 07:05:54.806598 4173 start_flags.go:696] Wait components to verify : map[apiserver:true system_pods:true] I0408 07:05:54.806625 4173 cni.go:74] Creating CNI manager for “” I0408 07:05:54.806635 4173 cni.go:140] CNI unnecessary in this configuration, recommending no CNI I0408 07:05:54.806644 4173 start_flags.go:395] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} I0408 07:05:54.810463 4173 out.go:129] * Starting control plane node minikube in cluster minikube
- Starting control plane node minikube in cluster minikube I0408 07:05:54.846194 4173 cache.go:120] Beginning downloading kic base image for docker with docker I0408 07:05:54.850304 4173 out.go:129] * Pulling base image …
- Pulling base image … I0408 07:05:54.850352 4173 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker I0408 07:05:54.850632 4173 cache.go:145] Downloading gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon I0408 07:05:54.850672 4173 image.go:140] Writing gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e to local daemon I0408 07:05:54.892802 4173 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 I0408 07:05:54.892821 4173 cache.go:54] Caching tarball of preloaded images I0408 07:05:54.892852 4173 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker I0408 07:05:54.931980 4173 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 I0408 07:05:54.934984 4173 out.go:129] * Downloading Kubernetes v1.20.2 preload …
- Downloading Kubernetes v1.20.2 preload …
I0408 07:05:54.935351 4173 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 -> /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
preloaded-images-k8s-v9-v1…: 491.22 MiB / 491.22 MiB 100.00% 35.61 Mi I0408 07:06:10.033058 4173 preload.go:160] saving checksum for preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 … I0408 07:06:10.190864 4173 preload.go:177] verifying checksumm of /rhome/dadmmason/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4 … I0408 07:06:11.981545 4173 cache.go:57] Finished verifying existence of preloaded tar for v1.20.2 on docker I0408 07:06:11.981946 4173 profile.go:148] Saving config to /rhome/dadmmason/.minikube/profiles/minikube/config.json … I0408 07:06:11.981985 4173 lock.go:36] WriteFile acquiring /rhome/dadmmason/.minikube/profiles/minikube/config.json: {Name:mkdd6468410fe3fb8a81afb70f8741815dfa701f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>} I0408 07:06:15.101380 4173 cache.go:148] successfully downloaded gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e I0408 07:06:15.101422 4173 cache.go:185] Successfully downloaded all kic artifacts I0408 07:06:15.101471 4173 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>} I0408 07:06:15.101687 4173 start.go:317] acquired machines lock for “minikube” in 196.097µs I0408 07:06:15.102185 4173 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0408 07:06:15.102283 4173 start.go:126] createHost starting for “” (driver=“docker”) I0408 07:06:15.109615 4173 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) …
- Creating docker container (CPUs=2, Memory=8192MB) …| I0408 07:06:15.109852 4173 start.go:160] libmachine.API.Create for “minikube” (driver=“docker”) I0408 07:06:15.109892 4173 client.go:168] LocalClient.Create starting I0408 07:06:15.110307 4173 client.go:171] LocalClient.Create took 405.321µs \ I0408 07:06:17.111158 4173 ssh_runner.go:149] Run: sh -c “df -h /var | awk ‘NR==2{print $5}’” I0408 07:06:17.111324 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube | W0408 07:06:17.160619 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:17.160767 4173 retry.go:31] will retry after 276.165072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube \ I0408 07:06:17.437396 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:17.481955 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:17.482064 4173 retry.go:31] will retry after 540.190908ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube / I0408 07:06:18.022550 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:18.065523 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:18.065624 4173 retry.go:31] will retry after 655.06503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube \ I0408 07:06:18.721180 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube | W0408 07:06:18.764668 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:18.764767 4173 retry.go:31] will retry after 791.196345ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube | I0408 07:06:19.556677 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:19.600683 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 W0408 07:06:19.600788 4173 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
W0408 07:06:19.600808 4173 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube I0408 07:06:19.600820 4173 start.go:129] duration metric: createHost completed in 4.498526015s I0408 07:06:19.600828 4173 start.go:80] releasing machines lock for “minikube”, held for 4.499126077s W0408 07:06:19.600850 4173 start.go:425] error starting host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor I0408 07:06:19.600913 4173 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}} / W0408 07:06:19.644039 4173 cli_runner.go:162] docker container inspect minikube --format={{.State.Status}} returned with exit code 1 I0408 07:06:19.644097 4173 delete.go:46] couldn’t inspect container “minikube” before deleting: unknown state “minikube”: docker container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: Error: No such container: minikube I0408 07:06:19.644300 4173 cli_runner.go:115] Run: sudo -n podman container inspect minikube --format={{.State.Status}} W0408 07:06:19.666438 4173 cli_runner.go:162] sudo -n podman container inspect minikube --format={{.State.Status}} returned with exit code 1 I0408 07:06:19.666501 4173 delete.go:46] couldn’t inspect container “minikube” before deleting: unknown state “minikube”: sudo -n podman container inspect minikube --format={{.State.Status}}: exit status 1 stdout:
stderr: sudo: podman: command not found W0408 07:06:19.666541 4173 start.go:430] delete host: Docker machine “minikube” does not exist. Use “docker-machine ls” to list machines. Use “docker-machine create” to add a new one. W0408 07:06:19.666678 4173 out.go:191] ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor ! StartHost failed, but will try again: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor I0408 07:06:19.666713 4173 start.go:440] Will try again in 5 seconds … I0408 07:06:24.667885 4173 start.go:313] acquiring machines lock for minikube: {Name:mke106008088022af601d1ad8a563b2b2afd8f7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>} I0408 07:06:24.668501 4173 start.go:317] acquired machines lock for “minikube” in 545.261µs I0408 07:06:24.668553 4173 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:8192 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true} I0408 07:06:24.668681 4173 start.go:126] createHost starting for “” (driver=“docker”) I0408 07:06:24.673516 4173 out.go:150] * Creating docker container (CPUs=2, Memory=8192MB) …
- Creating docker container (CPUs=2, Memory=8192MB) …I0408 07:06:24.673668 4173 start.go:160] libmachine.API.Create for “minikube” (driver=“docker”) I0408 07:06:24.673701 4173 client.go:168] LocalClient.Create starting | I0408 07:06:24.673831 4173 client.go:171] LocalClient.Create took 118.461µs \ I0408 07:06:26.674348 4173 ssh_runner.go:149] Run: sh -c “df -h /var | awk ‘NR==2{print $5}’” I0408 07:06:26.674416 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube | W0408 07:06:26.720037 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:26.720199 4173 retry.go:31] will retry after 231.159374ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube - I0408 07:06:26.952146 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube \ W0408 07:06:27.007420 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:27.007517 4173 retry.go:31] will retry after 445.058653ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube \ I0408 07:06:27.453344 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube | W0408 07:06:27.498271 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:27.498371 4173 retry.go:31] will retry after 318.170823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube \ I0408 07:06:27.816973 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:27.861005 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:27.861106 4173 retry.go:31] will retry after 553.938121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube / I0408 07:06:28.415952 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:28.459541 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 I0408 07:06:28.459655 4173 retry.go:31] will retry after 755.539547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube / I0408 07:06:29.216213 4173 cli_runner.go:115] Run: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube W0408 07:06:29.276618 4173 cli_runner.go:162] docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube returned with exit code 1 W0408 07:06:29.276727 4173 start.go:257] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
W0408 07:06:29.276743 4173 start.go:239] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for “minikube”: docker container inspect -f “‘{{(index (index .NetworkSettings.Ports “22/tcp”) 0).HostPort}}’” minikube: exit status 1 stdout:
stderr: Error: No such container: minikube
I0408 07:06:29.276755 4173 start.go:129] duration metric: createHost completed in 4.60806132s
I0408 07:06:29.276762 4173 start.go:80] releasing machines lock for "minikube", held for 4.608237876s
W0408 07:06:29.276912 4173 out.go:191] * Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
- Failed to start docker container. Running "minikube delete" may fix it: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
I0408 07:06:29.283409 4173 out.go:129]
W0408 07:06:29.283628 4173 out.go:191] X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
X Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: bootstrapping certificates: failed to acquire bootstrap client lock: %!v(MISSING) bad file descriptor
W0408 07:06:29.283689 4173 out.go:191] *
*
W0408 07:06:29.283730 4173 out.go:191] * If the above advice does not help, please let us know:
- If the above advice does not help, please let us know:
W0408 07:06:29.283775 4173 out.go:191] - https://github.com/kubernetes/minikube/issues/new/choose
- https://github.com/kubernetes/minikube/issues/new/choose
I0408 07:06:29.288454 4173 out.go:129]
Full output of failed command:
Full output of minikube start command used, if not already included:
Optional: Full output of minikube logs command:
About this issue
- State: closed
- Created 3 years ago
- Comments: 15 (3 by maintainers)
Yes, I think my NFS homedir caused this in my case. Setting MINIKUBE_HOME outside my homedir fixed it. (https://minikube.sigs.k8s.io/docs/handbook/config/#environment-variables)
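For anyone else hitting this, a minimal sketch of that workaround (the /var/tmp path is only an example; any local, non-NFS directory the user can write to should do):

```bash
# Optional: clean up any half-created state left under the old NFS location.
minikube delete --all --purge

# Assumption: /var/tmp is on a local (non-NFS) filesystem on this host.
# minikube keeps its profiles, certificates and lock files under
# MINIKUBE_HOME instead of $HOME/.minikube.
export MINIKUBE_HOME=/var/tmp/$USER/minikube
mkdir -p "$MINIKUBE_HOME"

# Re-run the original command with the relocated state directory.
minikube start --driver=docker --cpus=2 --memory=8g --addons=ingress
```

Adding the export to your shell profile makes it persist across sessions.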
@craustin @defurn is there a way that minikube could detect an NFS home dir, so at least we could relax the lock for NFS? Or suggest that the user change their homedir?
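Not sure what the right in-tree check would be, but as a rough sketch of that kind of detection from the shell (assumes GNU coreutils stat on Linux, which reports "nfs" for NFS mounts):

```bash
# Print the filesystem type backing the (effective) minikube home directory.
fstype=$(stat -f -c %T "${MINIKUBE_HOME:-$HOME}")

case "$fstype" in
  nfs*) echo "minikube home is on NFS ($fstype); file locking may fail here." ;;
  *)    echo "minikube home is on $fstype; locking should behave normally." ;;
esac
```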
Thank you! This was exactly the issue I was having, and I couldn't find anywhere how to fix it.