minikube: Error downloading kic artifacts

What Happened?

Steps to reproduce

  • install Podman and minikube
brew install minikube && \
brew install podman
  • initialize and start a Podman machine
podman machine init --cpus 2 --memory 2048 --disk-size 20 && \
podman machine start
  • launch minikube
minikube start --driver=podman --container-runtime=cri-o

Note

See issue https://github.com/kubernetes/minikube/issues/8426: closed at the time of writing, but still active. cc @medyagh

Attach the log file

Running `minikube logs --file=log.txt` also produced an error; please see the logs below.

logs
➜  ~ minikube start --driver=podman --container-runtime=cri-o

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on user configuration
📌  Using rootless Podman driver
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
    > gcr.io/k8s-minikube/kicbase: 347.17 MiB / 347.17 MiB  100.00% 994.77 KiB
E0728 13:14:52.529962    8931 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1956MB) ...
🎁  Preparing Kubernetes v1.24.1 on CRI-O 1.22.5 ...
❌  Unable to load cached images: loading cached images: CRI-O load /var/lib/minikube/images/kube-scheduler_v1.24.1: crio load image: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.1: Process exited with status 125
stdout:

stderr:
Getting image source signatures
Copying blob sha256:5306e7faf8268aaedf84b04cf4c418b33d4969bcea13e27c8717f62c13d31ddb
Copying blob sha256:798afb9dcee7e7c858b6f109d8bb3ea6d10081493703a6b77b46d388c38aa8f7
Copying blob sha256:88768122a4ad689aed8daafaa8f3a3877cd1df861c753d5456382c8635db0540
Error: payload does not match any of the supported image formats (oci, oci-archive, dir, docker-archive)

    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 42.50 MiB / 42.50 MiB [-------------] 100.00% 868.60 KiB p/s 50s
    > kubeadm: 41.38 MiB / 41.38 MiB [-----------] 100.00% 443.87 KiB p/s 1m36s
    > kubelet: 107.50 MiB / 107.50 MiB [---------] 100.00% 819.78 KiB p/s 2m14s
💢  initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:20:40.925851    1643 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:22:45Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:22:45Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:24:31Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:24:31Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:25:32Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:25:31Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


💣  Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:25:32.604331    2228 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:27:42Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:27:41Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:29:50Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:29:50Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:30:59Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:30:59Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

โŒ  Exiting due to GUEST_START: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.24.1
[preflight] Running pre-flight checks
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 5.18.13-200.fc36.aarch64
CONFIG_NAMESPACES: enabled
CONFIG_NET_NS: enabled
CONFIG_PID_NS: enabled
CONFIG_IPC_NS: enabled
CONFIG_UTS_NS: enabled
CONFIG_CGROUPS: enabled
CONFIG_CGROUP_CPUACCT: enabled
CONFIG_CGROUP_DEVICE: enabled
CONFIG_CGROUP_FREEZER: enabled
CONFIG_CGROUP_PIDS: enabled
CONFIG_CGROUP_SCHED: enabled
CONFIG_CPUSETS: enabled
CONFIG_MEMCG: enabled
CONFIG_INET: enabled
CONFIG_EXT4_FS: enabled
CONFIG_PROC_FS: enabled
CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled (as module)
CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled (as module)
CONFIG_FAIR_GROUP_SCHED: enabled
CONFIG_OVERLAY_FS: enabled (as module)
CONFIG_AUFS_FS: not set - Required for aufs.
CONFIG_BLK_DEV_DM: enabled
CONFIG_CFS_BANDWIDTH: enabled
CONFIG_CGROUP_HUGETLB: not set - Required for hugetlb cgroup.
CONFIG_SECCOMP: enabled
CONFIG_SECCOMP_FILTER: enabled
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUSET: missing
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
CGROUPS_PIDS: enabled
CGROUPS_HUGETLB: missing
CGROUPS_BLKIO: missing
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
W0728 10:25:32.604331    2228 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
	[WARNING SystemVerification]: missing optional cgroups: hugetlb blkio
	[WARNING SystemVerification]: missing required cgroups: cpuset
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.24.1: output: time="2022-07-28T10:27:42Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:941012a58e853f459b4a6d213c5111d63a8ab7fe3304b674e01b68f2ff711668\": Error processing tar file(exit status 1): time=\"2022-07-28T10:27:41Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.24.1: output: time="2022-07-28T10:29:50Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:8f1bb484a5bdbc9f272a925739ec9c8e2531e99cbbeb50839d40dbe5d76c4525\": Error processing tar file(exit status 1): time=\"2022-07-28T10:29:50Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
	[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.24.1: output: time="2022-07-28T10:30:59Z" level=fatal msg="pulling image: rpc error: code = Unknown desc = writing blob: adding layer with blob \"sha256:fc9023a4184c8fcc87922134bedae831ef48feb26d368413324d8c2f20d7c71a\": Error processing tar file(exit status 1): time=\"2022-07-28T10:30:59Z\" level=warning msg=\"Failed to decode the keys [\\\"machine\\\"] from \\\"/usr/share/containers/containers.conf\\\".\"\noperation not permitted"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

➜  ~ minikube logs --file=logs.txt
E0728 13:32:15.129839    9421 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:

stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"

โ—  unable to fetch logs for: describe nodes
โžœ  ~ neofetch
                    'c.          yakforward@Francescos-MacBook-Air.local
                 ,xNMM.          ---------------------------------------
               .OMMMMo           OS: macOS 12.5 21G72 arm64
               OMMM0,            Host: MacBookAir10,1
     .;loddo:' loolloddol;.      Kernel: 21.6.0
   cKMMMMMMMMMMNWMMMMMMMMMM0:    Uptime: 5 hours, 54 mins
 .KMMMMMMMMMMMMMMMMMMMMMMMWd.    Packages: 48 (brew)
 XMMMMMMMMMMMMMMMMMMMMMMMX.      Shell: zsh 5.8.1
;MMMMMMMMMMMMMMMMMMMMMMMM:       Resolution: 1440x900
:MMMMMMMMMMMMMMMMMMMMMMMM:       DE: Aqua
.MMMMMMMMMMMMMMMMMMMMMMMMX.      WM: Quartz Compositor
 kMMMMMMMMMMMMMMMMMMMMMMMMWd.    WM Theme: Blue (Dark)
 .XMMMMMMMMMMMMMMMMMMMMMMMMMMk   Terminal: iTerm2
  .XMMMMMMMMMMMMMMMMMMMMMMMMK.   Terminal Font: Monaco 12
    kMMMMMMMMMMMMMMMMMMMMMMd     CPU: Apple M1
     ;KMMMMMMMWXXWMMMMMMMk.      GPU: Apple M1
       .cooc,.    .,coo:.        Memory: 1522MiB / 8192MiB





➜  ~ podman --version
podman version 4.1.1

Operating System

macOS (Default)

Driver

Podman

About this issue

  • Original URL
  • State: open
  • Created 2 years ago
  • Reactions: 7
  • Comments: 16 (1 by maintainers)

Most upvoted comments

@afbjorklund

Loading images from cache is just a bonus; it is supposed to be able to pull them from the registry otherwise.

  1. preload
  2. cache
  3. registry
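A quick way to see which of those three sources is actually in play is to look at minikube's local cache and force a registry pull. The paths and the `--preload` flag below are what recent minikube versions use on my machine, so treat them as assumptions and check them against your own version:

```shell
# 1. preload: a single tarball with the core images baked in
ls -lh ~/.minikube/cache/preloaded-tarball/

# 2. cache: individually cached image archives, loaded into the node with `podman load`
ls -lh ~/.minikube/cache/images/

# 3. registry: skip the preload so the images come straight from the registry
minikube start --driver=podman --container-runtime=cri-o --preload=false
```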

Can you do something simple like `podman pull k8s.gcr.io/pause:3.7`? It seems to be failing inside cri-o…

➜  ~ podman pull k8s.gcr.io/pause:3.7
Trying to pull k8s.gcr.io/pause:3.7...
Getting image source signatures
Copying blob sha256:aff472d3f83edbbc738d035ea53108fcb1e10564aaf0c8d3d6576a02cc2a5679
Copying blob sha256:aff472d3f83edbbc738d035ea53108fcb1e10564aaf0c8d3d6576a02cc2a5679
Copying config sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550
Writing manifest to image destination
Storing signatures
e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550

Actually I have no idea if it works with rootless podman; previously it was recommended to use the regular (rootful) one:

podman machine init --cpus 2 --memory 2048 --disk-size 20 --rootful

https://minikube.sigs.k8s.io/docs/drivers/podman/

Apparently rootless podman only works with containerd
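If you want to try the rootful route without recreating the machine, podman 4.x can flip an existing machine in place. `podman machine set --rootful` is my assumption of the relevant command here (it exists in the podman 4.1 line), so double-check it against your version:

```shell
# `set` only works on a stopped machine
podman machine stop

# flip the existing machine to rootful instead of re-initializing it
podman machine set --rootful
podman machine start

# then point minikube at the (now rootful) podman again
minikube start --driver=podman --container-runtime=cri-o
```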

There are three experimental things in play at the same time here:

  1. Podman Desktop (machine)
  2. Rootless podman (driver)
  3. Rootless cri-o (runtime)

I'm trying to narrow down the issue.

I deleted and created a new Podman machine, rootful this time, and tried to run the various "flavours" indicated here. No luck… and a "worse"(?) error:

rootful podman
podman machine init --cpus 2 --memory 2048 --disk-size 20 --rootful
rootful
 minikube start --driver=podman

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0728 16:13:54.027788   20879 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


โŒ  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
rootful with cri-o
 minikube start --driver=podman --container-runtime=cri-o

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E0728 16:11:08.909266   20809 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


โŒ  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯

rootful with containerd
➜  ~ minikube start --driver=podman --container-runtime=containerd

😄  minikube v1.26.0 on Darwin 12.5 (arm64)
✨  Using the podman (experimental) driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.24.1 preload ...
    > preloaded-images-k8s-v18-v1...: 411.49 MiB / 411.49 MiB  100.00% 2.87 MiB
E0728 16:17:24.815749   20940 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔄  Restarting existing podman container for "minikube" ...
🤦  StartHost failed, but will try again: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"

🔄  Restarting existing podman container for "minikube" ...
😿  Failed to start podman container. Running "minikube delete" may fix it: podman inspect ip minikube: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


โŒ  Exiting due to GUEST_PROVISION: Failed to start host: podman inspect ip minikube: podman container inspect -f  minikube: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                           │
│    😿  If the above advice does not help, please let us know:                             │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                           │
│                                                                                           │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                           │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
cannot generate logs
➜  ~ minikube logs --file=logs.txt

❌  Exiting due to GUEST_STATUS: state: unknown state "minikube": podman container inspect minikube --format=: exit status 125
stdout:

stderr:
Error: inspecting object: no such container "minikube"


╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    😿  If the above advice does not help, please let us know:                                                         │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                       │
│                                                                                                                       │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    Please also attach the following file to the GitHub issue:                                                         │
│    - /var/folders/9v/4dpzzrw56m1glmbj5zl6xlvm0000gn/T/minikube_logs_8f6474a291f68fa61b92987d0579232b5754d600_0.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

It seems to expect the minikube container to be there already, but it isn't:

➜  ~ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES

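Since the profile refers to a container that no longer exists, a full reset usually clears this stale state. This is a sketch of the cleanup I would try; the flags are standard minikube/podman ones and `podman-machine-default` is the default machine name, but verify both on your setup:

```shell
# drop all minikube profiles plus the cached state that remembers the old container
minikube delete --all --purge

# recreate the podman machine from scratch, rootful this time
podman machine rm --force podman-machine-default
podman machine init --cpus 2 --memory 2048 --disk-size 20 --rootful
podman machine start

# start fresh so minikube creates a new container instead of "restarting" a missing one
minikube start --driver=podman --container-runtime=cri-o
```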
I've uninstalled and reinstalled the whole thing with Homebrew, but nothing has changed.

Actually I have no idea if it works with rootless podman; previously it was recommended to use the regular (rootful) one:

podman system connection default podman-machine-default-root

https://minikube.sigs.k8s.io/docs/drivers/podman/

Apparently rootless podman only works with containerd