minikube: arm64 + docker driver: --container-runtime=containerd: ctr: content digest sha256:x: not found (reproducible)
On macOS (arm64), running:
minikube delete --all --purge
minikube start --driver=docker --container-runtime=containerd
with minikube v1.18.1 results in image-load failures across multiple runs. In this latest run it was the storage-provisioner image; in another it was coredns.
Altogether, it feels like a race condition of some sort.
📦 Preparing Kubernetes v1.20.2 on containerd 1.4.3 ...
❌ Unable to load cached images: loading cached images: containerd load /var/lib/minikube/images/storage-provisioner_v4: ctr images import: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v4: Process exited with status 1
stdout:
unpacking gcr.io/k8s-minikube/storage-provisioner:v4 (sha256:aac8e78f2bcc7fc7a1f5f9d8a84c2b8e6915da43e3a03a7bc5d25036b9452ebb)...
stderr:
ctr: content digest sha256:1c7d4f85af5d300a03d4b29ff618046447e136412853db455955545953c7ac78: not found
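To double-check, the missing digest can be queried against containerd's content store from inside the node. A rough sketch, assuming the docker driver (where minikube ssh opens a shell in the node container):

minikube ssh
# inside the node: list the images containerd knows about,
# then search the content store for the digest from the error above
sudo ctr -n k8s.io images ls
sudo ctr -n k8s.io content ls | grep 1c7d4f85

If the blob is genuinely missing, the second command returns nothing.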
Ultimately, the cluster is broken:
> kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
> kubectl: 35.44 MiB / 35.44 MiB [---------------] 100.00% 15.76 MiB p/s 2s
> kubeadm: 34.50 MiB / 34.50 MiB [----------------] 100.00% 7.97 MiB p/s 4s
> kubelet: 100.57 MiB / 100.57 MiB [-------------] 100.00% 14.39 MiB p/s 7s
▪ Generating certificates and keys ...
▪ Booting up control plane ...
💢 initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'
stderr:
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
▪ Generating certificates and keys ...
▪ Booting up control plane ...
About this issue
- State: closed
- Created 3 years ago
- Comments: 21 (9 by maintainers)
Welcome to the community, but please try to keep your responses related to the actual issue and not too off-topic
LOL I’m sorry I didn’t see one of your replies above (9 days ago) and pull request… Jesus you’re weeks ahead. Thanks for the help, assistance and guidance Anders as usual. We’re lucky to have you
It used to be very fiddly and verbose; you had to supply every little configuration setting yourself:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime
--container-runtime=cri-o --extra-config=kubelet.container-runtime=remote --extra-config=kubelet.container-runtime-endpoint=/var/run/crio/crio.sock --extra-config=kubelet.image-service-endpoint=/var/run/crio/crio.sock

Now we added sane defaults, so that when you choose a runtime it changes the socket too:

--container-runtime=cri-o
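Spelled out as a full invocation, that is just (a sketch; the docker driver is assumed here to match the rest of the thread):

minikube start --driver=docker --container-runtime=cri-o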
While it was optional in 1.14, using --cri-socket will become required in the future (1.23):
https://kubernetes.io/blog/2020/12/02/dockershim-faq/
This is a very interesting read: Initializing your control-plane node
It is for me, at least. I cannot tell what the difference would be between --container-runtime and --cri-socket. I mean, I didn't try, but apparently both flags are available for minikube start, so I'm confused… As for the multiple runs, that seems to be related to the CNI… I don't know, I'm just saying… Check the Optional points in that link.
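For what it's worth, my reading of the comment above is that the two flags work at different levels: --container-runtime picks which runtime minikube sets up (and now also selects the matching socket), while --cri-socket overrides the CRI endpoint explicitly. A hedged sketch, using the containerd socket path that appears in the logs above:

# runtime selected, socket chosen automatically
minikube start --driver=docker --container-runtime=containerd

# same thing with the socket spelled out explicitly
minikube start --driver=docker --container-runtime=containerd \
  --cri-socket=/run/containerd/containerd.sock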
About the cache, there is an interesting note, given that arm64 users are encouraged to use the none driver.
Confirmed that using both --preload=false and --cache-images=false fixes it (on arm64). I didn't think it would be broken on the server side, especially after dropping the previous arch names…
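Putting the confirmed workaround into one command (a sketch; it just adds the two flags to the reproduction command from the top of the thread):

minikube start --driver=docker --container-runtime=containerd \
  --preload=false --cache-images=false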
https://github.com/kubernetes/minikube/issues/9762#issue-748032215
Looks like https://github.com/kubernetes/minikube/issues/10402
I tested the same command on macOS (Big Sur, 11.2.2) and could not reproduce it. I'm using a regular (Intel) Mac, not an M1.
I agree this issue comes down to Docker on the M1. https://kubernetes.slack.com/archives/C1F5CT6Q1/p1615963898008500?thread_ts=1615951102.003000&cid=C1F5CT6Q1
I think the workaround will have to be running in a VM, until Docker is out of Preview…
Since we don’t have any ISO yet, that would have to use a user-provided virtual machine.
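One way to do that today is the ssh driver against a machine you bring yourself. A rough sketch, assuming an arm64 Linux VM already reachable over SSH; the address, user, and key path below are placeholders:

# hypothetical VM details; substitute your own
minikube start --driver=ssh \
  --ssh-ip-address=192.168.64.10 \
  --ssh-user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --container-runtime=containerd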
At first I thought this might be a race condition relating to preload & running purge beforehand, but I'm also seeing similar issues simply running:
minikube start --driver=docker --container-runtime=containerd
(Forgive me if this is a dupe issue.)