k3s: Fresh k3s v1.23.3+k3s1 airgap install, can't fetch mirrored-pause:3.6
Environmental Info: K3s Version: k3s version v1.23.3+k3s1 (5fb370e5) go version go1.17.5
Node(s) CPU architecture, OS, and Version: Linux node-a0 5.15.33-v8+ #1 SMP PREEMPT Thu Apr 14 09:47:24 CDT 2022 aarch64 GNU/Linux Custom debootstrap Debian build with custom Linux 5.15.33-v8+ on a Raspberry Pi 4
Cluster Configuration: Airgapped cluster composed of 9 Raspberry Pi 4 nodes named node-a0 through node-a9. node-a0, -a5, and -a9 are HA servers; the rest are agents. All are network booted using DHCP and ATA over Ethernet for their root storage, served from another Intel-based system called diskful-aa, which also runs a Docker registry with all of the airgap images loaded. The non-k3s network is 192.168.88.0/24, with diskful-aa on 192.168.88.254.
Installation of k3s on server node-a0 has been done using the k3s install script and k3s binary stored on disk as /usr/local/bin/install.sh and /usr/local/bin/k3s, respectively, and installed like this:
```shell
export INSTALL_K3S_EXEC=server
export INSTALL_K3S_SKIP_DOWNLOAD=true
export K3S_TOKEN=xxxxxx
/usr/local/bin/install.sh --cluster-init
```
Nodes are not configured with public routing or public DNS resolution. After reading through a number of similar issues, I’ve come up with the following registries.yaml configuration on the server. Note that 192.168.88.254 is diskful-aa.local and hosts the registry:
```yaml
---
mirrors:
  "docker.io":
    endpoint:
      - "https://192.168.88.254:5000"
  "*":
    endpoint:
      - "https://192.168.88.254:5000"
  "":
    endpoint:
      - "https://192.168.88.254:5000"
  "192.168.88.254":
    endpoint:
      - "https://192.168.88.254:5000"
configs:
  "192.168.88.254:5000":
    tls:
      insecure_skip_verify: true
```
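As a sanity check that the mirror itself answers, I probed it from a node using the standard Docker registry v2 API paths (a sketch; `-k` matches the `insecure_skip_verify` setting above, since the registry uses a self-signed certificate):

```shell
# List repositories the mirror knows about (standard registry v2 API):
curl -sk https://192.168.88.254:5000/v2/_catalog

# Confirm the pause image the kubelet wants is actually present:
curl -sk https://192.168.88.254:5000/v2/rancher/mirrored-pause/tags/list
```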
Describe the bug:
After running the installation script for k3s, the binaries are set up appropriately, the k3s kubelet starts, and I can query it using sudo kubectl get pods, but none of the pods manage to start. Instead, I see the following output from kubectl describe -n kube-system pod/coredns-5789895cd-9dbnn:
...
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 3m25s (x12071 over 43h) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "rancher/mirrored-pause:3.6": failed to pull image "rancher/mirrored-pause:3.6": failed to pull and unpack image "docker.io/rancher/mirrored-pause:3.6": failed to resolve reference "docker.io/rancher/mirrored-pause:3.6": failed to do request: Head "https://registry-1.docker.io/v2/rancher/mirrored-pause/manifests/3.6": dial tcp: lookup registry-1.docker.io on [::1]:53: read udp [::1]:44097->[::1]:53: read: connection refused
It looks like, despite registries.yaml rewriting docker.io, *, and "" to https://192.168.88.254:5000, containerd is still trying to fetch from registry-1.docker.io.
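For reference, if the mirror were being consulted, the pull would hit the standard registry v2 manifest path on the configured endpoint rather than registry-1.docker.io. A sketch of the URL I would expect containerd to try, built from the image name in the error above:

```shell
# Sketch: the manifest URL containerd should request from the configured
# mirror for the failing image (standard Docker registry v2 API layout).
ENDPOINT="https://192.168.88.254:5000"
IMAGE="rancher/mirrored-pause"
TAG="3.6"
MANIFEST_URL="${ENDPOINT}/v2/${IMAGE}/manifests/${TAG}"
echo "${MANIFEST_URL}"
# → https://192.168.88.254:5000/v2/rancher/mirrored-pause/manifests/3.6
```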
Steps To Reproduce:
- Install K3s using the script and environment variables shown above
- Verify the system is running using kubectl get pods --all-namespaces
Expected behavior:
- I should see all pods in the Running state.
Actual behavior:
- Pods are stuck in the ContainerCreating state.
Additional context / logs: Logs continue to spew the same event over and over: registry-1.docker.io is unresolvable (which is fine – it’s not supposed to be). Happy to attach more from the logs, though I’m not sure they’ll be all that useful.
Backporting
Does not need backporting – happy to upgrade to latest, if necessary.
About this issue
- State: closed
- Created 2 years ago
- Comments: 16 (5 by maintainers)
Yeah, as the containerd docs mention, it will always fall back to the default endpoint. Unfortunately the CRI API is bad about bubbling up multiple errors, so all you’ll see in crictl or the kubelet logs is the terminal failure when it can’t pull from the default docker.io endpoint. To see whether or not it’s failing to pull from your registry you need to look at the actual containerd logs.
At this point I would probably go look at the logs for your registry mirror and see what it has to say.
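For anyone landing here, a sketch of where those logs live (paths assume a default k3s install; the registry container name on diskful-aa is a guess — adjust to your setup):

```shell
# containerd's own log on a k3s node (default k3s install path),
# where per-endpoint pull failures are actually recorded:
tail -n 50 /var/lib/rancher/k3s/agent/containerd/containerd.log

# On diskful-aa, assuming the registry runs as a Docker container
# named "registry" (hypothetical name):
docker logs --tail 50 registry
```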