kubeadm: Error during kubeadm init - addon phase with CoreDNS

What keywords did you search in kubeadm issues before filing this one?

coredns, addons, troubleshooting

Is this a BUG REPORT or FEATURE REQUEST?

BUG REPORT

kubeadm init --control-plane-endpoint=k8s-haproxy:6443 --cri-socket=unix:///var/run/containerd/containerd.sock --upload-certs --v=5
I0525 03:53:40.842427   22667 interface.go:432] Looking for default routes with IPv4 addresses
I0525 03:53:40.842449   22667 interface.go:437] Default route transits interface "enp1s0"
I0525 03:53:40.842550   22667 interface.go:209] Interface enp1s0 is up
I0525 03:53:40.842576   22667 interface.go:257] Interface "enp1s0" has 4 addresses :[x.x.x.75/25 2001:12f0:601:a94d:b333:5c87:3638:d40f/64 2001:12f0:601:a94d:8f00:de38:4b19:d702/64 fe80::1c9e:ab1d:2560:d583/64].
I0525 03:53:40.842587   22667 interface.go:224] Checking addr  x.x.x.75/25.
I0525 03:53:40.842592   22667 interface.go:231] IP found x.x.x.75
I0525 03:53:40.842597   22667 interface.go:263] Found valid IPv4 address x.x.x.75 for interface "enp1s0".
I0525 03:53:40.842602   22667 interface.go:443] Found active IP x.x.x.75 
I0525 03:53:40.842613   22667 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0525 03:53:40.845410   22667 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
I0525 03:53:42.532471   22667 checks.go:570] validating Kubernetes and kubeadm version
I0525 03:53:42.532517   22667 checks.go:170] validating if the firewall is enabled and active
I0525 03:53:42.545091   22667 checks.go:205] validating availability of port 6443
I0525 03:53:42.545189   22667 checks.go:205] validating availability of port 10259
I0525 03:53:42.545204   22667 checks.go:205] validating availability of port 10257
I0525 03:53:42.545219   22667 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0525 03:53:42.545226   22667 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0525 03:53:42.545231   22667 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0525 03:53:42.545235   22667 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0525 03:53:42.545239   22667 checks.go:432] validating if the connectivity type is via proxy or direct
I0525 03:53:42.545249   22667 checks.go:471] validating http connectivity to first IP address in the CIDR
I0525 03:53:42.545258   22667 checks.go:471] validating http connectivity to first IP address in the CIDR
I0525 03:53:42.545262   22667 checks.go:106] validating the container runtime
I0525 03:53:42.551628   22667 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0525 03:53:42.551671   22667 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0525 03:53:42.551694   22667 checks.go:646] validating whether swap is enabled or not
I0525 03:53:42.551716   22667 checks.go:372] validating the presence of executable crictl
I0525 03:53:42.551734   22667 checks.go:372] validating the presence of executable conntrack
I0525 03:53:42.551742   22667 checks.go:372] validating the presence of executable ip
I0525 03:53:42.551751   22667 checks.go:372] validating the presence of executable iptables
I0525 03:53:42.551761   22667 checks.go:372] validating the presence of executable mount
I0525 03:53:42.551769   22667 checks.go:372] validating the presence of executable nsenter
I0525 03:53:42.551778   22667 checks.go:372] validating the presence of executable ebtables
I0525 03:53:42.551786   22667 checks.go:372] validating the presence of executable ethtool
I0525 03:53:42.551794   22667 checks.go:372] validating the presence of executable socat
I0525 03:53:42.551803   22667 checks.go:372] validating the presence of executable tc
I0525 03:53:42.551810   22667 checks.go:372] validating the presence of executable touch
I0525 03:53:42.551819   22667 checks.go:518] running all checks
	[WARNING SystemVerification]: missing optional cgroups: blkio
I0525 03:53:42.558121   22667 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0525 03:53:42.558132   22667 checks.go:612] validating kubelet version
I0525 03:53:42.596489   22667 checks.go:132] validating if the "kubelet" service is enabled and active
I0525 03:53:42.605557   22667 checks.go:205] validating availability of port 10250
I0525 03:53:42.605605   22667 checks.go:205] validating availability of port 2379
I0525 03:53:42.605626   22667 checks.go:205] validating availability of port 2380
I0525 03:53:42.605647   22667 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0525 03:53:42.605749   22667 checks.go:834] using image pull policy: IfNotPresent
I0525 03:53:42.613318   22667 checks.go:843] image exists: k8s.gcr.io/kube-apiserver:v1.24.0
I0525 03:53:42.619417   22667 checks.go:843] image exists: k8s.gcr.io/kube-controller-manager:v1.24.0
I0525 03:53:42.625903   22667 checks.go:843] image exists: k8s.gcr.io/kube-scheduler:v1.24.0
I0525 03:53:42.632075   22667 checks.go:843] image exists: k8s.gcr.io/kube-proxy:v1.24.0
I0525 03:53:42.639333   22667 checks.go:843] image exists: k8s.gcr.io/pause:3.7
I0525 03:53:42.645225   22667 checks.go:843] image exists: k8s.gcr.io/etcd:3.5.3-0
I0525 03:53:42.650989   22667 checks.go:843] image exists: k8s.gcr.io/coredns/coredns:v1.8.6
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0525 03:53:42.651034   22667 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0525 03:53:42.745983   22667 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-haproxy k8s-ufmg-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 x.x.x.75]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0525 03:53:42.924577   22667 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0525 03:53:42.997336   22667 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0525 03:53:43.058714   22667 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0525 03:53:43.142842   22667 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-ufmg-master01 localhost] and IPs [x.x.x.75 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-ufmg-master01 localhost] and IPs [x.x.x.75 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0525 03:53:43.670227   22667 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0525 03:53:43.752555   22667 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0525 03:53:43.819945   22667 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0525 03:53:44.168894   22667 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0525 03:53:44.228372   22667 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0525 03:53:44.278241   22667 kubelet.go:65] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0525 03:53:44.516612   22667 manifests.go:99] [control-plane] getting StaticPodSpecs
I0525 03:53:44.516972   22667 certs.go:522] validating certificate period for CA certificate
I0525 03:53:44.517125   22667 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0525 03:53:44.517148   22667 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-apiserver"
I0525 03:53:44.517161   22667 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0525 03:53:44.517174   22667 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-apiserver"
I0525 03:53:44.517190   22667 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-apiserver"
I0525 03:53:44.521585   22667 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0525 03:53:44.521621   22667 manifests.go:99] [control-plane] getting StaticPodSpecs
I0525 03:53:44.522013   22667 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0525 03:53:44.522038   22667 manifests.go:125] [control-plane] adding volume "etc-ca-certificates" for component "kube-controller-manager"
I0525 03:53:44.522052   22667 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0525 03:53:44.522065   22667 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0525 03:53:44.522082   22667 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0525 03:53:44.522097   22667 manifests.go:125] [control-plane] adding volume "usr-local-share-ca-certificates" for component "kube-controller-manager"
I0525 03:53:44.522115   22667 manifests.go:125] [control-plane] adding volume "usr-share-ca-certificates" for component "kube-controller-manager"
I0525 03:53:44.523482   22667 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0525 03:53:44.523512   22667 manifests.go:99] [control-plane] getting StaticPodSpecs
I0525 03:53:44.523892   22667 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0525 03:53:44.524752   22667 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0525 03:53:44.525901   22667 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0525 03:53:44.525926   22667 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0525 03:53:45.529278   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:46.531165   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:47.533108   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 3 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:48.535184   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 4 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:49.536948   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:50.538124   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:51.539599   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:52.541122   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://k8s-haproxy:6443/healthz?timeout=10s
I0525 03:53:53.542833   22667 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://k8s-haproxy:6443/healthz?timeout=10s
[apiclient] All control plane components are healthy after 13.056677 seconds
I0525 03:53:57.584614   22667 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0525 03:53:57.630862   22667 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0525 03:53:57.743024   22667 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I0525 03:53:57.743060   22667 patchnode.go:31] [patchnode] Uploading the CRI Socket information "unix:///var/run/containerd/containerd.sock" to the Node API object "k8s-ufmg-master01" as an annotation
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a1f9bc34ce92921fc6a765b6d345de3313359b5ba71a1d897d8fa89a6ae07ed7
[mark-control-plane] Marking the node k8s-ufmg-master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-ufmg-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: ecg7dx.zxkrg8zgrnf36qxp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0525 03:53:59.981195   22667 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0525 03:53:59.982340   22667 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0525 03:53:59.982931   22667 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0525 03:54:00.075030   22667 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0525 03:54:00.130802   22667 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0525 03:54:00.135259   22667 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
rpc error: code = Unknown desc = malformed header: missing HTTP content-type
unable to create a new DNS service
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createDNSService
	cmd/kubeadm/app/phases/addons/dns/dns.go:247
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.createCoreDNSAddon
	cmd/kubeadm/app/phases/addons/dns/dns.go:233
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.coreDNSAddon
	cmd/kubeadm/app/phases/addons/dns/dns.go:135
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/dns.EnsureDNSAddon
	cmd/kubeadm/app/phases/addons/dns/dns.go:94
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runCoreDNSAddon
	cmd/kubeadm/app/cmd/phases/init/addons.go:93
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1571
error execution phase addon/coredns
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
	cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
	cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
	cmd/kubeadm/app/kubeadm.go:50
main.main
	cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:250
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1571

Versions

kubeadm version (use kubeadm version): kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version): Client Version: v1.24.0 | Kustomize Version: v4.5.4

  • Cloud provider or hardware configuration: dell optiplex 3070

  • OS (e.g. from /etc/os-release): Debian GNU/Linux 11 (bullseye)

  • Kernel (e.g. uname -a):5.10.0-14-amd64 #1 SMP Debian 5.10.113-1 (2022-04-29) x86_64 GNU/Linux

  • Container runtime (CRI) (e.g. containerd, cri-o): revision="1.4.13~ds1-1~deb11u1" version="1.4.13~ds1"

  • Container networking plugin (CNI) (e.g. Calico, Cilium):

  • Others:

What happened?

During kubeadm initialization I keep receiving the error shown in the log above.

What you expected to happen?

Initialization should complete without errors.

How to reproduce it (as minimally and precisely as possible)?

In a Debian environment, run kubeadm init with the tools and versions referenced above.

Anything else we need to know?

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 16 (5 by maintainers)

Most upvoted comments

I encountered the same problem on Ubuntu 22.04.

If '--skip-phases=addon/kube-proxy' is used, it does let the install complete. Give it like 40 seconds and then run

kubeadm init phase addon kube-proxy \
  --control-plane-endpoint="<ha-controlplane-loadbalancer>:6443" \
  --pod-network-cidr="<put your cidr here>"

to install the kube-proxy addon successfully. (retry if you need to wait a few more seconds) …
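
Once the addon phase completes, a quick sanity check (my own suggestion, not part of the original comment) is to confirm the kube-proxy DaemonSet pods come up:

# kubeadm labels its kube-proxy pods with k8s-app=kube-proxy
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide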

On CentOS Stream 9 I also had to copy the whole default containerd configuration and then modify the systemd cgroup line:

# make a copy of the default containerd configuration
containerd config default | sudo tee /etc/containerd/config.toml

# set containerd to use the systemd cgroup driver
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

# adjust the pause (sandbox) image to the one kubeadm expects
PAUSE_IMAGE=$(kubeadm config images list | grep pause)
sudo -E sed -i "s,sandbox_image = .*,sandbox_image = \"$PAUSE_IMAGE\",g" /etc/containerd/config.toml

# enable and restart the containerd service
sudo systemctl enable containerd
sudo systemctl restart containerd
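
As a follow-up check (a hedged suggestion, not from the original comment), the merged containerd configuration can be dumped after the restart to confirm the systemd cgroup driver is actually in effect:

# print the final merged containerd configuration and look for the cgroup setting
sudo containerd config dump | grep SystemdCgroup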

Same problem here on Ubuntu 22.04.

The error does come from HAProxy. To work around it, I skip the kube-proxy addon phase by using kubeadm init --skip-phases=addon/kube-proxy, as sketched below.
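
For reference, a minimal sketch of the full command, assuming the same control-plane endpoint and CRI socket used in the report above:

# same init command as in the report, with only the kube-proxy addon phase skipped
kubeadm init \
  --control-plane-endpoint=k8s-haproxy:6443 \
  --cri-socket=unix:///var/run/containerd/containerd.sock \
  --upload-certs \
  --skip-phases=addon/kube-proxy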

The kube-proxy / containerd procedure quoted above worked for me on Ubuntu 22.04 Server.

Additionally, I had to clean up the Flannel CNI config files (/etc/cni/net.d/*flannel*) to clear previous configurations, and also flush old iptables rules; see the sketch below.

Kubernetes: 1.24.1
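
A rough sketch of that cleanup, assuming a default CNI configuration directory and that it is safe to flush the node's iptables rules:

# remove leftover Flannel CNI configuration from a previous cluster
sudo rm -f /etc/cni/net.d/*flannel*

# flush old iptables rules (filter, nat, mangle) left by the previous kube-proxy/CNI
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X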

I suspect it's a problem in the k8s-haproxy configuration.
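
For comparison, the kubeadm high-availability guide fronts the API server with a TCP-mode (TLS passthrough) HAProxy. A minimal sketch of such a configuration, using the endpoint and node names from this report (the backend address is an assumption; adjust to your environment):

# /etc/haproxy/haproxy.cfg (sketch): the API server must be proxied in TCP mode,
# not HTTP mode, so that TLS and client certificates pass through untouched
frontend kube-apiserver
    bind *:6443
    mode tcp
    option tcplog
    default_backend kube-apiserver-backend

backend kube-apiserver-backend
    mode tcp
    option tcp-check
    balance roundrobin
    server k8s-ufmg-master01 x.x.x.75:6443 check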