kubernetes: kubeadm init fails to create ConfigMap when --upload-certs is set

What happened?

kubeadm init fails to create the ConfigMap for the certs it generated. Reopening, since https://github.com/kubernetes/kubernetes/issues/112378 was closed incorrectly.

What did you expect to happen?

Kubeadm to create the ConfigMap with the certs.

How can we reproduce it (as minimally and precisely as possible)?

Tested with v1.24.4+:

kubeadm init --upload-certs
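
If an attempt fails, the node needs to be cleaned up before retrying. A minimal reset sequence (standard kubeadm cleanup; kubeadm reset itself notes that CNI config must be removed separately):

$ sudo kubeadm reset -f       # tear down the failed init, skipping the confirmation prompt
$ sudo rm -rf /etc/cni/net.d  # clear leftover CNI config before the next attempt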

Anything else we need to know?

I am trying to set this up in a multi-master environment behind a keepalived load balancer and have been running into issues. I stripped everything away and tried spinning up a single cluster by itself. It seems to work fine unless --upload-certs is set. The load balancer was NOT used when running these commands, as can be seen from the line:

[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.X.Y.Z]
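
The SANs on the generated serving certificate can also be checked directly (standard openssl invocation against the kubeadm default path):

$ sudo openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'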

I tested the same command with v1.25.0 and got the following error:

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
error execution phase addon/kube-proxy: error when creating kube-proxy service account: unable to create serviceaccount: Post "https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/serviceaccounts?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
To see the stack trace of this error execute with --v=5 or higher
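
That timeout suggests the apiserver was up but not answering within the client's 10s window. With default RBAC the health endpoints are readable anonymously, so reachability can be probed from the same host (-k skips certificate verification):

$ curl -k https://10.X.Y.Z:6443/healthz            # prints "ok" when the apiserver is serving
$ curl -k 'https://10.X.Y.Z:6443/readyz?verbose'   # per-check readiness breakdown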

However, when downgrading to v1.23.4, the command works fine:

$ sudo kubeadm init --kubernetes-version=1.23.4 --upload-certs
[init] Using Kubernetes version: v1.23.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.X.Y.Z]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [10.X.Y.Z 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [10.X.Y.Z 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.004669 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
d253c271b6ec45429f36ab9f7a0a2400ab40c5c0d111dc989c9d6f8e01147df9
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: hykbf0.1wt5m3x163ikkzbn
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

I’m switching versions by running:

sudo apt install kubelet=1.23.4-00 kubeadm=1.23.4-00

And specifying the version in init like:

sudo kubeadm init --kubernetes-version=1.23.4 --upload-certs
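
When pinning versions this way it may be worth holding all three packages so a background upgrade does not move them again (standard apt-mark usage; kubectl added for completeness):

$ sudo apt install kubelet=1.23.4-00 kubeadm=1.23.4-00 kubectl=1.23.4-00
$ sudo apt-mark hold kubelet kubeadm kubectl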

Logs when running the command on v1.24.4:

$ kubeadm init --upload-certs
I0911 10:21:51.765077   60419 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.24
[init] Using Kubernetes version: v1.24.4
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: missing optional cgroups: blkio
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.X.Y.Z]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [10.X.Y.Z 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [10.X.Y.Z 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.501712 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
c819cd316968e49d8e72e82b892209b6642ffa82b78a2acc2b044f8374d1bc0b
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 52i42g.a6su9cgidh3e3p2m
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
error execution phase addon/kube-proxy: unable to create ConfigMap: Post "https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s": dial tcp 10.X.Y.Z:6443: connect: connection refused
To see the stack trace of this error execute with --v=5 or higher

Failure section with --v=5 specified:

I0911 10:48:07.714160   66309 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I0911 10:48:07.715523   66309 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0911 10:48:07.716218   66309 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0911 10:48:07.722045   66309 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I0911 10:48:07.731558   66309 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0911 10:48:07.733598   66309 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I0911 10:48:08.510802   66309 request.go:533] Waited for 196.126523ms due to client-side throttling, not priority and fairness, request: POST:https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:09.511823   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 1 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:10.512721   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 2 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:11.514136   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 3 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:12.515948   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 4 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:13.517474   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 5 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:14.518850   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 6 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:15.520361   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 7 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:16.521647   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 8 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
I0911 10:48:17.523964   66309 with_retry.go:241] Got a Retry-After 1s response for attempt 9 to https://10.X.Y.Z:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s
client rate limiter Wait returned an error: context deadline exceeded
unable to create ConfigMap
k8s.io/kubernetes/cmd/kubeadm/app/util/apiclient.CreateOrUpdateConfigMap
        cmd/kubeadm/app/util/apiclient/idempotency.go:48
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy.createKubeProxyConfigMap
        cmd/kubeadm/app/phases/addons/proxy/proxy.go:131
k8s.io/kubernetes/cmd/kubeadm/app/phases/addons/proxy.EnsureProxyAddon
        cmd/kubeadm/app/phases/addons/proxy/proxy.go:55
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runKubeProxyAddon
        cmd/kubeadm/app/cmd/phases/init/addons.go:102
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571
error execution phase addon/kube-proxy
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
        cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
        cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571
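
Since only the addon/kube-proxy phase failed, it can in principle be retried on its own once the apiserver answers again (a standard kubeadm phase subcommand; --kubeconfig defaults to the admin config):

$ sudo kubeadm init phase addon kube-proxy --kubeconfig /etc/kubernetes/admin.conf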

Containerd:

$ sudo crictl pods
POD ID              CREATED             STATE               NAME                              NAMESPACE           ATTEMPT             RUNTIME
04cf957ac1531       3 seconds ago       Ready               kube-controller-manager-master01   kube-system         1                   (default)
d75cd5e76b434       3 seconds ago       Ready               kube-scheduler-master01            kube-system         1                   (default)
c44c151d4e90f       3 seconds ago       Ready               etcd-master01                      kube-system         1                   (default)
c53795cb8a65e       30 seconds ago      NotReady            etcd-master01                      kube-system         0                   (default)
ac67078ff714c       30 seconds ago      NotReady            kube-controller-manager-master01   kube-system         0                   (default)
2f57f8372bbdb       31 seconds ago      NotReady            kube-scheduler-master01            kube-system         0                   (default)
eb593dda9e84d       31 seconds ago      Ready               kube-apiserver-master01            kube-system         0                   (default)

$ sudo crictl ps --all
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
c36a6a43e7b6b       03fa22539fc1c       2 seconds ago       Running             kube-scheduler            102                 1550c8daef678       kube-scheduler-master01
2bfc762840428       1f99cb6da9a82       2 seconds ago       Running             kube-controller-manager   111                 068836e95b917       kube-controller-manager-master01
c6fba9b7e6041       6cab9d1bed1be       11 seconds ago      Running             kube-apiserver            94                  53d8241dd80e1       kube-apiserver-master01
2f3a471665ded       03fa22539fc1c       21 seconds ago      Exited              kube-scheduler            101                 d75cd5e76b434       kube-scheduler-master01
b834ee6967016       aebe758cef4cd       21 seconds ago      Running             etcd                      77                  c44c151d4e90f       etcd-master01
aecb73212a282       1f99cb6da9a82       21 seconds ago      Exited              kube-controller-manager   110                 04cf957ac1531       kube-controller-manager-master01
364e3c9ce89ed       aebe758cef4cd       49 seconds ago      Exited              etcd                      76                  c53795cb8a65e       etcd-master01
f621d11a4dae7       6cab9d1bed1be       49 seconds ago      Exited              kube-apiserver            93    
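
The exit output of the flapping control-plane containers can be pulled straight from the runtime (container IDs taken from the listing above):

$ sudo crictl logs f621d11a4dae7   # exited kube-apiserver container
$ sudo crictl logs 2f3a471665ded   # exited kube-scheduler container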

Pods and logs:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS             RESTARTS          AGE
kube-system   coredns-6d4b75cb6d-7v6wc           0/1     Pending            0                 6m45s
kube-system   coredns-6d4b75cb6d-rsl8p           0/1     Pending            0                 6m45s
kube-system   etcd-master01                      1/1     Running            77 (7m32s ago)    5m50s
kube-system   kube-apiserver-master01            1/1     Running            96 (4m31s ago)    7m34s
kube-system   kube-controller-manager-master01   1/1     Running            113 (2m10s ago)   6m58s
kube-system   kube-scheduler-master01            0/1     CrashLoopBackOff   105 (85s ago)     6m58s
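
The scheduler's last crash output is available from its previous container instance (standard kubectl flag):

$ kubectl logs -n kube-system kube-scheduler-master01 --previous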

$ kubectl describe pod kube-scheduler-master01 -n kube-system
Name:                 kube-scheduler-master01
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Node:                 master01/10.X.Y.Z
Start Time:           Sun, 11 Sep 2022 10:23:40 +0000
Labels:               component=kube-scheduler
                      tier=control-plane
Annotations:          kubernetes.io/config.hash: 6d24597c996cf56703484904e7c7e3e1
                      kubernetes.io/config.mirror: 6d24597c996cf56703484904e7c7e3e1
                      kubernetes.io/config.seen: 2022-09-11T10:23:40.351191982Z
                      kubernetes.io/config.source: file
                      seccomp.security.alpha.kubernetes.io/pod: runtime/default
Status:               Running
IP:                   10.X.Y.Z
IPs:
  IP:           10.X.Y.Z
Controlled By:  Node/master01
Containers:
  kube-scheduler:
    Container ID:  containerd://d4858467c1451c0f2bbe680133de5d7bddb8c39534c528fa594b61f21c6ea471
    Image:         k8s.gcr.io/kube-scheduler:v1.24.4
    Image ID:      k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2
    Port:          <none>
    Host Port:     <none>
    Command:
      kube-scheduler
      --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
      --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
      --bind-address=127.0.0.1
      --kubeconfig=/etc/kubernetes/scheduler.conf
      --leader-elect=true
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 11 Sep 2022 10:32:34 +0000
      Finished:     Sun, 11 Sep 2022 10:32:50 +0000
    Ready:          False
    Restart Count:  106
    Requests:
      cpu:        100m
    Liveness:     http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=8
    Startup:      http-get https://127.0.0.1:10259/healthz delay=10s timeout=15s period=10s #success=1 #failure=24
    Environment:  <none>
    Mounts:
      /etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  kubeconfig:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/kubernetes/scheduler.conf
    HostPathType:  FileOrCreate
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       :NoExecute op=Exists
Events:
  Type     Reason          Age                     From     Message
  ----     ------          ----                    ----     -------
  Normal   SandboxChanged  8m47s (x3 over 9m27s)   kubelet  Pod sandbox changed, it will be killed and re-created.
  Warning  BackOff         8m37s (x10 over 9m17s)  kubelet  Back-off restarting failed container
  Normal   Pulled          8m22s (x3 over 9m27s)   kubelet  Container image "k8s.gcr.io/kube-scheduler:v1.24.4" already present on machine
  Normal   Created         8m22s (x3 over 9m27s)   kubelet  Created container kube-scheduler
  Normal   Started         8m22s (x3 over 9m27s)   kubelet  Started container kube-scheduler
  Normal   Killing         3m41s (x5 over 9m48s)   kubelet  Stopping container kube-scheduler
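
The Completed/exit-code-0 last state together with the repeated Killing and SandboxChanged events suggests the containers are being stopped from outside rather than crashing, so the kubelet journal is the next place to look (systemd-based host assumed):

$ sudo journalctl -u kubelet --no-pager --since "15 min ago"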

Kubernetes version

$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:54:23Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.4", GitCommit:"95ee5ab382d64cfe6c28967f36b53970b8374491", GitTreeState:"clean", BuildDate:"2022-08-17T18:47:37Z", GoVersion:"go1.18.5", Compiler:"gc", Platform:"linux/amd64"}

Cloud provider

Bare metal

OS version

# On Linux:
$ cat /etc/os-release
PRETTY_NAME="Ubuntu 22.04.1 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.1 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
$ uname -a
Linux k-m-001 5.15.0-47-generic #51-Ubuntu SMP Thu Aug 11 07:51:15 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

Install tools

By hand, and also with Ansible

Container runtime (CRI) and version (if applicable)

$ crictl version
Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  1.5.9-0ubuntu3
RuntimeApiVersion:  v1alpha2

Related plugins (CNI, CSI, …) and versions (if applicable)

Most upvoted comments

Shouldn't kubeadm give a better message to indicate a version incompatibility between the installed Kubernetes tools and the current version of systemd? I spent several hours trying to understand what was failing, and was lucky enough to scroll this issue to the end to see what solves the problem: sudo apt upgrade systemd

Details:
kubeadm GitVersion: v1.27.3
kubeadm GitCommit: 25b4e43193bcda6c7328a6d147b1fb73a33f1598
Ubuntu 22.04.02
systemd 249 (249.11-0ubuntu3.7) - version before the upgrade, which was failing
systemd 249 (249.11-0ubuntu3.9) - version after the upgrade
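
For anyone hitting the same wall, checking and upgrading systemd is quick (Ubuntu/Debian; --only-upgrade restricts apt to the already-installed package):

$ systemctl --version | head -n1                              # e.g. "systemd 249 (249.11-0ubuntu3.7)"
$ sudo apt update && sudo apt install --only-upgrade systemd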