kubeadm: kubeadm upgrade plan looks for node kubernetes-admin, which doesn't exist

Is this a request for help?

yes

What keywords did you search in kubeadm issues before filing this one?

FATAL: the ConfigMap "kubeadm-config" in the kube-system namespace used for getting configuration information was not found

Is this a BUG REPORT or FEATURE REQUEST?

BUG

Versions

kubeadm version (use kubeadm version):

kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.4", GitCommit:"d360454c9bcd1634cf4cc52d1867af5491dc9c5f", GitTreeState:"clean", BuildDate:"2020-11-11T13:15:05Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
  • Cloud provider or hardware configuration:

on-prem vms

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.4 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.4 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a):

Linux k8s-login-master 4.15.0-29-generic #31-Ubuntu SMP Tue Jul 17 15:39:52 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Others:

What happened?

I received the following output:

 kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/config] In order to upgrade, a ConfigMap called "kubeadm-config" in the kube-system namespace must exist.
[upgrade/config] Without this information, 'kubeadm upgrade' won't know how to configure your upgraded cluster.

[upgrade/config] Next steps:
	- OPTION 1: Run 'kubeadm config upload from-flags' and specify the same CLI arguments you passed to 'kubeadm init' when you created your control-plane.
	- OPTION 2: Run 'kubeadm config upload from-file' and specify the same config file you passed to 'kubeadm init' when you created your control-plane.
	- OPTION 3: Pass a config file to 'kubeadm upgrade' using the --config flag.

[upgrade/config] FATAL: the ConfigMap "kubeadm-config" in the kube-system namespace used for getting configuration information was not found
To see the stack trace of this error execute with --v=5 or higher
root@k8s-login-master:~# kubeadm upgrade plan --v=11
I1117 14:11:10.981960   19193 plan.go:69] [upgrade/plan] verifying health of cluster
I1117 14:11:10.982033   19193 plan.go:70] [upgrade/plan] retrieving configuration from cluster
I1117 14:11:10.983185   19193 loader.go:375] Config loaded from file:  /etc/kubernetes/admin.conf
I1117 14:11:10.985377   19193 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://192.168.2.109:6443/apis/apps/v1/namespaces/kube-system/daemonsets/self-hosted-kube-apiserver?timeout=10s'
I1117 14:11:11.002932   19193 round_trippers.go:443] GET https://192.168.2.109:6443/apis/apps/v1/namespaces/kube-system/daemonsets/self-hosted-kube-apiserver?timeout=10s 404 Not Found in 17 milliseconds
I1117 14:11:11.003190   19193 round_trippers.go:449] Response Headers:
I1117 14:11:11.003328   19193 round_trippers.go:452]     Content-Type: application/json
I1117 14:11:11.003463   19193 round_trippers.go:452]     Content-Length: 252
I1117 14:11:11.003608   19193 round_trippers.go:452]     Date: Tue, 17 Nov 2020 19:11:11 GMT
I1117 14:11:11.003777   19193 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"daemonsets.apps \"self-hosted-kube-apiserver\" not found","reason":"NotFound","details":{"name":"self-hosted-kube-apiserver","group":"apps","kind":"daemonsets"},"code":404}
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
I1117 14:11:11.004892   19193 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://192.168.2.109:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s'
I1117 14:11:11.007724   19193 round_trippers.go:443] GET https://192.168.2.109:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s 200 OK in 2 milliseconds
I1117 14:11:11.007758   19193 round_trippers.go:449] Response Headers:
I1117 14:11:11.007771   19193 round_trippers.go:452]     Date: Tue, 17 Nov 2020 19:11:11 GMT
I1117 14:11:11.007782   19193 round_trippers.go:452]     Content-Type: application/json
I1117 14:11:11.007922   19193 round_trippers.go:452]     Content-Length: 976
I1117 14:11:11.008049   19193 request.go:1097] Response Body: {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"kubeadm-config","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/configmaps/kubeadm-config","uid":"e35f3cf8-5d15-4ecc-b81f-f52045718cb1","resourceVersion":"36311945","creationTimestamp":"2019-10-10T14:30:54Z"},"data":{"ClusterConfiguration":"apiServer:\n  extraArgs:\n    authorization-mode: Node,RBAC\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\nclusterName: kubernetes\ncontrollerManager: {}\ndns:\n  type: CoreDNS\netcd:\n  local:\n    dataDir: /var/lib/etcd\nimageRepository: k8s.gcr.io\nkind: ClusterConfiguration\nkubernetesVersion: v1.18.0\nnetworking:\n  dnsDomain: cluster.local\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\nscheduler: {}\n","ClusterStatus":"apiEndpoints:\n  k8s-login-master:\n    advertiseAddress: 192.168.2.109\n    bindPort: 6443\napiVersion: kubeadm.k8s.io/v1beta2\nkind: ClusterStatus\n"}}
I1117 14:11:11.012135   19193 loader.go:375] Config loaded from file:  /etc/kubernetes/kubelet.conf
I1117 14:11:11.012549   19193 round_trippers.go:423] curl -k -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.19.4 (linux/amd64) kubernetes/d360454" 'https://192.168.2.109:6443/api/v1/nodes/kubernetes-admin?timeout=10s'
I1117 14:11:11.015484   19193 round_trippers.go:443] GET https://192.168.2.109:6443/api/v1/nodes/kubernetes-admin?timeout=10s 404 Not Found in 2 milliseconds
I1117 14:11:11.015572   19193 round_trippers.go:449] Response Headers:
I1117 14:11:11.015597   19193 round_trippers.go:452]     Content-Length: 202
I1117 14:11:11.015615   19193 round_trippers.go:452]     Date: Tue, 17 Nov 2020 19:11:11 GMT
I1117 14:11:11.015641   19193 round_trippers.go:452]     Content-Type: application/json
I1117 14:11:11.015711   19193 request.go:1097] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"kubernetes-admin\" not found","reason":"NotFound","details":{"name":"kubernetes-admin","kind":"nodes"},"code":404}
[upgrade/config] In order to upgrade, a ConfigMap called "kubeadm-config" in the kube-system namespace must exist.
[upgrade/config] Without this information, 'kubeadm upgrade' won't know how to configure your upgraded cluster.

[upgrade/config] Next steps:
	- OPTION 1: Run 'kubeadm config upload from-flags' and specify the same CLI arguments you passed to 'kubeadm init' when you created your control-plane.
	- OPTION 2: Run 'kubeadm config upload from-file' and specify the same config file you passed to 'kubeadm init' when you created your control-plane.
	- OPTION 3: Pass a config file to 'kubeadm upgrade' using the --config flag.

the ConfigMap "kubeadm-config" in the kube-system namespace used for getting configuration information was not found
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:150
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:71
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:57
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374
[upgrade/config] FATAL
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.enforceRequirements
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/common.go:152
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.runPlan
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:71
k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade.NewCmdPlan.func1
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/upgrade/plan.go:57
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:842
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
k8s.io/kubernetes/cmd/kubeadm/app.Run
	/workspace/anago-v1.19.4-rc.0.51+5f1e5cafd33a88/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
	_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
	/usr/local/go/src/runtime/proc.go:204
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1374

I don't know why kubeadm thinks there is a node called kubernetes-admin; that name is not referenced anywhere in my cluster.
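The verbose log above shows kubeadm loading `/etc/kubernetes/kubelet.conf` immediately before querying `/api/v1/nodes/kubernetes-admin`, which suggests it derives the node name from the client certificate embedded in that kubeconfig (the CN of a kubelet credential is normally `system:node:<hostname>`; a CN of `kubernetes-admin` would indicate an admin credential ended up in kubelet.conf). As a rough illustration of how to inspect such an embedded certificate, here is a self-contained sketch that builds a throwaway cert and kubeconfig and then decodes the subject the same way one would against a real `/etc/kubernetes/kubelet.conf` (all file names and the sample CN here are hypothetical, not taken from the reporter's cluster):

```shell
set -eu
workdir="$(mktemp -d)"

# Generate a throwaway self-signed client cert whose subject mimics
# a normal kubelet identity (system:node:<hostname>).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$workdir/key.pem" -out "$workdir/cert.pem" \
  -subj "/O=system:nodes/CN=system:node:k8s-login-master" 2>/dev/null

# Write a minimal kubeconfig that embeds the cert as base64 data,
# the same layout kubeadm uses for /etc/kubernetes/kubelet.conf.
cat > "$workdir/kubelet.conf" <<EOF
users:
- name: default
  user:
    client-certificate-data: $(base64 -w0 < "$workdir/cert.pem")
EOF

# Decode the embedded cert and print its subject; on a real node this
# reveals which identity (node vs. admin) kubelet.conf actually holds.
grep client-certificate-data "$workdir/kubelet.conf" \
  | awk '{print $2}' | base64 -d \
  | openssl x509 -noout -subject
```

Running the same `grep | base64 -d | openssl x509` pipeline against the real `/etc/kubernetes/kubelet.conf` would show whether the CN is `system:node:<name>` or `kubernetes-admin`.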

What you expected to happen?

I expected kubeadm upgrade plan to complete successfully.

How to reproduce it (as minimally and precisely as possible)?

I followed the instructions at https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

Anything else we need to know?

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 16 (9 by maintainers)

Most upvoted comments

@neolit123 this worked perfectly. thanks!

I'm going to close this as an unsupported scenario, but please ask if you have any further questions. Happy to help.