kubeadm: kubeadm init error marking master: timed out waiting for the condition
Versions
kubeadm version (use kubeadm version):
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Kubernetes version (use kubectl version):
  Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
- OS (e.g. from /etc/os-release): CentOS 7.1
- Kernel (e.g. uname -a): Linux master1 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
- Docker: Docker version 17.03.1-ce, build c6d412e
What happened?
When I used kubeadm init to create a single-master cluster, it ended with the following error.
[root@master1 kubeadm]# kubeadm init --apiserver-advertise-address=172.16.6.64 --kubernetes-version=v1.11.1 --pod-network-cidr=192.168.0.0/16
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0904 14:29:33.474299 28529 kernel_validator.go:81] Validating kernel version
I0904 14:29:33.474529 28529 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.6.64]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [172.16.6.64 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 23.503472 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.11" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
error marking master: timed out waiting for the condition
However, all the Docker containers were running fine.
[root@master1 kubeadm]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
53886ee1db02 272b3a60cd68 "kube-scheduler --..." 5 minutes ago Up 5 minutes k8s_kube-scheduler_kube-schedu
05f9e74cb1ae b8df3b177be2 "etcd --advertise-..." 5 minutes ago Up 5 minutes k8s_etcd_etcd-master1_kube-sys
ac00773b050d 52096ee87d0e "kube-controller-m..." 5 minutes ago Up 5 minutes k8s_kube-controller-manager_ku
ebeae2ea255b 816332bd9d11 "kube-apiserver --..." 5 minutes ago Up 5 minutes k8s_kube-apiserver_kube-apiser
74a0d0b1346e k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_etcd-master1_kube-syst
b693b16e39cc k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-scheduler-master1
0ce92c0afa62 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-controller-manage
c43f05f27c01 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-apiserver-master
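Since only the [markmaster] step timed out while the control plane components came up, a hypothetical manual fallback (not part of the original report, and assuming /etc/kubernetes/admin.conf is usable) would be to apply the same label, taint, and CRI-socket annotation by hand, mirroring the patches shown in the dry-run output further below:
# hypothetical manual equivalent of the [markmaster] and [patchnode] steps
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl label node master1 node-role.kubernetes.io/master=
$ kubectl taint node master1 node-role.kubernetes.io/master=:NoSchedule
$ kubectl annotate node master1 kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock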
Oddly, when I added the --dry-run option it worked.
[root@master1 kubeadm]# kubeadm init --apiserver-advertise-address 172.16.6.64 --pod-network-cidr=192.168.0.0/16 --node-name=master1 --dry-run --kubernetes-version=v1.11.1
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0904 16:07:56.101221 23703 kernel_validator.go:81] Validating kernel version
I0904 16:07:56.101565 23703 kernel_validator.go:96] Validating kernel config
[preflight/images] Would pull the required images (like 'kubeadm config images pull')
[kubelet] Writing kubelet environment file with flags to file "/tmp/kubeadm-init-dryrun016982898/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/tmp/kubeadm-init-dryrun016982898/config.yaml"
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.6.64]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [172.16.6.64 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun016982898"
[kubeconfig] Wrote KubeConfig file to disk: "/tmp/kubeadm-init-dryrun016982898/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/tmp/kubeadm-init-dryrun016982898/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/tmp/kubeadm-init-dryrun016982898/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/tmp/kubeadm-init-dryrun016982898/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/tmp/kubeadm-init-dryrun016982898/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/tmp/kubeadm-init-dryrun016982898/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/tmp/kubeadm-init-dryrun016982898/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/tmp/kubeadm-init-dryrun016982898/etcd.yaml"
[dryrun] wrote certificates, kubeconfig files and control plane manifests to the "/tmp/kubeadm-init-dryrun016982898" directory
[dryrun] the certificates or kubeconfig files would not be printed due to their sensitive nature
[dryrun] please examine the "/tmp/kubeadm-init-dryrun016982898" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
...
[markmaster] Marking the node master1 as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "master1"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "master1"
[dryrun] Attached patch:
{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master"}]}}
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master1" as an annotation
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "master1"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "master1"
[dryrun] Attached patch:
{"metadata":{"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/dockershim.sock"}}}
[bootstraptoken] using token: 3gvy0t.amka3xc9u1oljlla
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-3gvy0t"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
auth-extra-groups: c3lzdGVtOmJvb3RzdHJhcHBlcnM6a3ViZWFkbTpkZWZhdWx0LW5vZGUtdG9rZW4=
description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
expiration: MjAxOC0wOS0wNVQxNjowODowNSswODowMA==
token-id: M2d2eTB0
token-secret: YW1rYTN4Yzl1MW9samxsYQ==
usage-bootstrap-authentication: dHJ1ZQ==
usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
creationTimestamp: null
name: bootstrap-token-3gvy0t
namespace: kube-system
type: bootstrap.kubernetes.io/token
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:kubelet-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-bootstrapper
subjects:
- kind: Group
name: system:bootstrappers:kubeadm:default-node-token
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-autoapprove-bootstrap
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
name: system:bootstrappers:kubeadm:default-node-token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-autoapprove-certificate-rotation
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- kind: Group
name: system:nodes
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
...
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
creationTimestamp: null
name: kubeadm:node-proxier
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:node-proxier
subjects:
- kind: ServiceAccount
name: kube-proxy
namespace: kube-system
[addons] Applied essential addon: kube-proxy
[dryrun] finished dry-running successfully. Above are the resources that would be created
What you expected to happen?
How can I solve this problem and create a single-master cluster with kubeadm?
About this issue
- Original URL
- State: closed
- Created 6 years ago
- Comments: 23 (7 by maintainers)
$ kubeadm reset
$ ifconfig cni0 down && ip link delete cni0
$ ifconfig flannel.1 down && ip link delete flannel.1
$ rm -rf /var/lib/cni/
good luck!
We are adding a separate timeout to the config in 1.13. Closing this issue.
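For reference, a sketch of how such a timeout could be set through a kubeadm config file; the thread does not name the exact field the maintainers added, so apiServer.timeoutForControlPlane from the 1.13 v1beta1 API is used here purely as an assumption:
# hypothetical kubeadm-config.yaml (kubeadm 1.13, v1beta1 API); the field choice is an assumption
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
apiServer:
  timeoutForControlPlane: 8m0s

$ kubeadm init --config kubeadm-config.yaml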
@timothysc Perhaps you can share the issue/PR which prompted this to be closed?
@heng-Yuan - I’d make certain SELinux is disabled FWIW.
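For example, on CentOS 7 the SELinux state can be checked and relaxed with:
$ getenforce        # prints Enforcing, Permissive, or Disabled
$ setenforce 0      # switch to permissive until the next reboot
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # make the change persistent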
best to also include:
journalctl -xeu kubelet

Hi @heng-Yuan and thanks for filing this issue!
Can you check the state and logs of kubelet and the API server container (of course you can filter out any information you deem sensitive)?
Note that ebeae2ea255b is your API server container ID.
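For example, the requested state and logs could be collected like this (container ID taken from the docker ps output above):
$ systemctl status kubelet
$ journalctl -xeu kubelet           # kubelet logs
$ docker logs ebeae2ea255b          # kube-apiserver container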