kubernetes: kubeadm: CoreDNS creation fails
What happened: CoreDNS add-on creation fails:
I0911 16:29:27.735769 490 initconfiguration.go:190] loading configuration from "/etc/kubernetes/kubeadm-config.yaml"
I0911 16:29:27.744709 490 interface.go:384] Looking for default routes with IPv4 addresses
I0911 16:29:27.744729 490 interface.go:389] Default route transits interface "eth0"
I0911 16:29:27.744985 490 interface.go:196] Interface eth0 is up
I0911 16:29:27.745035 490 interface.go:244] Interface "eth0" has 1 addresses :[10.5.0.2/24].
I0911 16:29:27.745057 490 interface.go:211] Checking addr 10.5.0.2/24.
I0911 16:29:27.745068 490 interface.go:218] IP found 10.5.0.2
I0911 16:29:27.745081 490 interface.go:250] Found valid IPv4 address 10.5.0.2 for interface "eth0".
I0911 16:29:27.745088 490 interface.go:395] Found active IP 10.5.0.2
I0911 16:29:27.745355 490 feature_gate.go:216] feature gates: &{map[]}
[init] Using Kubernetes version: v1.16.0-rc.1
[preflight] Running pre-flight checks
I0911 16:29:27.745950 490 checks.go:578] validating Kubernetes and kubeadm version
I0911 16:29:27.745983 490 checks.go:167] validating if the firewall is enabled and active
[WARNING Firewalld]: no supported init system detected, skipping checking for services
I0911 16:29:27.746124 490 checks.go:202] validating availability of port 6443
I0911 16:29:27.746344 490 checks.go:202] validating availability of port 10251
I0911 16:29:27.746372 490 checks.go:202] validating availability of port 10252
I0911 16:29:27.746394 490 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0911 16:29:27.746441 490 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0911 16:29:27.746455 490 checks.go:287] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0911 16:29:27.746464 490 checks.go:287] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0911 16:29:27.746477 490 checks.go:433] validating if the connectivity type is via proxy or direct
I0911 16:29:27.746514 490 checks.go:472] validating http connectivity to first IP address in the CIDR
I0911 16:29:27.746528 490 checks.go:472] validating http connectivity to first IP address in the CIDR
I0911 16:29:27.746537 490 checks.go:103] validating the container runtime
I0911 16:29:27.758339 490 checks.go:377] validating the presence of executable crictl
I0911 16:29:27.758430 490 checks.go:336] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
I0911 16:29:27.758519 490 checks.go:336] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0911 16:29:27.758592 490 checks.go:650] validating whether swap is enabled or not
I0911 16:29:27.758657 490 checks.go:377] validating the presence of executable ip
I0911 16:29:27.758878 490 checks.go:377] validating the presence of executable iptables
I0911 16:29:27.758986 490 checks.go:377] validating the presence of executable mount
I0911 16:29:27.759080 490 checks.go:377] validating the presence of executable nsenter
I0911 16:29:27.759144 490 checks.go:377] validating the presence of executable ebtables
I0911 16:29:27.759220 490 checks.go:377] validating the presence of executable ethtool
I0911 16:29:27.759311 490 checks.go:377] validating the presence of executable socat
I0911 16:29:27.759377 490 checks.go:377] validating the presence of executable tc
I0911 16:29:27.759455 490 checks.go:377] validating the presence of executable touch
I0911 16:29:27.759556 490 checks.go:521] running all checks
I0911 16:29:27.841271 490 checks.go:407] checking whether the given node name is reachable using net.LookupHost
I0911 16:29:27.842230 490 checks.go:619] validating kubelet version
I0911 16:29:28.078752 490 checks.go:129] validating if the service is enabled and active
[WARNING Service-Kubelet]: no supported init system detected, skipping checking for services
I0911 16:29:28.078885 490 checks.go:202] validating availability of port 10250
I0911 16:29:28.079011 490 checks.go:202] validating availability of port 2379
I0911 16:29:28.079050 490 checks.go:202] validating availability of port 2380
I0911 16:29:28.079087 490 checks.go:250] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0911 16:29:28.090613 490 checks.go:839] image exists: k8s.gcr.io/hyperkube:v1.16.0-rc.1
I0911 16:29:28.101405 490 checks.go:845] pulling k8s.gcr.io/pause:3.1
I0911 16:29:29.097472 490 checks.go:845] pulling k8s.gcr.io/etcd:3.3.15-0
I0911 16:29:37.445718 490 checks.go:845] pulling k8s.gcr.io/coredns:1.6.2
I0911 16:29:39.342274 490 kubelet.go:61] Stopping the kubelet
[kubelet-start] no supported init system detected, won't make sure the kubelet not running for a short period of time while setting up configuration for it.
W0911 16:29:39.342457 490 flags.go:110] cannot determine if systemd-resolved is active: no supported init system detected, skipping checking for services
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0911 16:29:39.345869 490 kubelet.go:79] Starting the kubelet
[kubelet-start] no supported init system detected, won't make sure the kubelet is running properly.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
I0911 16:29:39.352323 490 certs.go:70] creating a new public/private key files for signing service account users
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0911 16:29:39.352488 490 kubeconfig.go:79] creating kubeconfig file for admin.conf
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
I0911 16:29:39.691273 490 kubeconfig.go:79] creating kubeconfig file for kubelet.conf
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0911 16:29:39.989388 490 kubeconfig.go:79] creating kubeconfig file for controller-manager.conf
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0911 16:29:40.181583 490 kubeconfig.go:79] creating kubeconfig file for scheduler.conf
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0911 16:29:40.548856 490 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[controlplane] Adding extra host path mount "encryptionconfig" to "kube-apiserver"
I0911 16:29:40.560322 490 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0911 16:29:40.560356 490 manifests.go:91] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[controlplane] Adding extra host path mount "encryptionconfig" to "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[controlplane] Adding extra host path mount "encryptionconfig" to "kube-apiserver"
I0911 16:29:40.562391 490 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
I0911 16:29:40.562420 490 manifests.go:91] [control-plane] getting StaticPodSpecs
I0911 16:29:40.563638 490 manifests.go:116] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0911 16:29:40.564835 490 local.go:69] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0911 16:29:40.564857 490 waitcontrolplane.go:80] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 0s
I0911 16:29:43.568887 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:45.069160 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:46.070032 490 request.go:801] Got a Retry-After 1s response for attempt 2 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:47.071031 490 request.go:801] Got a Retry-After 1s response for attempt 3 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:49.069085 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:50.569250 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:52.569219 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:53.570622 490 request.go:801] Got a Retry-After 1s response for attempt 2 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:54.571457 490 request.go:801] Got a Retry-After 1s response for attempt 3 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:56.069074 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:57.069982 490 request.go:801] Got a Retry-After 1s response for attempt 2 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:29:58.070892 490 request.go:801] Got a Retry-After 1s response for attempt 3 to https://10.5.0.2:443/healthz?timeout=32s
I0911 16:30:02.069130 490 request.go:801] Got a Retry-After 1s response for attempt 1 to https://10.5.0.2:443/healthz?timeout=32s
[apiclient] All control plane components are healthy after 31.004144 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0911 16:30:11.571499 490 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
I0911 16:30:11.600060 490 uploadconfig.go:122] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
I0911 16:30:11.613784 490 uploadconfig.go:127] [upload-config] Preserving the CRISocket information for the control-plane node
I0911 16:30:11.613810 490 patchnode.go:30] [patchnode] Uploading the CRI Socket information "/run/containerd/containerd.sock" to the Node API object "master-1" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master-1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master-1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0911 16:30:12.666145 490 clusterinfo.go:45] [bootstrap-token] loading admin kubeconfig
I0911 16:30:12.667669 490 clusterinfo.go:53] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I0911 16:30:12.668576 490 clusterinfo.go:65] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I0911 16:30:12.672863 490 clusterinfo.go:79] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
deployments.apps "coredns" not found
error execution phase addon/coredns
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:237
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:424
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:209
k8s.io/kubernetes/cmd/kubeadm/app/cmd.NewCmdInit.func1
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:146
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:830
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:914
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:864
k8s.io/kubernetes/cmd/kubeadm/app.Run
/workspace/anago-v1.16.0-beta.2.58+d17cd235699328/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
/usr/local/go/src/runtime/proc.go:200
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1337
It feels a bit racy. This doesn’t happen on every run.
What you expected to happen: the CoreDNS add-on to be created.
How to reproduce it (as minimally and precisely as possible):
Run kubeadm init using v1.16.0-rc.1; see the sketch below.
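A minimal repro sketch, assuming the same config file the log shows being loaded (the exact verbosity level is a guess; the log clearly comes from a verbose run):

```sh
# load the kubeadm config referenced in the log and raise log verbosity
kubeadm init --config /etc/kubernetes/kubeadm-config.yaml --v=1
```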
Anything else we need to know?:
Environment:
- Kubernetes version (use kubectl version): v1.16.0-rc.1
- Cloud provider or hardware configuration:
- OS (e.g. cat /etc/os-release): Talos
- Kernel (e.g. uname -a): 5.2.8
- Install tools:
- Network plugin and version (if this is a network-related bug):
- Others:
About this issue
- State: closed
- Created 5 years ago
- Comments: 20 (20 by maintainers)
Forgot to mention: After deleting the CoreDNS configmap, I was able to re-install CoreDNS using kubeadm.
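A sketch of that workaround, assuming the default object names kubeadm uses (the "coredns" ConfigMap and Deployment in kube-system) and the phase subcommand available in v1.16:

```sh
# remove the stale CoreDNS ConfigMap left behind by the failed run
kubectl -n kube-system delete configmap coredns

# re-run only the CoreDNS add-on phase of kubeadm init
kubeadm init phase addon coredns --config /etc/kubernetes/kubeadm-config.yaml
```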
Probably best to wait for the kube-apiserver and etcd pods to be ready, and not for the Node object to appear.
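A rough sketch of that readiness check, assuming the component labels kubeadm sets on its control-plane static pods:

```sh
# block until the control-plane static pods report Ready,
# instead of waiting for the Node object to appear
kubectl -n kube-system wait --for=condition=Ready pod \
  -l component=kube-apiserver --timeout=120s
kubectl -n kube-system wait --for=condition=Ready pod \
  -l component=etcd --timeout=120s
```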
let’s close this as it’s not a regression. thanks for confirming. /close
In Talos we do not use systemd; it is an init system we wrote.
I will do some testing myself in about 30 minutes.
This part of kubeadm is trying to get the coredns Deployment, but it does not exist yet.
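In other words, a plain get on the Deployment returns NotFound until the add-on phase has actually created it, so anything racing with that phase has to either tolerate NotFound or poll. An illustrative loop (object names taken from the error above):

```sh
# poll until the coredns Deployment exists
until kubectl -n kube-system get deployment coredns >/dev/null 2>&1; do
  echo "coredns Deployment not found yet, retrying..."
  sleep 2
done
```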