kubernetes: coredns in containercreating state : invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

What happened:

```
NAME                                      READY   STATUS              RESTARTS   AGE
calico-kube-controllers-568b9cfb7-lrzh7   1/1     Running             0          26h
calico-node-49zr8                         1/1     Running             0          2d1h
coredns-5d6ccf5984-ct9px                  0/1     ContainerCreating   0          45m
coredns-5d6ccf5984-szs2s                  0/1     ContainerCreating   0          45m
coredns-6c6f7bf499-24mkg                  0/1     ContainerCreating   0          45m
kube-apiserver-kube-master-1              1/1     Running             0          27h
kube-controller-manager-kube-master-1     1/1     Running             0          2d2h
kube-proxy-qps8j                          1/1     Running             0          2d1h
kube-scheduler-kube-master-1              1/1     Running             0          22h
```

What you expected to happen:

CoreDNS pods to reach the Running state. They are currently failing with this event:

```
Warning  FailedCreatePodSandBox  70s (x131 over 29m)  kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f049cd07dd737cc430ea2186c035d168ac7087bf1700599c79ed2c9dcfa94545": invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
```

How to reproduce it (as minimally and precisely as possible):

kubeadm init --config /etc/kubernetes/kubeadmcfg.yaml --v=5 --ignore-preflight-errors=all

kubectl apply -f calico_clusterrole.yaml --kubeconfig=/etc/kubernetes/admin.conf

kubectl create clusterrolebinding calico-cni --clusterrole=calico-cni --user=calico-cni --kubeconfig=/etc/kubernetes/admin.conf

After cluster initialisation, once all nodes are up:

kubectl apply -f calico_etcd.yaml --kubeconfig=/etc/kubernetes/admin.conf

KUBEADM CONFIG:

```shell
#!/bin/bash -x

cat > /etc/kubernetes/kubeadmcfg.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "xxxxxxxxxxx"
  description: "another bootstrap token"
  usages:
  - authentication
  - signing
  groups:
  - system:bootstrappers:kubeadm:default-node-token
localAPIEndpoint:
  advertiseAddress: xx.xx.xx.xx
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    resolv-conf: "/etc/resolv.conf"
    node-ip: xx.xx.xx.xx
    kubelet-cgroups: "/system.slice/kubelet.service"
    container-runtime: remote
    container-runtime-endpoint: unix:///run/containerd/containerd.sock
    runtime-cgroups: "/system.slice/containerd.service"
    network-plugin: cni
    cni-bin-dir: /opt/cni/bin
    cni-conf-dir: /etc/cni/net.d
    cgroup-driver: systemd
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
certificatesDir: "/etc/kubernetes/pki"
clusterName: "kubernetes"
controllerManager:
  extraArgs:
    kubeconfig: "/etc/kubernetes/controller-manager.conf"
    cluster-name: "kubernetes"
    cluster-signing-cert-file: "/etc/kubernetes/pki/ca.crt"
    cluster-signing-key-file: "/etc/kubernetes/pki/ca.key"
    node-monitor-grace-period: "60s"
    node-monitor-period: "10s"
    node-startup-grace-period: "1m30s"
    bind-address: "0.0.0.0"
    secure-port: "10257"
    deployment-controller-sync-period: "60s"
    leader-elect: "true"
    leader-elect-lease-duration: "30s"
    leader-elect-renew-deadline: "20s"
    service-account-private-key-file: "/etc/kubernetes/pki/sa.key"
    service-cluster-ip-range: "10.96.0.0/24"
    controllers: "*,bootstrapsigner,tokencleaner"
    controller-start-interval: "60s"
    feature-gates: "TTLAfterFinished=true"
    root-ca-file: "/etc/kubernetes/pki/ca.crt"
    authorization-kubeconfig: "/etc/kubernetes/controller-manager.conf"
    authentication-kubeconfig: "/etc/kubernetes/controller-manager.conf"
    v: "4"
etcd:
  external:
    endpoints:
    - "https://kube-etcd-1:2379"
    - "https://kube-etcd-2:2379"
    - "https://kube-etcd-3:2379"
    caFile: "/etc/kubernetes/pki/etcd/ca.crt"
    certFile: "/etc/kubernetes/pki/etcd/server.crt"
    keyFile: "/etc/kubernetes/pki/etcd/server.key"
imageRepository: XXXX
kubernetesVersion: v1.20.0
controlPlaneEndpoint: "kube-master-1:6443"
networking:
  dnsDomain: "cluster.local"
  serviceSubnet: "10.96.0.0/16"
  podSubnet: "192.168.0.0/16"
scheduler:
  extraArgs:
    secure-port: "10259"
    bind-address: "0.0.0.0"
    leader-elect: "true"
    feature-gates: "TTLAfterFinished=true"
apiServer:
  certSANs:
  - "kubernetes"
  - "kubernetes.default"
  - "kubernetes.default.svc"
  - "kubernetes.default.svc.cluster"
  - "kubernetes.default.svc.cluster.local"
  timeoutForControlPlane: "12m0s"
  extraArgs:
    v: "4"
    external-hostname: "kube-master-1"
    allow-privileged: "true"
    anonymous-auth: "true"
    bind-address: "0.0.0.0"
    apiserver-count: "3"
    etcd-servers: "https://kube-etcd-1:2379,https://kube-etcd-2:2379,https://kube-etcd-3:2379"
    etcd-cafile: "/etc/kubernetes/pki/etcd/ca.crt"
    etcd-certfile: "/etc/kubernetes/pki/etcd/server.crt"
    etcd-keyfile: "/etc/kubernetes/pki/etcd/server.key"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "200Mi"
clusterDomain: cluster.local
FeatureGates:
  TTLAfterFinished: true
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
failSwapOn: false
EOF
```

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:59:43Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:51:19Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration: AWS

  • OS (e.g: cat /etc/os-release):

NAME="Red Hat Enterprise Linux Server"
VERSION="7.9 (Maipo)"
ID="rhel"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Red Hat Enterprise Linux Server 7.9 (Maipo)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:7.9:GA:server"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.9
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.9"

  • Kernel (e.g. uname -a): Linux kube-master-1 3.10.0-1160.15.2.el7.x86_64 #1 SMP Thu Jan 21 16:15:07 EST 2021 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools: kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.0", GitCommit:"af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38", GitTreeState:"clean", BuildDate:"2020-12-08T17:57:36Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}

  • Network plugin and version (if this is a network-related bug): Calico CNI v1.8.3, CoreDNS v1.7.0, Kubernetes v1.20

  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 23 (11 by maintainers)

Most upvoted comments

@amarjothi Did you check whether Calico is working properly? I suspect this error (invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable) is reported by the Calico CNI. First, check the Calico logs:

kubectl --kubeconfig=/etc/kubernetes/admin.conf logs -f  calico-kube-controllers-568b9cfb7-l2bb2 -n=kube-system 
kubectl --kubeconfig=/etc/kubernetes/admin.conf logs -f  calico-node-6g5jm -n=kube-system 

Once the Calico CNI is ready, the CoreDNS Pods should start running properly.
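Beyond the logs, it can help to verify on the affected node that the CNI config directory the kubelet reads is actually populated. A minimal sketch of that check (`/etc/cni/net.d` matches the `cni-conf-dir` in the kubeadm config above; the `CNI_DIR` variable is just for this script):

```shell
#!/bin/sh
# Check whether the node has any CNI network config for the kubelet to use.
# On this cluster the kubelet was started with cni-conf-dir=/etc/cni/net.d.
CNI_DIR="${CNI_DIR:-/etc/cni/net.d}"

if [ -d "$CNI_DIR" ] && [ -n "$(ls -A "$CNI_DIR" 2>/dev/null)" ]; then
    echo "CNI config present in $CNI_DIR:"
    ls "$CNI_DIR"
else
    # With no config here, every pod sandbox fails to get a network,
    # which matches the FailedCreatePodSandBox events above.
    echo "CNI config missing or empty in $CNI_DIR"
fi
```

On a healthy Calico node this directory typically contains a conflist plus a `calico-kubeconfig` file (exact names vary by Calico version); if it is empty, the install-cni init container of the calico-node DaemonSet likely never completed.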

I asked @amarjothi to log this ticket because of:

Warning FailedCreatePodSandBox 70s (x131 over 29m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "f049cd07dd737cc430ea2186c035d168ac7087bf1700599c79ed2c9dcfa94545": invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable

It's not clear to me why pod sandbox creation surfaces this client-go error about KUBERNETES_MASTER.
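For context: that message is the error client-go's kubeconfig loader returns when it is handed an empty configuration, which suggests the CNI plugin found no kubeconfig at all. With Calico, the plugin reads a kubeconfig that the install-cni container writes next to the CNI config. Illustratively (the file name and layout below are the conventional Calico arrangement, not taken from this cluster; values are placeholders):

```yaml
# Illustrative /etc/cni/net.d/calico-kubeconfig -- a standard kubeconfig
# pointing the CNI plugin at the API server.
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: https://kube-master-1:6443   # controlPlaneEndpoint above
    certificate-authority-data: <base64 cluster CA>
users:
- name: calico
  user:
    token: <serviceaccount token>
contexts:
- name: calico-context
  context:
    cluster: local
    user: calico
current-context: calico-context
```

If a file like this is absent or empty on the node, client-go falls back to its defaults and emits exactly the KUBERNETES_MASTER error shown above.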

/sig node