kubernetes: kube-dns ContainerCreating /run/flannel/subnet.env no such file

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

BUG REPORT

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.): kube-dns kubernetes setupnetworkerror flannel subnet.env no such file


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:48:38Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.4", GitCommit:"3b417cc4ccd1b8f38ff9ec96bb50a81ca0ea9d56", GitTreeState:"clean", BuildDate:"2016-10-21T02:42:39Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:

VMWare Fusion for Mac

  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04.1 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.1 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
  • Kernel (e.g. uname -a):
Linux ubuntu-master 4.4.0-47-generic #68-Ubuntu SMP Wed Oct 26 19:39:52 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:

  • Others:

What happened:

kube-system   kube-dns-654381707-w4mpg                0/3       ContainerCreating   0          2m
  FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason          Message
  ---------     --------        -----   ----                    -------------   --------        ------          -------
  3m            3m              1       {default-scheduler }                    Normal          Scheduled       Successfully assigned kube-dns-654381707-w4mpg to ubuntu-master
  2m            1s              177     {kubelet ubuntu-master}                 Warning         FailedSync      Error syncing pod, skipping: failed to "SetupNetwork" for "kube-dns-654381707-w4mpg_kube-system" with SetupNetworkError: "Failed to setup network for pod \"kube-dns-654381707-w4mpg_kube-system(8ffe3172-a739-11e6-871f-000c2912631c)\" using network plugins \"cni\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

What you expected to happen:

kube-dns Running

How to reproduce it (as minimally and precisely as possible):

root@ubuntu-master:~# kubeadm init
Running pre-flight checks
<master/tokens> generated token: "247a8e.b7c8c1a7685bf204"
<master/pki> generated Certificate Authority key and certificate:
Issuer: CN=kubernetes | Subject: CN=kubernetes | CA: true
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2026-11-08 11:40:21 +0000 UTC
Public: /etc/kubernetes/pki/ca-pub.pem
Private: /etc/kubernetes/pki/ca-key.pem
Cert: /etc/kubernetes/pki/ca.pem
<master/pki> generated API Server key and certificate:
Issuer: CN=kubernetes | Subject: CN=kube-apiserver | CA: false
Not before: 2016-11-10 11:40:21 +0000 UTC Not After: 2017-11-10 11:40:21 +0000 UTC
Alternate Names: [172.20.10.4 10.96.0.1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local]
Public: /etc/kubernetes/pki/apiserver-pub.pem
Private: /etc/kubernetes/pki/apiserver-key.pem
Cert: /etc/kubernetes/pki/apiserver.pem
<master/pki> generated Service Account Signing keys:
Public: /etc/kubernetes/pki/sa-pub.pem
Private: /etc/kubernetes/pki/sa-key.pem
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 14.053453 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 0.508561 seconds
<master/apiclient> attempting a test deployment
<master/apiclient> test deployment succeeded
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 1.503838 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:

kubeadm join --token=247a8e.b7c8c1a7685bf204 172.20.10.4
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY     STATUS              RESTARTS   AGE
kube-system   dummy-2088944543-eo1ua                  1/1       Running             0          47s
kube-system   etcd-ubuntu-master                      1/1       Running             3          51s
kube-system   kube-apiserver-ubuntu-master            1/1       Running             0          49s
kube-system   kube-controller-manager-ubuntu-master   1/1       Running             3          51s
kube-system   kube-discovery-1150918428-qmu0b         1/1       Running             0          46s
kube-system   kube-dns-654381707-mv47d                0/3       ContainerCreating   0          44s
kube-system   kube-proxy-k0k9q                        1/1       Running             0          44s
kube-system   kube-scheduler-ubuntu-master            1/1       Running             3          51s
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# kubectl apply -f https://git.io/weave-kube
daemonset "weave-net" created
root@ubuntu-master:~# 
root@ubuntu-master:~# 
root@ubuntu-master:~# 

Anything else do we need to know:

The first time, I ran

root@ubuntu-master:~# kubeadm init

and then downloaded kube-flannel.yml and applied it:

root@ubuntu-master:~# kubectl apply -f kube-flannel.yml

I then tried to join nodes. After that, I reset the configuration with:

root@ubuntu-master:~# kubeadm reset
root@ubuntu-master:~# rm -rf .kube/

Then I tried to initialize Kubernetes again using Weave.
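The later comments suggest the stale flannel file in /etc/cni/net.d survives a `kubeadm reset`, which would explain why the second init still looked for /run/flannel/subnet.env. A minimal sketch of a fuller cleanup, assuming that diagnosis; `clean_cni_conf` is a hypothetical helper name:

```shell
# Sketch of a fuller cleanup before re-initialising with a different pod
# network. clean_cni_conf is a hypothetical helper; the comments on this
# issue suggest stale files in /etc/cni/net.d survive `kubeadm reset`.
clean_cni_conf() {
  dir="$1"
  for f in "$dir"/*; do
    [ -e "$f" ] || continue
    echo "removing stale CNI config: $f"
    rm -f "$f"
  done
}

# On the master (guarded so the sketch is inert where kubeadm is absent):
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm reset
  rm -rf "$HOME/.kube"
  clean_cni_conf /etc/cni/net.d
fi
```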

About this issue

  • Original URL
  • State: closed
  • Created 8 years ago
  • Comments: 22 (4 by maintainers)

Most upvoted comments

@burnyd, you must install a pod network. I have configured and set up a couple of k8s clusters successfully using kubeadm with flannel and weave as the pod network. Here are my suggestions and a couple of troubleshooting tips; hope this helps. If you can paste your pod logs or events, that would help debug your problem further.

  1. If you want to use the flannel CNI, make sure you pass --pod-network-cidr to kubeadm init on the master node, as below: kubeadm init --pod-network-cidr=10.244.0.0/16

  2. Install a pod network. When using CNI, you must install a pod network add-on so pods can communicate with each other. This should be done before joining minions to the cluster; kube-dns will wait in ContainerCreating status until a pod network is installed. You can choose any add-on that meets your needs. For example, if using flannel, download the flannel YAML file and apply it (on the master node only): kubectl apply -f flannel.yaml

  3. Make sure your network pod(s) (in this example flannel) are running. Once the flannel network pod is up and running, the kube-dns pod will move to Running as well.
     kubectl get pods -n kube-system -o=wide
     NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE
     dummy-2088944543-iefa6            1/1     Running   0          4h
     etcd-node1                        1/1     Running   0          4h
     kube-apiserver-node1              1/1     Running   0          4h
     kube-controller-manager-node1     1/1     Running   0          4h
     kube-discovery-1150918428-l7w0n   1/1     Running   0          4h
     kube-dns-654381707-mzpc5          2/3     Running   0          4h    10.244.0.2
     kube-flannel-ds-593w5             2/2     Running   0          1m
     kube-proxy-94zjz                  1/1     Running   0          4h
     kube-scheduler-node1              1/1     Running   0          4h

If the flannel pod is not running clean, you need to look at the pod's logs to figure out what is going on. A few things you can check while debugging flannel pod startup or pod network communication issues:
a) Make sure your CNI plugin binaries are in place in /opt/cni/bin. You should see corresponding binaries for each CNI add-on.
b) Make sure the CNI configuration file for the network add-on is in place under /etc/cni/net.d:
   [root@node1]# ls /etc/cni/net.d
   10-flannel.conf
c) Run ifconfig to check that the docker, flannel bridge and virtual interfaces are up.
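Checks a) and b) can be sketched as a small script. The paths /opt/cni/bin and /etc/cni/net.d are the conventional defaults named above; `check_cni` is a hypothetical helper name, not part of any CNI tooling:

```shell
# Sketch of checks a) and b): verify that CNI plugin binaries and a network
# add-on config file are present. check_cni is a hypothetical helper.
check_cni() {
  bin_dir="$1"
  conf_dir="$2"
  status=0
  if [ -n "$(ls -A "$bin_dir" 2>/dev/null)" ]; then
    echo "CNI binaries present in $bin_dir"
  else
    echo "no CNI binaries in $bin_dir"
    status=1
  fi
  if [ -n "$(ls -A "$conf_dir" 2>/dev/null)" ]; then
    echo "CNI config present in $conf_dir"
  else
    echo "no CNI config in $conf_dir"
    status=1
  fi
  return $status
}

# On a real node (conventional default paths); `|| true` keeps the sketch
# from aborting a `set -e` shell when the dirs are missing:
check_cni /opt/cni/bin /etc/cni/net.d || true
# For c), inspect interfaces with: ifconfig, or ip addr show docker0 / flannel.1
```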

If flannel is up and running but kube-dns is still spitting out "cni: cni config uninitialized; Skipping pod" errors, then you have to check the logs of each container [kube-dns dnsmasq healthz] in the kube-dns pod to see what is going on.
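The per-container log check above can be scripted. A sketch that just emits the commands to run; the pod name is the one from this report and is only a placeholder, and the container names (written here as kubedns) may differ slightly by version:

```shell
# Sketch: emit the per-container `kubectl logs` commands for a kube-dns pod.
# The pod name passed below is the one from this report (a placeholder);
# substitute the current name from `kubectl get pods -n kube-system`.
print_dns_log_cmds() {
  pod="$1"
  for c in kubedns dnsmasq healthz; do   # container names may vary by version
    echo "kubectl logs -n kube-system $pod -c $c"
  done
}

print_dns_log_cmds kube-dns-654381707-w4mpg
```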

I had both flannel and weave-net configs in the CNI folder. So I removed /etc/cni/net.d/flannel, kept only weave-net, and that fixed it.

I can confirm that passing the --pod-network-cidr= flag to kubeadm works for me and has allowed me to spin up several clusters in AWS using Flannel as the pod network.

I had this problem as well; the solution is in the last line.

Issue: The tutorial I was following suggested installing the flannel YAML from GitHub, which (as of writing) uses the default network 10.244.0.0/16, whereas the tutorial itself used the network 172.30.0.0/16.

This resulted in the following errors:
kubectl get pods --all-namespaces
kube-system   coredns-576cbf47c7-8frqd   0/1   CrashLoopBackOff   6   7m14s
kube-system   coredns-576cbf47c7-vz87b   0/1   CrashLoopBackOff   6   7m14s

kubectl describe pods coredns-576cbf47c7-8frqd
/run/flannel/subnet.env: no such file or directory

kubectl logs -n kube-system coredns-576cbf47c7-8frqd
2018/11/27 00:03:16 [FATAL] plugin/loop: Seen "HINFO IN 8845078766411623665.6668541419380549736." more than twice, loop detected

See the inconsistency in the installation:
kubeadm init --pod-network-cidr=172.30.0.0/16
Executing kubeadm init on network 172.30.0.0/16 …
[certificates] apiserver serving cert is signed for DNS names [kubert.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.57]

Solution: Either reconsider the network you will use with the --pod-network-cidr= flag, or download the flannel YAML and edit the network configuration in there.
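The second option above can be sketched as a one-liner over the downloaded manifest. This assumes kube-flannel.yml embeds a net-conf.json block with a "Network" key (as the flannel manifest of that era did); `set_flannel_cidr` is a hypothetical helper name, and the CIDR is the one from this comment:

```shell
# Sketch: rewrite the "Network" value in a downloaded kube-flannel.yml so it
# matches the --pod-network-cidr passed to kubeadm init. set_flannel_cidr is
# a hypothetical helper; it assumes the manifest embeds a net-conf.json with
# a "Network" key.
set_flannel_cidr() {
  file="$1"
  cidr="$2"
  # keep a .bak copy; '|' as the sed delimiter avoids clashing with the '/'
  # inside the CIDR
  sed -i.bak "s|\"Network\": \"[^\"]*\"|\"Network\": \"$cidr\"|" "$file"
}

# e.g.: set_flannel_cidr kube-flannel.yml 172.30.0.0/16
```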

@RobinLe, glad to hear my suggestions were helpful and you were able to get Kubernetes up and running. I am working on an article on Kubernetes troubleshooting and debugging; hoping to finish it up soon and publish it.