flannel: k8s 1.19.0 with kube-flannel 0.12 Error registering network: failed to acquire lease: node "nodeName" pod cidr not assigned

Expected Behavior

The master node launches and flannel starts successfully on it.

Current Behavior

The master node cannot start; the flannel pod fails with the error below.

Possible Solution

Steps to Reproduce (for bugs)

  1. Launch the primary master node with Kubernetes 1.19.0 on bare metal, using the latest stable flannel version
  2. Since the node will not start, check the flannel pod logs to see the error
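The symptom can be confirmed before looking at flannel itself. A sketch of the check, assuming `kubectl` access to the cluster:

```shell
# Check whether the API server has assigned a pod CIDR to each node.
# A blank PODCIDR column matches flannel's "pod cidr not assigned" error.
kubectl get nodes -o custom-columns='NAME:.metadata.name,PODCIDR:.spec.podCIDR'
```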

Context

I am trying to use flannel as the main pod network plugin.

Your Environment

  • Flannel version: v0.12.0-amd64
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version:
  • Kubernetes version (if used): v1.19.0
  • Operating System and version: Red Hat Enterprise Linux Server release 7.8 (Maipo)
  • Link to your project (optional):

Logs of error:

$ kubectl -n kube-system logs kube-flannel-ds-amd64-bc259
I0909 14:27:25.334093       1 main.go:518] Determining IP address of default interface
I0909 14:27:25.427908       1 main.go:531] Using interface with name ens192 and address IP
I0909 14:27:25.427951       1 main.go:548] Defaulting external address to interface address (IP)
W0909 14:27:25.427976       1 client_config.go:517] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I0909 14:27:25.441353       1 kube.go:119] Waiting 10m0s for node controller to sync
I0909 14:27:25.441403       1 kube.go:306] Starting kube subnet manager
I0909 14:27:26.441870       1 kube.go:126] Node controller sync successful
I0909 14:27:26.441931       1 main.go:246] Created subnet manager: Kubernetes Subnet Manager - nodeName
I0909 14:27:26.441937       1 main.go:249] Installing signal handlers
I0909 14:27:26.442019       1 main.go:390] Found network config - Backend type: vxlan
I0909 14:27:26.442096       1 vxlan.go:121] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E0909 14:27:26.442424       1 main.go:291] Error registering network: failed to acquire lease: node "node-name" pod cidr not assigned
I0909 14:27:26.443295       1 main.go:370] Stopping shutdownHandler...

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 24 (1 by maintainers)

Most upvoted comments

Running a kubectl patch seems to do the job:

kubectl patch node $(hostname) -p '{"spec":{"podCIDR":"10.100.0.1/24"}}'
kubectl delete -f pod-network-flannel.yml
kubectl apply -f pod-network-flannel.yml

But why do I have to manually patch my node for flannel to work? Isn’t it supposed to automagically get the pod subnet from the control plane?
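Automatic assignment only happens when kube-controller-manager is started with node CIDR allocation enabled, which kubeadm wires up from `--pod-network-cidr` (or the `podSubnet` config field). A sketch of the two checks, assuming a kubeadm cluster; `10.244.0.0/16` is flannel's default `Network` in `kube-flannel.yml`:

```shell
# At cluster creation: pass the pod network range so kube-controller-manager
# hands out a per-node spec.podCIDR automatically.
kubeadm init --pod-network-cidr=10.244.0.0/16

# On an existing cluster: verify the controller-manager has CIDR allocation on.
kubectl -n kube-system get pod -l component=kube-controller-manager \
  -o yaml | grep -E 'cluster-cidr|allocate-node-cidrs'
```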

@kele1997 Thank you for the quick reply. I tried what you said, now I get:

can not mix '--config' with arguments [pod-network-cidr]
To see the stack trace of this error execute with --v=5 or higher
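The `can not mix` error is kubeadm refusing to combine `--config` with individual flags. When initializing from a config file, the pod subnet belongs in the file itself. A minimal sketch for kubeadm on 1.19; the `10.244.0.0/16` value is an assumption matching flannel's default:

```yaml
# ClusterConfiguration fragment for `kubeadm init --config <file>`;
# networking.podSubnet replaces the --pod-network-cidr flag.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
```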

$ kubectl logs pod/kube-flannel-ds-x5qgg -n kube-system
I1204 11:18:34.968447       1 main.go:217] CLI flags config: {etcdEndpoints:http://127.0.0.1:4001,http://127.0.0.1:2379 etcdPrefix:/coreos.com/network etcdKeyfile: etcdCertfile: etcdCAFile: etcdUsername: etcdPassword: help:false version:false autoDetectIPv4:false autoDetectIPv6:false kubeSubnetMgr:true kubeApiUrl: kubeAnnotationPrefix:flannel.alpha.coreos.com kubeConfigFile: iface:[] ifaceRegex:[] ipMasq:true subnetFile:/run/flannel/subnet.env subnetDir: publicIP: publicIPv6: subnetLeaseRenewMargin:60 healthzIP:0.0.0.0 healthzPort:0 charonExecutablePath: charonViciUri: iptablesResyncSeconds:5 iptablesForwardRules:true netConfPath:/etc/kube-flannel/net-conf.json setNodeNetworkUnavailable:true}
W1204 11:18:34.968505       1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1204 11:18:35.156069       1 kube.go:120] Waiting 10m0s for node controller to sync
I1204 11:18:35.156105       1 kube.go:378] Starting kube subnet manager
I1204 11:18:36.158592       1 kube.go:127] Node controller sync successful
I1204 11:18:36.158622       1 main.go:237] Created subnet manager: Kubernetes Subnet Manager - master02.fe.me
I1204 11:18:36.158628       1 main.go:240] Installing signal handlers
I1204 11:18:36.158733       1 main.go:459] Found network config - Backend type: vxlan
I1204 11:18:36.158794       1 main.go:651] Determining IP address of default interface
I1204 11:18:36.159132       1 main.go:698] Using interface with name enp0s3 and address 192.168.1.86
I1204 11:18:36.159172       1 main.go:720] Defaulting external address to interface address (192.168.1.86)
I1204 11:18:36.159177       1 main.go:733] Defaulting external v6 address to interface address (<nil>)
I1204 11:18:36.159246       1 vxlan.go:137] VXLAN config: VNI=1 Port=0 GBP=false Learning=false DirectRouting=false
E1204 11:18:36.159477       1 main.go:325] Error registering network: failed to acquire lease: node "master02.fe.me" pod cidr not assigned
W1204 11:18:36.159624       1 reflector.go:424] github.com/flannel-io/flannel/subnet/kube/kube.go:379: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: context canceled") has prevented the request from succeeding
I1204 11:18:36.159633       1 main.go:439] Stopping shutdownHandler...

I’ve been hit by the same issue on a 1.19.4 cluster. It looks like, for some reason, kube-controller-manager doesn’t update the podCIDR of all the nodes (in my 3+3 setup, only 5 out of 6 get a podCIDR), so flannel fails to start on the nodes where it’s missing: failed to acquire lease: node "node-name" pod cidr not assigned

my kubeadm-config configMap looks like this:

    networking:
      dnsDomain: cluster.local
      podSubnet: 10.46.128.0/21
      serviceSubnet: 192.168.1.0/24

and flannel’s one like this:

apiVersion: v1
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.46.128.0/21",
      "Backend": {
        "Type": "vxlan",
        "VNI": 1,
        "Port": 8472
      }
    }
kind: ConfigMap

It does work on a 1.18 cluster, though (all nodes got a podCIDR field), with the same version of flannel, so I’m not sure it’s a flannel issue.
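The podSubnet above can be sanity-checked: kube-controller-manager carves one block per node out of it, using a /24 mask by default (`--node-cidr-mask-size`). A quick sketch, assuming that default mask, shows a /21 yields eight /24 blocks, so six nodes fit comfortably:

```shell
# How many per-node /24 blocks does the 10.46.128.0/21 podSubnet provide?
python3 -c "import ipaddress; subs = list(ipaddress.ip_network('10.46.128.0/21').subnets(new_prefix=24)); print(len(subs)); print(subs[0])"
# prints:
# 8
# 10.46.128.0/24
```

So range exhaustion is unlikely to explain the missing podCIDR on the sixth node; the allocator simply never assigned one.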