cilium: networkPlugin cni failed to set up pod network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure range is full

Support

I followed the Cilium guide from the kubeadm docs:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network


However, none of my pods reach Running (most are stuck in ContainerCreating) and I see these error messages:

default          13m         Warning   FailedCreatePodSandBox   pod/pod-to-a-allowed-cnp-5899c44899-lmsr8                    Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "3948297ee605c722e867e3cda372cd2708c16f40e2cb0c036ac543890fee98e6" network for pod "pod-to-a-allowed-cnp-5899c44899-lmsr8": networkPlugin cni failed to set up pod "pod-to-a-allowed-cnp-5899c44899-lmsr8_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          9m7s        Warning   FailedCreatePodSandBox   pod/pod-to-a-766584ffff-zltkl                                (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "9297525a3ffd86763fb9bb914dbdab541da74c62a708e25efac6fb4f67d77508" network for pod "pod-to-a-766584ffff-zltkl": networkPlugin cni failed to set up pod "pod-to-a-766584ffff-zltkl_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          13m         Warning   FailedCreatePodSandBox   pod/pod-to-b-intra-node-7b6cbc6c56-fsl4h                     (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "558875d327cd20a5f940fb7bd2cb68c99d991dee8882a38cc268dbdda92704d8" network for pod "pod-to-b-intra-node-7b6cbc6c56-fsl4h": networkPlugin cni failed to set up pod "pod-to-b-intra-node-7b6cbc6c56-fsl4h_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          9m7s        Warning   FailedCreatePodSandBox   pod/echo-b-55d8dbd74f-2z9d6                                  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "17491a18741f39ec7dfd8811fe7da5262cc66cbe98d708ab2a144f658da2876d" network for pod "echo-b-55d8dbd74f-2z9d6": networkPlugin cni failed to set up pod "echo-b-55d8dbd74f-2z9d6_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          9m6s        Warning   FailedCreatePodSandBox   pod/pod-to-a-l3-denied-cnp-856998c977-hqcqh                  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "a9ad0cfc1626128b722910fc84f9d6f4de9506a483b43005b5abf8e6e78555de" network for pod "pod-to-a-l3-denied-cnp-856998c977-hqcqh": networkPlugin cni failed to set up pod "pod-to-a-l3-denied-cnp-856998c977-hqcqh_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          4m7s        Warning   FailedCreatePodSandBox   pod/pod-to-external-fqdn-allow-google-cnp-bb9597947-mgq8k    (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "e8b1d5715115c33a8349731348699864f3f03b18c4adad729ed379af3d746669" network for pod "pod-to-external-fqdn-allow-google-cnp-bb9597947-mgq8k": networkPlugin cni failed to set up pod "pod-to-external-fqdn-allow-google-cnp-bb9597947-mgq8k_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          9m8s        Warning   FailedCreatePodSandBox   pod/echo-a-dd67f6b4b-4plf4                                   (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "7be20a6ce51b030650275d18c5875a674fe866167f80abf484cd84adc8219d2d" network for pod "echo-a-dd67f6b4b-4plf4": networkPlugin cni failed to set up pod "echo-a-dd67f6b4b-4plf4_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
default          9m7s        Warning   FailedCreatePodSandBox   pod/pod-to-a-allowed-cnp-5899c44899-lmsr8                    (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "b07e557196aa290653220668d2a7be694a29ffea1a1ab8281cb8e0934cb2fc4d" network for pod "pod-to-a-allowed-cnp-5899c44899-lmsr8": networkPlugin cni failed to set up pod "pod-to-a-allowed-cnp-5899c44899-lmsr8_default" network: unable to allocate IP via local cilium agent: [POST /ipam][502] postIpamFailure  range is full
root@master:/# kubectl get pods
NAME                                                    READY   STATUS              RESTARTS   AGE
echo-a-dd67f6b4b-4plf4                                  0/1     ContainerCreating   0          19m
echo-b-55d8dbd74f-2z9d6                                 0/1     ContainerCreating   0          19m
host-to-b-multi-node-clusterip-686f99995d-5l96p         0/1     Pending             0          19m
host-to-b-multi-node-headless-bdbc856d-6x8pc            0/1     Pending             0          19m
pod-to-a-766584ffff-zltkl                               0/1     ContainerCreating   0          19m
pod-to-a-allowed-cnp-5899c44899-lmsr8                   0/1     ContainerCreating   0          19m
pod-to-a-external-1111-55c488465-ctpns                  0/1     ContainerCreating   0          19m
pod-to-a-l3-denied-cnp-856998c977-hqcqh                 0/1     ContainerCreating   0          19m
pod-to-b-intra-node-7b6cbc6c56-fsl4h                    0/1     ContainerCreating   0          19m
pod-to-b-multi-node-clusterip-77c8446b6d-fvrhc          0/1     Pending             0          19m
pod-to-b-multi-node-headless-854b65674d-kpppv           0/1     Pending             0          19m
pod-to-external-fqdn-allow-google-cnp-bb9597947-mgq8k   0/1     ContainerCreating   0          19m
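For reference, the Cilium agent's own logs can be checked for the same IPAM failures (a diagnostic sketch; the k8s-app=cilium label comes from the quick-install manifest used further down, and the exact messages will differ by version):

kubectl -n kube-system logs -l k8s-app=cilium --timestamps | grep -i ipam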

What is wrong with my setup?

Cilium is running fine:

root@master:/# kubectl -n kube-system get pods -l k8s-app=cilium
NAME           READY   STATUS    RESTARTS   AGE
cilium-mjdlv   1/1     Running   0          70m
cilium-pdxq2   1/1     Running   0          66m
root@master:~# kubectl -n kube-system get pods --watch
NAME                                                READY   STATUS      RESTARTS   AGE
cilium-mjdlv                                        1/1     Running     1          77m
cilium-operator-6547f48966-55th5                    1/1     Running     2          77m
cilium-pdxq2                                        1/1     Running     1          73m
coredns-6955765f44-8mgr2                            0/1     Completed   0          78m
coredns-6955765f44-8thnm                            0/1     Completed   0          78m
etcd-master.cloud.company.com                      1/1     Running     1          78m
kube-apiserver-master.cloud.company.com            1/1     Running     1          78m
kube-controller-manager-master.cloud.company.com   1/1     Running     1          78m
kube-proxy-ns9r7                                    1/1     Running     1          78m
kube-proxy-q2gst                                    1/1     Running     1          73m
kube-scheduler-master.cloud.company.com            1/1     Running     1          78m
root@master:~# kubectl -n kube-system exec -ti cilium-mjdlv -- cilium-health status
Probe time:   2020-03-18T22:39:17Z
Nodes:
  master.cloud.company.com (localhost):
    Host connectivity to 10.10.2.166:
      ICMP to stack:   OK, RTT=230.886µs
      HTTP to agent:   OK, RTT=134.256µs
    Endpoint connectivity to 10.217.0.80:
      ICMP to stack:   OK, RTT=212.648µs
      HTTP to agent:   OK, RTT=197.431µs
  rivendell.cloud.company.com:
    Host connectivity to 10.10.10.169:
      ICMP to stack:   OK, RTT=1.492443ms
      HTTP to agent:   OK, RTT=19.781992ms
    Endpoint connectivity to 10.217.1.15:
      ICMP to stack:   OK, RTT=1.507577ms
      HTTP to agent:   OK, RTT=571.535µs
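The health probes only exercise connectivity, not address allocation, so the allocator state has to be inspected separately. The agent CLI can print its view of the node's pod CIDR and the allocated addresses (a sketch; flag availability should be checked against the installed version with cilium status --help):

kubectl -n kube-system exec -ti cilium-mjdlv -- cilium status --verbose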

I used these commands to bootstrap the Kubernetes cluster:

sudo kubeadm init --control-plane-endpoint "kubernetes.cloud.company.com" --upload-certs --pod-network-cidr=10.217.0.0/16

kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.7.1/install/kubernetes/quick-install.yaml
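With host-scope allocation, each agent carves pod IPs out of the PodCIDR that Kubernetes assigned to its node, so it is worth double-checking that every node actually received a slice of the 10.217.0.0/16 range passed to kubeadm (a quick check, assuming the default IPAM mode of the quick-install manifest):

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR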

Host: UpCloud, 2 cores, 4 GB RAM, Debian Buster. Kubernetes version: 1.17.4-00.

Tried restarting all nodes. No success.

By the way, using Calico instead allows the pods to run successfully. But I am willing to give you guys a shot because of this very convincing article: https://mobilabsolutions.com/2019/01/why-we-switched-to-cilium/

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 18 (8 by maintainers)

Most upvoted comments

I encountered this same problem, and it seems that a route for 10.0.0.0/8 via eth1 is enough to get every address blacklisted. The workaround is to run the agent with the --blacklist-conflicting-routes=false option.
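To check whether such a route is present on a node, and to turn the option off through the cilium-config ConfigMap rather than by editing the DaemonSet args (a sketch; this assumes the ConfigMap key mirrors the agent flag, as most cilium-config entries do, and the agents must be restarted to pick it up):

ip route show | grep "10.0.0.0/8"
kubectl -n kube-system edit configmap cilium-config
# add under data:
#   blacklist-conflicting-routes: "false"
kubectl -n kube-system rollout restart daemonset/cilium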

I think this default sucks, as a very broad route (10.0.0.0/8) doesn't really conflict with a pod network of, say, 10.129.0.0/16. To make matters worse, if you use the Helm chart to generate your Kubernetes config and you're using the crd or etcd allocation mechanisms, you can't even fix this easily with a Helm config value. Others have hit this problem too, since both the eni and azure configs in the Helm chart always disable this option.

For me at least, a fix would be to add an option for specifying additional cilium-config parameters in the Helm chart, or at least one that lets me disable this specific option. But in general I think the logic for this “feature” should be changed to ignore routes that are less specific than the IPAM CIDR, so that a 10.0.0.0/8 route doesn't blacklist a 10.129.<node>.0/24 pod CIDR, but a 10.129.<node>.32/26 route does still blacklist that part of the range. A sketch of that rule follows below.
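A minimal sketch of that proposed rule in Go (illustrative only, not Cilium's actual implementation): a route only blacklists addresses when it overlaps the IPAM CIDR and its prefix is at least as specific.

package main

import (
	"fmt"
	"net"
)

// shouldBlacklist returns true only when the route overlaps the IPAM
// CIDR and its prefix is at least as long. A broad 10.0.0.0/8 route is
// then ignored, while a 10.129.0.32/26 route still blacklists its
// slice of a 10.129.0.0/24 pod CIDR.
func shouldBlacklist(route, ipamCIDR *net.IPNet) bool {
	routeLen, _ := route.Mask.Size()
	ipamLen, _ := ipamCIDR.Mask.Size()
	overlaps := route.Contains(ipamCIDR.IP) || ipamCIDR.Contains(route.IP)
	return overlaps && routeLen >= ipamLen
}

func main() {
	_, broad, _ := net.ParseCIDR("10.0.0.0/8") // the offending host route
	_, narrow, _ := net.ParseCIDR("10.129.0.32/26")
	_, ipam, _ := net.ParseCIDR("10.129.0.0/24") // example per-node pod CIDR
	fmt.Println(shouldBlacklist(broad, ipam))  // false: less specific than the pod CIDR
	fmt.Println(shouldBlacklist(narrow, ipam)) // true: carves out part of the pod CIDR
}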