rke: Calico node networking errors

RKE version:

v0.2.8

Docker version: (docker version, docker info preferred)

Operating system and kernel: (cat /etc/os-release, uname -r preferred)

CentOS 7.6 Kernel 3.10.0-957.1.3.el7.x86_64 and CentOS 7.6 Kernel 3.10.0-957.27.2.el7.x86_64

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)

OpenStack

cluster.yml file:

# Nodes: this is the only required configuration. Everything else is optional.
nodes:
  # Controlplane & Etcd nodes
  - address: 10.253.10.7
    user: ansible
    role:
      - controlplane
      - etcd
    hostname_override: xxxxxxx
  - address: 10.253.10.8
    user: ansible
    role:
      - controlplane
      - etcd
    hostname_override: xxxxxxx
  - address: 10.253.10.9
    user: ansible
    role:
      - controlplane
      - etcd
    hostname_override: xxxxxxx
  # Worker nodes
  - address: 10.253.10.6
    user: ansible
    role:
      - worker
    hostname_override: xxxxxxx
  - address: 10.253.10.4
    user: ansible
    role:
      - worker
    hostname_override: xxxxxxx
  - address: 10.253.10.5
    user: ansible
    role:
      - worker
    hostname_override: xxxxxxx

# Enable use of SSH agent to use SSH private keys with passphrase
# This requires the SSH_AUTH_SOCK environment variable to point to your SSH agent, which must have the private key added
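# For example, on the machine that runs rke (the key path below is only illustrative):
#   eval "$(ssh-agent -s)"
#   ssh-add ~/.ssh/id_rsa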
ssh_agent_auth: true

# Set the name of the Kubernetes cluster
cluster_name: xxxxxxxxxxxx

# Check the supported Kubernetes versions on the rancher/rke GitHub releases page: https://github.com/rancher/rke/releases/
kubernetes_version: v1.15.3-rancher1-1

services:
  etcd:
    backup_config:
      interval_hours: 12
      retention: 6
  kube-api:
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-controller
    service_cluster_ip_range: 10.21.0.0/16
    # Expose a different port range for NodePort services
    service_node_port_range: 30000-32767
    pod_security_policy: false
    extra_args:
      oidc-client-id: "spn:xxxxxxxxxx"
      oidc-issuer-url: "https://sts.windows.net/xxxxxxxxxx/"
      oidc-username-claim: "upn"
      oidc-groups-claim: "groups"
      v: 2
  kube-controller:
    # CIDR pool used to assign IP addresses to pods in the cluster
    cluster_cidr: 10.20.0.0/16
    # IP range for any services created on Kubernetes
    # This must match the service_cluster_ip_range in kube-api
    service_cluster_ip_range: 10.21.0.0/16
    extra_args:
      v: 2
  kubelet:
    # Base domain for the cluster
    cluster_domain: xxxxxxxxxxx
    # IP address for the DNS service endpoint
    cluster_dns_server: 10.21.0.10
    # Fail if swap is on
    fail_swap_on: true
    extra_args:
      v: 2

# Currently, the only supported authentication strategy is x509.
# You can optionally create additional SANs (hostnames or IPs) to add to
#  the API server PKI certificate.
# This is useful if you want to use a load balancer for the control plane servers.
authentication:
  strategy: x509 # Use x509 for cluster administrator credentials and keep them very safe after you've created them
  sans:
    - "xxx.xxx.xxx.xxx"

cloud_provider:
  name: openstack
  openstackCloudProvider:
    global:
      username: xxxxxxxx
      password: xxxxxxxx
      auth-url: xxxxxxx
      tenant-id: xxxxxxx
      domain-id: default
    load_balancer:
      subnet-id: 88a8968f-2d6d-494e-a67e-dab207d068f0
    block_storage:
      bs-version: v3
      trust-device-path: false
      ignore-volume-az: false

# There are several network plug-ins that work, but we default to canal
network:
  plugin: canal

# Specify DNS provider (coredns or kube-dns)
dns:
  provider: coredns

# We disable the ingress controller deployment because we are going to run multiple ingress controllers with our own configuration
ingress:
  provider: none

# All add-on manifests MUST specify a namespace
# addons: ''
# addons_include: []

Steps to Reproduce:

Deploy an empty cluster with RKE
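
For reference, provisioning is just running RKE against the cluster.yml above (a minimal invocation, assuming RKE v0.2.8 is on the PATH and cluster.yml is in the working directory):

$ rke up --config cluster.yml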

Results:

2019-08-29 14:26:48.610 [INFO][9] startup.go 256: Early log level set to info
2019-08-29 14:26:48.610 [INFO][9] startup.go 272: Using NODENAME environment for node name
2019-08-29 14:26:48.610 [INFO][9] startup.go 284: Determined node name: nlsvpkubec01
2019-08-29 14:26:48.614 [INFO][9] k8s.go 228: Using Calico IPAM
2019-08-29 14:26:48.614 [INFO][9] startup.go 316: Checking datastore connection
2019-08-29 14:26:48.630 [INFO][9] startup.go 340: Datastore connection verified
2019-08-29 14:26:48.630 [INFO][9] startup.go 95: Datastore is ready
2019-08-29 14:26:48.655 [INFO][9] startup.go 530: FELIX_IPV6SUPPORT is false through environment variable
2019-08-29 14:26:48.661 [INFO][9] startup.go 181: Using node name: nlsvpkubec01
2019-08-29 14:26:48.693 [INFO][18] k8s.go 228: Using Calico IPAM
CALICO_NETWORKING_BACKEND is none - no BGP daemon running
Calico node started successfully
2019-08-29 14:26:49.845 [WARNING][38] int_dataplane.go 354: Failed to query VXLAN device error=Link not found
2019-08-29 14:26:49.881 [WARNING][38] int_dataplane.go 384: Failed to cleanup preexisting XDP state error=failed to load XDP program (/tmp/felix-xdp-942558251): stat /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A: no such file or directory
libbpf: failed to get EHDR from /tmp/felix-xdp-942558251
Error: failed to open object file
2019-08-29 14:27:03.250 [WARNING][38] health.go 190: Reporter failed readiness checks name="async_calc_graph" reporter-state=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:false}, timestamp:time.Time{wall:0xbf52160db4d62435, ext:13105494327, loc:(*time.Location)(0x2b08080)}}
2019-08-29 14:28:26.819 [WARNING][38] health.go 190: Reporter failed readiness checks name="async_calc_graph" reporter-state=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:false}, timestamp:time.Time{wall:0xbf521622a8ce8c8a, ext:96903670157, loc:(*time.Location)(0x2b08080)}}
2019-08-29 14:29:36.819 [WARNING][38] health.go 190: Reporter failed readiness checks name="async_calc_graph" reporter-state=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:false}, timestamp:time.Time{wall:0xbf5216341ce3e9fd, ext:166703743746, loc:(*time.Location)(0x2b08080)}}
2019-08-29 14:31:06.819 [WARNING][38] health.go 190: Reporter failed readiness checks name="async_calc_graph" reporter-state=&health.reporterState{name:"async_calc_graph", reports:health.HealthReport{Live:true, Ready:true}, timeout:20000000000, latest:health.HealthReport{Live:true, Ready:false}, timestamp:time.Time{wall:0xbf52164aa8e3ca35, ext:256905062112, loc:(*time.Location)(0x2b08080)}}

Most upvoted comments

Any resolution to this? I’m seeing this in one of our test clusters we just upgraded to 1.15.5 using Rancher 2.2.9.

Since upgrading to Rancher v2.3.4 and Kubernetes v1.17.0-rancher1-2 I’m getting Calico errors on some of my nodes—the ones that happen to be virtual machines (Hyper-V). Bare metal ones are fine.

Pod: canal-xyzabc, container calico-node (image rancher/calico-node:v3.10.2):

[…]
2020-01-21 15:57:40.097 [WARNING][38878] int_dataplane.go 776: failed to wipe the XDP state error=failed to load BPF program (/tmp/felix-bpf-457814611): stat /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A: no such file or directory 
libbpf: Error in bpf_object__probe_name():Operation not permitted(1). Couldn't load basic 'r0 = 0' BPF program. 
libbpf: failed to load object '/tmp/felix-bpf-457814611' 
Error: failed to load object file 
 try=8 
2020-01-21 15:57:40.137 [WARNING][38878] int_dataplane.go 776: failed to wipe the XDP state error=failed to load BPF program (/tmp/felix-bpf-090885526): stat /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A: no such file or directory 
libbpf: Error in bpf_object__probe_name():Operation not permitted(1). Couldn't load basic 'r0 = 0' BPF program. 
libbpf: failed to load object '/tmp/felix-bpf-090885526' 
Error: failed to load object file 
 try=9 
2020-01-21 15:57:40.137 [PANIC][38878] int_dataplane.go 779: Failed to wipe the XDP state after 10 tries 
panic: (*logrus.Entry) (0x1a8e900,0xc000186140) 
 
goroutine 1 [running]: 
github.com/sirupsen/logrus.Entry.log(0xc0000d2050, 0xc0001d0f30, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7f6700000000, ...) 
	/go/pkg/mod/github.com/projectcalico/logrus@v0.0.0-20180627202928-fc9bbf2f57995271c5cd6911ede7a2ebc5ea7c6f/entry.go:112 +0x2d2 
github.com/sirupsen/logrus.(*Entry).Panic(0xc0006603c0, 0xc0005d2250, 0x1, 0x1) 
	/go/pkg/mod/github.com/projectcalico/logrus@v0.0.0-20180627202928-fc9bbf2f57995271c5cd6911ede7a2ebc5ea7c6f/entry.go:182 +0x103 
github.com/sirupsen/logrus.(*Entry).Panicf(0xc0006603c0, 0x1b11e1b, 0x2b, 0xc0005d2300, 0x1, 0x1) 
	/go/pkg/mod/github.com/projectcalico/logrus@v0.0.0-20180627202928-fc9bbf2f57995271c5cd6911ede7a2ebc5ea7c6f/entry.go:230 +0xd4 
github.com/sirupsen/logrus.(*Logger).Panicf(0xc0000d2050, 0x1b11e1b, 0x2b, 0xc0005d2300, 0x1, 0x1) 
	/go/pkg/mod/github.com/projectcalico/logrus@v0.0.0-20180627202928-fc9bbf2f57995271c5cd6911ede7a2ebc5ea7c6f/logger.go:173 +0x86 
github.com/sirupsen/logrus.Panicf(...) 
	/go/pkg/mod/github.com/projectcalico/logrus@v0.0.0-20180627202928-fc9bbf2f57995271c5cd6911ede7a2ebc5ea7c6f/exported.go:145 
github.com/projectcalico/felix/dataplane/linux.(*InternalDataplane).shutdownXDPCompletely(0xc0000f6d80) 
	/go/pkg/mod/github.com/projectcalico/felix@v0.0.0-20191003065011-e01caf688c90/dataplane/linux/int_dataplane.go:779 +0x2cd 
github.com/projectcalico/felix/dataplane/linux.(*InternalDataplane).doStaticDataplaneConfig(0xc0000f6d80) 
	/go/pkg/mod/github.com/projectcalico/felix@v0.0.0-20191003065011-e01caf688c90/dataplane/linux/int_dataplane.go:724 +0xc22 
github.com/projectcalico/felix/dataplane/linux.(*InternalDataplane).Start(0xc0000f6d80) 
	/go/pkg/mod/github.com/projectcalico/felix@v0.0.0-20191003065011-e01caf688c90/dataplane/linux/int_dataplane.go:584 +0x2f 
github.com/projectcalico/felix/dataplane.StartDataplaneDriver(0xc0005f4000, 0xc000162390, 0xc000576d20, 0x1, 0xc0005d37c0, 0x0) 
	/go/pkg/mod/github.com/projectcalico/felix@v0.0.0-20191003065011-e01caf688c90/dataplane/driver.go:186 +0xf09 
github.com/projectcalico/felix/daemon.Run(0x1ae3b51, 0x15, 0x1db21b0, 0x7, 0x1e08600, 0x28, 0x1ddf1c0, 0x18) 
	/go/pkg/mod/github.com/projectcalico/felix@v0.0.0-20191003065011-e01caf688c90/daemon/daemon.go:304 +0x18d7 
main.main() 
	/go/src/github.com/projectcalico/node/cmd/calico-node/main.go:102 +0x423 

This seems to be this issue: https://github.com/coreos/flannel/issues/1321

Adding a file /etc/systemd/network/50-flannel.link with the following content should fix the issue:

[Match]
OriginalName=flannel*
[Link]
MACAddressPolicy=none

E.g. with ignition:

    - path: /etc/systemd/network/50-flannel.link
      contents:
        inline: |
          [Match]
          OriginalName=flannel*
          [Link]
          MACAddressPolicy=none
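
On a node that is already affected, the misconfigured flannel.1 interface may also need to be removed so it gets recreated with the expected MAC handling (or simply reboot the node). A sketch, assuming the canal pods carry the k8s-app=canal label used by the RKE-deployed DaemonSet:

# on the affected node
$ sudo ip link delete flannel.1
# restart the canal pod on that node so flannel recreates the interface
$ kubectl -n kube-system delete pod -l k8s-app=canal --field-selector spec.nodeName=<node-name>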

For more context:

Hello, we hit the same error when deploying 1.15.3 with canal. We haven’t seen this error with older k8s versions and canal, nor with 1.15.3 and calico.

I think this is related to https://github.com/projectcalico/calico/issues/2191. Fixed it by disabling IPv6 on the node:

echo 1 > /proc/sys/net/ipv6/conf/default/disable_ipv6
echo 1 > /proc/sys/net/ipv6/conf/all/disable_ipv6
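
To keep the setting across reboots, the equivalent keys can go into a sysctl drop-in (the file name below is only an example):

# /etc/sysctl.d/70-disable-ipv6.conf
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

# then reload sysctl settings from all config files
sysctl --system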

I had this issue as well. I did an empty config gen and copied over the new container versions and that seems to have resolved everything for me.

@imle Could you please provide the exact steps you took?

I had this issue as well. I did an empty config gen and copied over the new container versions and that seems to have resolved everything for me.

This problem seems to be present with Rancher 2.3.0 and 1.15.4.

@olivierlemasle Thank you! This appears to solve our issues!

On a sandbox cluster that had this problem, I was able to recover by doing the following (just fishing, as nothing else worked). I’d advise against trying this unless you are quite sure you can live with a failed cluster, but it worked for me.

# very loosely following https://docs.projectcalico.org/getting-started/kubernetes/flannel/flannel
$ kubectl -n kube-system delete daemonset canal
$ kubectl delete clusterrolebinding  calico-node
$ kubectl delete clusterrolebinding  canal-calico
$ kubectl apply -f https://docs.projectcalico.org/manifests/canal.yaml
$ kubectl create clusterrolebinding canal -n kube-system --clusterrole=cluster-admin --serviceaccount=kube-system:canal
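
Afterwards it is worth checking that the new DaemonSet rolls out cleanly on every node; a quick check, assuming the manifest above keeps the canal name and the k8s-app=canal label:

$ kubectl -n kube-system rollout status daemonset/canal
$ kubectl -n kube-system get pods -l k8s-app=canal -o wide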

Please see https://github.com/rancher/rancher/issues/23430#issuecomment-542611269 and let me know if it resolves the issue.