cilium: Cilium unable to mount BPF fs

Nirmoy reported:

level=info msg=" ___|_| |_|_ _ _____"
level=info msg="|  _| | | | | |     |"
level=info msg="|___|_|_|_|___|_|_|_|"
level=info msg="Cilium 1.0.0-rc10  go version go1.9.4 linux/amd64"
level=info msg="Envoy version check disabled"
level=info msg="clang (6.0.0) and kernel (4.12.14) versions: OK!"
level=info msg="linking environment: OK!"
level=info msg="bpf_requirements check: OK!"
level=warning msg="BPF root is not a BPF filesystem (0x62656572 != 0xcafe4a11)" file-path=/sys/fs/bpf
level=info msg="Mounted BPF filesystem /sys/fs/bpf"
level=info msg="Waiting for etcd client to be ready"
level=info msg="Valid label prefix configuration:"
level=info msg=" - :io.kubernetes.pod.namespace"
level=info msg=" - :io.cilium.k8s.namespace.labels"
level=info msg=" - !:io.kubernetes"
level=info msg=" - !:.*kubernetes.io"
level=info msg=" - !:pod-template-generation"
level=info msg=" - !:pod-template-hash"
level=info msg=" - !:controller-revision-hash"
level=info msg=" - !:annotation.cilium.io/"
level=info msg=" - !:annotation.cilium-identity"
level=info msg=" - !:annotation.sidecar.istio.io"
level=info msg="Container runtimes being used: \"docker\" on endpoint \"unix:///var/run/docker.sock\""
level=info msg="Waiting for k8s api-server to be ready..." subsys=k8s
[...]
level=info msg="  IPv6 router address: f00d::ac1c:400:0:81d6"
level=error msg="bpf: Unable to update in tunnel endpoint map" error="Unable to get object /sys/fs/bpf/tc/globals/tunnel_endpoint_map: no such file or directory" ipAddr=172.16.xxx.xxx/23
level=error msg="bpf: Unable to update in tunnel endpoint map" error="Unable to get object /sys/fs/bpf/tc/globals/tunnel_endpoint_map: no such file or directory" ipAddr="f00d::ac1c:400:0:0/112"
level=info msg="Loopback IPv4: 172.16.xxx.xxx"
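A quick way to confirm whether the pinned objects from the errors above actually exist is to list the tc globals directory on the BPF fs (a sketch, assuming the default Cilium pin path `/sys/fs/bpf/tc/globals`):

```shell
# List BPF objects Cilium pins under the tc globals directory;
# tunnel_endpoint_map should appear here once the agent datapath is up.
echo "pinned objects under /sys/fs/bpf/tc/globals:"
ls /sys/fs/bpf/tc/globals/ 2>/dev/null \
  || echo "(none: bpffs not mounted or agent not running)"
```

In the failure above this directory does not exist, matching the "no such file or directory" errors.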

Whereas it worked fine on cilium/stable.

BPF fs was not mounted by Cilium:

caasp-master-0:~ # mount | grep bpf
caasp-master-0:~ # find /sys/fs/bpf/
/sys/fs/bpf/

Manually mounting the BPF fs and restarting Cilium (kubectl delete -f cilium.yaml, then recreate) resolved the issue:
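For reference, the manual mount can be done like this (a sketch; mounting requires root, and the `/proc/mounts` check simply skips the mount if bpffs is already there):

```shell
# Mount bpffs at the conventional path unless it is already mounted
if grep -qs '/sys/fs/bpf bpf' /proc/mounts; then
  echo "bpffs already mounted"
else
  mount bpffs /sys/fs/bpf -t bpf 2>/dev/null \
    && echo "bpffs mounted" \
    || echo "mount failed (need root?)"
fi
```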

caasp-master-0:~ # mount | grep bpf
bpf on /sys/fs/bpf type bpf (rw,relatime)

On top of: https://github.com/cilium/cilium/commit/42b330c4a14358a7100b70c82e39169ad7bcdd7f

Potentially a bug in mountFS(): the mount could have failed for some reason, but the daemon assumed it was mounted when in reality it wasn't. 0x62656572 is the sysfs superblock magic, so mounting the BPF fs here should have been fine.
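The magic-number check from the warning above can be reproduced from the command line with GNU stat, which prints the superblock magic of the filesystem backing a path (0xcafe4a11 is BPF_FS_MAGIC, 0x62656572 is SYSFS_MAGIC, both from linux/magic.h; the fallback to `/` is only so the sketch runs on hosts without `/sys/fs/bpf`):

```shell
# Print the filesystem superblock magic for /sys/fs/bpf:
#   0xcafe4a11 -> bpffs is mounted
#   0x62656572 -> plain sysfs, i.e. nothing is mounted over /sys/fs/bpf
for p in /sys/fs/bpf /; do
  stat -f -c "$p magic: 0x%t" "$p" 2>/dev/null && break
done
```

Seeing 0x62656572 here is exactly the state the daemon logged: the warning fired, but the subsequent "Mounted BPF filesystem" message did not reflect reality.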

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 23 (23 by maintainers)

Most upvoted comments

although if we go that route we might as well switch unconditionally to some private /run/cilium/bpf/fs mount instance

That’s a good idea too. I think we can use the names /run/cilium/bpf/fs_host and /run/cilium/bpf/fs_internal; it will make debugging easier, and even a glance at /proc/mounts will make it clear which kind of mount we are using.