cilium: IPv6 access to node is lost after installing Cilium with IPv6 enabled
Bug report
General Information
- Cilium version (run cilium version):
  Client: 1.9.1 975b66772 2020-12-04T18:16:09+01:00 go version go1.15.5 linux/amd64
  Daemon: 1.9.1 975b66772 2020-12-04T18:16:09+01:00 go version go1.15.5 linux/amd64
- Kernel version (run uname -a):
  Linux srv-oajxq 5.4.0-54-generic #60-Ubuntu SMP Fri Nov 6 10:37:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
- Orchestration system version in use (e.g. kubectl version, Mesos, ...):
  Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:09:25Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
  Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.1", GitCommit:"c4d752765b3bbac2237bf87cf0b1c2e307844666", GitTreeState:"clean", BuildDate:"2020-12-18T12:00:47Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
How to reproduce the issue
Cilium is installed on a kubeadm-provisioned Kubernetes node that obtains its IPv6 address and prefix via SLAAC on ens3.
Cilium is installed via Helm with IPv6 enabled.
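For reference, a minimal sketch of the kind of Helm install used to reproduce this; the chart source, release name, and namespace are illustrative assumptions, and only the IPv6 setting matters here:

$ # Assumption: standard cilium/cilium chart; only ipv6.enabled=true is essential to the repro
$ helm install cilium cilium/cilium \
    --namespace kube-system \
    --set ipv4.enabled=true \
    --set ipv6.enabled=true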
The result is that both ens3 and cilium_host end up with different IPv4 addresses but the same IPv6 address.
$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
link/ether 02:24:19:f1:d2:2a brd ff:ff:ff:ff:ff:ff
inet 10.241.210.42/30 brd 10.241.210.43 scope global dynamic ens3
valid_lft 2473sec preferred_lft 2473sec
inet6 2a02:1348:17c:748a:24:19ff:fef1:d22a/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 3481sec preferred_lft 3481sec
inet6 fe80::24:19ff:fef1:d22a/64 scope link
valid_lft forever preferred_lft forever
3: cilium_net@cilium_host: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether ea:43:1a:66:ba:0c brd ff:ff:ff:ff:ff:ff
inet6 fe80::e843:1aff:fe66:ba0c/64 scope link
valid_lft forever preferred_lft forever
4: cilium_host@cilium_net: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 82:63:9b:3f:b9:20 brd ff:ff:ff:ff:ff:ff
inet 192.168.0.159/32 scope link cilium_host
valid_lft forever preferred_lft forever
inet6 2a02:1348:17c:748a:24:19ff:fef1:d22a/128 scope global
valid_lft forever preferred_lft forever
inet6 fe80::8063:9bff:fe3f:b920/64 scope link
valid_lft forever preferred_lft forever
5: cilium_vxlan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether b2:61:4b:c1:e4:61 brd ff:ff:ff:ff:ff:ff
inet6 fe80::b061:4bff:fec1:e461/64 scope link
valid_lft forever preferred_lft forever
7: lxc_health@if6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 36:09:4a:e2:b3:1d brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet6 fe80::3409:4aff:fee2:b31d/64 scope link
valid_lft forever preferred_lft forever
11: lxc1f951ae922b5@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 42:02:d8:fc:b3:7d brd ff:ff:ff:ff:ff:ff link-netns cni-5c5b4a35-015c-b4e7-8cf0-65094143a9cf
inet6 fe80::4002:d8ff:fefc:b37d/64 scope link
valid_lft forever preferred_lft forever
15: lxc02ded1934caa@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 3e:9f:73:d2:b4:13 brd ff:ff:ff:ff:ff:ff link-netns cni-a7b32eab-3909-902f-3e02-cca756dbc5a8
inet6 fe80::3c9f:73ff:fed2:b413/64 scope link
valid_lft forever preferred_lft forever
17: lxceaca9073e91d@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 22:e4:12:48:59:5e brd ff:ff:ff:ff:ff:ff link-netns cni-f0123261-5a17-ba81-5964-058a1fc742a8
inet6 fe80::20e4:12ff:fe48:595e/64 scope link
valid_lft forever preferred_lft forever
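A quick way to confirm the overlap is to compare the global addresses on the two interfaces (standard iproute2 commands; the address is the one from the dump above):

$ ip -6 addr show dev ens3 scope global
$ ip -6 addr show dev cilium_host scope global

Both report 2a02:1348:17c:748a:24:19ff:fef1:d22a on this node.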
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 2
- Comments: 24 (10 by maintainers)
Commits related to this issue
- bpf: Remove ICMPv6 NS Responder on bpf_host This commit removes the ICMPv6 NS responder from from-netdev, to-netdev, and from-host. Let me explain why this removal won't break anything. First we ne... — committed to jschwinger233/cilium by jschwinger233 a year ago
- bpf: Remove ICMPv6 NS Responder on bpf_host This commit removes the ICMPv6 NS responder from from-netdev, to-netdev, and from-host. Let me explain why this removal won't break anything. First we ne... — committed to cilium/cilium by jschwinger233 a year ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.
Why is this being automatically closed, if the issue was confirmed by multiple people?
We probably had the same situation. Environment:
- Service: Vultr
- OS: Ubuntu 22.04
- Kubernetes: RKE2 v1.25.4+rke2r1
- Cilium: 1.12.4 and 1.13.0-rc2
I tried every configuration and finally found it came down to two settings (see the sketch below):
- bandwidthManager
- kubeProxyReplacement
This is currently my solution.
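For anyone trying the same workaround, a hedged sketch of how those two Helm values can be adjusted; the comment does not say which setting of each resolved it, so the values below (disabling both) are an assumption, as are the release name and namespace:

$ # Assumption: the workaround disables both features; adjust to what works in your setup
$ helm upgrade cilium cilium/cilium \
    --namespace kube-system \
    --reuse-values \
    --set bandwidthManager.enabled=false \
    --set kubeProxyReplacement=disabled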