cilium: Pods fail to reply to outside IPv6 requests
Bug report
Hosts on the same IPv6 subnet as the Kubernetes Nodes (and other Pods within the cluster) can communicate with Pods without any problem; hosts outside that subnet cannot. This holds for both Pods and Services over IPv6. IPv4 works fine.
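For illustration, the failure looks roughly like this (addresses are placeholders, not from the actual cluster):

```sh
# From a host on the same IPv6 subnet as the Nodes -- succeeds:
ping -6 -c 3 2001:db8:1::10   # Pod IP (placeholder)

# From a host outside that subnet -- times out:
ping -6 -c 3 2001:db8:1::10
```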
Tracing reveals that the request packets arrive at the application Pod and the application responds. However, the reply path fails, with Cilium dropping the packet:

```
xx drop (Stale or unroutable IP) flow 0x2bc1756b to endpoint 0, identity 57868->0: 2605:xxxx:xxxx::e8c0 -> 2601:xxxx:xxxx::7208 EchoReply
```
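Such drops can be watched live on the affected Node; a minimal sketch, assuming the standard quick-install layout where cilium-agent runs in kube-system:

```sh
# <cilium-pod> is the cilium-agent Pod on the Node hosting the endpoint.
kubectl -n kube-system exec -it <cilium-pod> -- cilium monitor --type drop
```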
Debugging with @aanm reveals that (a sketch of the corresponding inspection commands follows this list):
- The `cilium_host` interface on the host Node shares the IP address of the host Node
- The application Pod’s default route points to an IP address on the Pod subnet which does not exist on any host interface
- Manually changing the IP address of `cilium_host` to the otherwise non-existent gateway address on the Pod subnet does not resolve the issue
- The Node is not marked by Kubernetes with its IPv6 address, only its IPv4 address (however, Cilium does accurately discover the IPv6 address)
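For reference, a minimal sketch of the commands behind these observations (Pod and Node names are placeholders; `ip` requires iproute2 inside the container):

```sh
# On the Node: compare the Node's addresses with those on cilium_host.
ip -6 addr show dev cilium_host

# Inside the test Pod: inspect the IPv6 default route and its gateway.
kubectl exec -it <test-pod> -- ip -6 route show default

# Check which addresses Kubernetes registered for the Node
# (here only the IPv4 address shows up).
kubectl get node <node-name> -o jsonpath='{.status.addresses}'
```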
Note that connections originating from the application Pod can communicate bidirectionally with outside IPv6 hosts. Note also that IPv4 traffic works fine in either direction.
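A quick way to confirm the Pod-originated direction works (the destination is a placeholder from the IPv6 documentation prefix; `ping` must be installed in the container):

```sh
kubectl exec -it <test-pod> -- ping -6 -c 3 2001:db8::1
```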
General Information
- Cilium version: v1.7.2 and build from #11240
- Kernel version: v5.5.15-talos
- Orchestration system version in use: Kubernetes v1.18.0
- Link to relevant artifacts (policies, deployment scripts, …): default policies, deployment scripts, etc.; the Cilium config is only very slightly modified:
```diff
$ diff quick-install.yaml quick-install-ref.yaml
45c45
< enable-ipv6: "true"
---
> enable-ipv6: "false"
139,140c139,140
< enable-host-reachable-services: "true"
< enable-external-ips: "true"
---
> enable-host-reachable-services: "false"
> enable-external-ips: "false"
```
- application Pod is just a default `ubuntu` container (`image: ubuntu`)
- System dump: acquired and available via DM
How to reproduce the issue
- Install default dual-stack cluster with custom IPv4 and IPv6 Pod and Service subnets
- Create a test Pod (a minimal manifest is sketched below)
I’m certain there are more significant pieces needed to reproduce the issue, but this is all I am actually doing.
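For completeness, the test Pod is nothing more exotic than the following (the name and command are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ipv6-test
spec:
  containers:
    - name: ubuntu
      image: ubuntu
      command: ["sleep", "infinity"]
```

Applied with `kubectl apply -f pod.yaml` and then exec'd into for the connectivity tests above.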
About this issue
- State: closed
- Created 4 years ago
- Reactions: 3
- Comments: 15 (3 by maintainers)
We no longer support masqueraded IPv6, so this is no longer relevant for me.
No, I’ve since moved on to more recent versions, and I don’t think this is still an issue there.