kind: IPv6 Cluster not resolving kind-control-plane

What happened: I created an IPv6 single-stack cluster and tried to resolve kind-control-plane from inside the cluster, but it does not resolve to anything. In an IPv4 cluster, this works as expected.

What you expected to happen: The DNS record resolves as it does in an IPv4 cluster.

How to reproduce it (as minimally and precisely as possible):

kind create cluster --config ipv6.yaml

with ipv6.yaml containing:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
networking:
  ipFamily: ipv6
  podSubnet: fd00:10:1::/56
  serviceSubnet: fd00:10:2::/112

Then create a test deployment:

kubectl create deploy multitool --image=ghcr.io/dergeberl/multitool-net

Open a shell in the container and run:

dig +short kind-control-plane
dig +short kind-control-plane aaaa

Neither resolves anything.

Doing the same with the following config:

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
networking:
  ipFamily: ipv4
  podSubnet: 10.1.0.0/16
  serviceSubnet: 10.2.0.0/16

yields this output:

# dig +short kind-control-plane
172.18.0.2
# dig +short kind-control-plane aaaa
fc00:f853:ccd:e793::2

Anything else we need to know?: IPv6 in general is working; resolving external domains and Kubernetes-internal DNS records also works.
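For reference, checks along these lines back that up when run from a shell in the multitool pod (a sketch; the external name is only an example):

# External names resolve over IPv6:
dig +short kubernetes.io aaaa

# Kubernetes-internal records resolve:
dig +short kubernetes.default.svc.cluster.local aaaa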

Environment:

  • kind version: (use kind version): 0.14.0
  • Runtime info: (use docker info or podman info):
Client:
 Context:    default
 Debug Mode: false

Server:
 Containers: 1
  Running: 1
  Paused: 0
  Stopped: 0
 Images: 2
 Server Version: 20.10.12
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version:
 runc version:
 init version:
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.15.0-1029-gcp
 Operating System: Ubuntu 22.04.2 LTS
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.35GiB
 Name: felix-gardener-dev
 ID: 4C5N:TWVA:HRYV:WKBA:WAZS:SHAG:OEZD:SECJ:CKDR:SAE6:C3KU:FLMC
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
  • OS (e.g. from /etc/os-release):
PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy
  • Kubernetes version: (use kubectl version):
Client Version: v1.24.3
Kustomize Version: v4.5.4
Server Version: v1.26.0

/cc @einfachnuralex

Most upvoted comments

“we always get a connection refused. It seems like the docker daemon only serves DNS on IPv4.”

Yeah, thanks for the detailed report. I admit I was too lazy to go over all the details, but indeed: if CoreDNS only has IPv6 and docker’s embedded DNS only listens on IPv4, it is impossible for them to communicate unless (off the top of my head) we do some NAT64, run an embedded DNS server on IPv6, or give the CoreDNS pod dual-stack IPs.

Found https://github.com/moby/moby/issues/41651, which is related to the embedded DNS missing IPv6.

In our case, we stopped relying on kind-control-plane and instead inject another hostname into /etc/hosts on all kind nodes and into a CoreDNS hosts configuration. After creating the kind cluster, we look up the IP of the control-plane node on the docker network with docker container inspect and configure our new hostname to resolve to this IP. This workaround is good enough for us and is also portable: it can be used in a second kind cluster on the same network, works for other cluster setups, and can additionally be configured on the host machine.
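A rough sketch of that workaround, assuming the default cluster/network name kind and using a hypothetical hostname control-plane.kind.local (the CoreDNS step is simplified here):

# Look up the control-plane node’s IPv6 address on the kind docker network
CP_IP=$(docker container inspect kind-control-plane \
  --format '{{ (index .NetworkSettings.Networks "kind").GlobalIPv6Address }}')

# Make the extra hostname resolvable on every kind node
for node in $(kind get nodes); do
  docker exec "$node" sh -c "echo '$CP_IP control-plane.kind.local' >> /etc/hosts"
done

# For pods, add a hosts block to the CoreDNS Corefile, e.g.:
#   hosts {
#     <CP_IP> control-plane.kind.local
#     fallthrough
#   }
kubectl -n kube-system edit configmap coredns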

With such an approach, kind could also “manually” inject hostname/IP pairs of all kind nodes into the docker containers and the CoreDNS configuration. This would allow resolving at least the hostnames of nodes in the same cluster, though not of other docker containers in the same network. That being said, we can live with our workaround. Given that switching to dual-stack kind also resolves the issue, I’m skeptical that a workaround in kind for https://github.com/moby/moby/issues/41651 would be worth the effort 😃
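For completeness, a dual-stack config along those lines would look roughly like this (a sketch; the subnets are only examples):

cat <<EOF | kind create cluster --config=-
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  image: kindest/node:v1.26.0
networking:
  ipFamily: dual
  podSubnet: 10.1.0.0/16,fd00:10:1::/56
  serviceSubnet: 10.2.0.0/16,fd00:10:2::/112
EOF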

If CoreDNS only has IPv6 and not IPv4, I fail to see how having a different source stack (IPv4/IPv6) is relevant. The above comment alludes to CoreDNS only having IPv6 and the underlying docker not having the correct capabilities. However, I think it may be prudent to have debug/command output to confirm, as there may still be routing/NAT issues regardless of the source. Please advise, as I’m unaware of the underlying networking stack of kind.

No, we’re talking about dockerd’s embedded DNS listener in https://github.com/kubernetes-sigs/kind/issues/3114#issuecomment-1475966903, which is the upstream of CoreDNS, not CoreDNS itself. https://github.com/kubernetes-sigs/kind/issues/3114#issuecomment-1475966903 mentions the known upstream docker issue of not having an IPv6 listener, and docker’s DNS is the upstream resolver as far as Kubernetes + CoreDNS is concerned here.
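To see that chain, something like the following should show it (a sketch, assuming the default node name):

# The node’s resolv.conf is the upstream CoreDNS ends up using:
docker exec kind-control-plane cat /etc/resolv.conf

# The default Corefile forwards to that file:
kubectl -n kube-system get configmap coredns -o yaml | grep -A1 forward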

I’m sorry, I can’t dig in further at the moment myself; I’m keeping a close eye on the first round of rollouts for https://kubernetes.io/blog/2023/03/10/image-registry-redirect/

https://github.com/kubernetes-sigs/kind/issues/3114#issuecomment-1482339004 is not a solution for me, because I’m not only after kind-control-plane but after DNS that generally works over IPv6.

We patched CoreDNS like the e2e-k8s.sh script does. However, we are still unable to resolve kind’s docker container name. We also tried adding the docker network gateway’s IPv6 address to /etc/resolv.conf instead of the IPv4 address (see the comment above). With this, we always get a connection refused. It seems like the docker daemon only serves DNS on IPv4.
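For context, the gateway addresses in question can be found like this (a sketch; the dig target is a placeholder):

# IPv4 and IPv6 gateways of the kind docker network
docker network inspect kind --format '{{ range .IPAM.Config }}{{ .Gateway }} {{ end }}'

# Querying the IPv6 gateway directly from a pod is what returned connection refused:
dig +short kind-control-plane aaaa @<ipv6-gateway>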

Conclusion: resolving kind’s docker container name from within the pod network only works in IPv4 or dual-stack kind clusters, not in IPv6-only clusters.

fun 😬, cc @aojea