dns: Pods and headless Services don't get DNS A records without at least one service port

Observation

Per earlier discussion in #70, the kube-dns documentation says that it will publish A records for pods (of the form hostname.subdomain.namespace.svc.cluster.local) so long as a headless Service whose name matches the pods’ subdomain exists in the same namespace. However, if that Service has no ports, kube-dns publishes no such records.

What follows is an example to demonstrate this discrepancy.

Example

We create the following objects in a given namespace:

  • A headless Service named “sub”
    Initially, the Service exposes a single port, 80, named “nonexistent”: the busybox image used here doesn’t run any servers, and the name underscores that it doesn’t matter whether the advertised port actually accepts connections inside the container.
  • Three pods selected by that Service
    A single pod would suffice, but we create three, each with a single container running the busybox image, and each placed in the subdomain “sub” to match the name of the Service:
    • busybox-1
    • busybox-2
    • busybox-3
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: Pod
  metadata: &metadata
    name: busybox-1
    labels:
      app: &app busybox
  spec: &spec
    hostname: host-1
    subdomain: &subdomain sub
    containers:
    - name: busybox
      image: busybox
      command:
      - sleep
      - "3600"
- apiVersion: v1
  kind: Pod
  metadata:
    <<: *metadata
    name: busybox-2
  spec:
    <<: *spec
    hostname: host-2
- apiVersion: v1
  kind: Pod
  metadata:
    <<: *metadata
    name: busybox-3
  spec:
    <<: *spec
    hostname: host-3
- apiVersion: v1
  kind: Service
  metadata:
    name: *subdomain
  spec:
    clusterIP: None
    selector:
      app: *app
    ports:
    # NB: Here the Service has at least one port.
    - name: nonexistent
      port: 80

Assuming that YAML document is available in a file called manifests.yaml, create these objects in some namespace:

kubectl apply -f manifests.yaml

Now, in that same namespace, run a container using an image that provides dig, probing first for DNS A records for our subdomain “sub”:

kubectl run dig --image=tutum/dnsutils \
  --restart=Never --rm=true --tty --stdin --command -- \
  dig sub a +search +noall +answer
; <<>> DiG 9.10.2 <<>> sub a +search +noall +answer
;; global options: +cmd
sub.my-ns.svc.cluster.local. 30	IN A	172.30.48.142
sub.my-ns.svc.cluster.local. 30	IN A	172.30.48.83
sub.my-ns.svc.cluster.local. 30	IN A	172.30.98.82

Next, confirm that records exist for all three of our pod host names:

for i in $(seq 3); do
  kubectl run dig --image=tutum/dnsutils \
    --restart=Never --rm=true --tty --stdin --command -- \
    dig "host-${i}.sub" a +search +noall +answer
done
; <<>> DiG 9.10.2 <<>> host-1.sub a +search +noall +answer
;; global options: +cmd
host-1.sub.my-ns.svc.cluster.local. 30 IN A 172.30.98.82
; <<>> DiG 9.10.2 <<>> host-2.sub a +search +noall +answer
;; global options: +cmd
host-2.sub.my-ns.svc.cluster.local. 30 IN A 172.30.48.83
; <<>> DiG 9.10.2 <<>> host-3.sub a +search +noall +answer
;; global options: +cmd
host-3.sub.my-ns.svc.cluster.local. 30 IN A 172.30.48.142

Next, amend the Service “sub” in manifests.yaml to remove all of its service ports (the *subdomain and *app aliases still resolve, since their anchors are defined earlier in the same file):

apiVersion: v1
kind: Service
metadata:
  name: *subdomain
spec:
  clusterIP: None
  selector:
    app: *app
  ports:
  # NB: Here the Service has no ports.
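
Re-apply the amended manifest so the change takes effect:

kubectl apply -f manifests.yaml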

We then repeat our earlier invocations of dig:

kubectl run dig --image=tutum/dnsutils \
  --restart=Never --rm=true --tty --stdin --command -- \
  dig sub a +search +noall +answer
; <<>> DiG 9.10.2 <<>> sub a +search +noall +answer
;; global options: +cmd
for i in $(seq 3); do
  kubectl run dig --image=tutum/dnsutils \
    --restart=Never --rm=true --tty --stdin --command -- \
    dig "host-${i}.sub" a +search +noall +answer
done
; <<>> DiG 9.10.2 <<>> host-1.sub a +search +noall +answer
;; global options: +cmd
; <<>> DiG 9.10.2 <<>> host-2.sub a +search +noall +answer
;; global options: +cmd
; <<>> DiG 9.10.2 <<>> host-3.sub a +search +noall +answer
;; global options: +cmd

Note how no DNS A records are available for the Service or for any of the pods it selects.

Cause

Why is this so? The method (*KubeDNS).generateRecordsForHeadlessService iterates over the Endpoints.Subsets slice, which the endpoints controller populates only for ports defined on the corresponding Service object. If the Service defines no ports, the Endpoints object has no subsets for any of the selected pods, and so no records are generated.

kubectl get endpoints sub --output=jsonpath='{.subsets}'
[]
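
The control flow amounts to the following. This is a minimal, self-contained Go sketch with stand-in types and a stand-in function name, not the actual kube-dns source:

package main

import "fmt"

// Simplified stand-ins for the Kubernetes API types involved; the real
// definitions live in k8s.io/api/core/v1.
type EndpointAddress struct {
	IP       string
	Hostname string
}

type EndpointSubset struct {
	Addresses []EndpointAddress
}

type Endpoints struct {
	Subsets []EndpointSubset
}

// generateHeadlessRecords mirrors the shape of the kube-dns logic: it walks
// Endpoints.Subsets and emits one A record per address. When the Service has
// no ports, the endpoints controller leaves Subsets empty, so the loop body
// never runs and no records come back.
func generateHeadlessRecords(e Endpoints, svcDomain string) []string {
	var records []string
	for _, subset := range e.Subsets {
		for _, addr := range subset.Addresses {
			records = append(records,
				fmt.Sprintf("%s.%s. 30 IN A %s", addr.Hostname, svcDomain, addr.IP))
		}
	}
	return records
}

func main() {
	// With no ports on the Service, Subsets is empty, exactly as the kubectl
	// output above shows, and the result is an empty record set.
	noPorts := Endpoints{}
	fmt.Println(generateHeadlessRecords(noPorts, "sub.my-ns.svc.cluster.local")) // []
}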

So long as kube-dns is implemented this way, the Service needs at least one port for kube-dns to notice the pods backing it. Hence, we should either adjust the documentation to match the implementation’s constraints, or reconsider the implementation (much harder, as it would duplicate some of the monitoring and filtering work done by the endpoints controller). In the meantime, declaring a single placeholder port, even one that nothing listens on, as with the “nonexistent” port above, is enough to make the records appear.

Environment

kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T19:11:02Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7+", GitVersion:"v1.7.4-30+6c97db85c5ab05", GitCommit:"6c97db85c5ab0586c15be39b3e88c7a425b96947", GitTreeState:"clean", BuildDate:"2017-11-21T09:07:18Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 16 (8 by maintainers)

Most upvoted comments

OK, I found the bug. It’s not pretty to fix, but it’s not terrible. Let me pull a PR together and see about tests.