external-dns: Error with updating NodePort service in GKE

What happened: external-dns cannot create DNS entries when a NodePort service is present

What you expected to happen: The DNS entry is created

How to reproduce it (as minimally and precisely as possible): Create a GKE Kubernetes cluster with the default HTTP load balancer add-on. It creates the following service:

    kube-system   default-http-backend   NodePort   ...   <none>   80:30231/TCP
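For context, a minimal sketch of the external-dns configuration assumed here (the --fqdn-template and --domain-filter values are inferred from the record name and zone in the log below; the project name is a placeholder, not taken from the report):

    # external-dns container args (sketch, placeholder values)
    args:
      - --source=service
      - --provider=google
      - --google-project=my-gcp-project    # placeholder
      - --domain-filter=prd.gcp.web.rcslan.it
      - --fqdn-template={{.Name}}.{{.Namespace}}.wgprxxxgke01.k8s.prd.gcp.web.rcslan.it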

In the external-dns log I find:

    time="2020-04-06T13:00:53Z" level=info msg="Change zone: zone-prd-gcp-web-rcslan-it batch #0"
    time="2020-04-06T13:00:53Z" level=info msg="Add records: default-http-backend.kube-system.wgprxxxgke01.k8s.prd.gcp.web.rcslan.it. CNAME [. ] 300"
    time="2020-04-06T13:00:53Z" level=error msg="googleapi: Error 400: Invalid value for 'entity.change.additions[0].rrdata[1]': '', invalid"

Note the empty CNAME target ("[. ]") that it tries to create; that is wrong.

Anything else we need to know?: As a workaround I can add the following annotation to the NodePort service:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        external-dns.alpha.kubernetes.io/hostname: DUMMY

With that annotation the service is skipped and the DNS entries for the other services are created.
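A one-line way to apply the same annotation in place, assuming the service name and namespace shown above (illustrative, not from the original report):

    kubectl -n kube-system annotate service default-http-backend \
      external-dns.alpha.kubernetes.io/hostname=DUMMY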

Environment: Google GKE 1.15.9-gke.22

  • External-DNS version (use external-dns --version): 0.7.1
  • DNS provider: Google DNS
  • Others:

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 3
  • Comments: 15 (3 by maintainers)

Most upvoted comments

I've run into the same problem and noticed that, by default, external-dns uses the nodes' external IPs when provisioning DNS entries for NodePort services. I fixed the issue by adding the external-dns.alpha.kubernetes.io/access: private annotation. I think there is still a bug, though: the Google provider shouldn't try to add an invalid entry (and should arguably give a better error message).
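For reference, a minimal sketch of that annotation on a Service manifest (the name and namespace are taken from the report above; purely illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: default-http-backend
      namespace: kube-system
      annotations:
        # tell external-dns to use the nodes' internal IPs for the record targets
        external-dns.alpha.kubernetes.io/access: private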