external-dns: external-dns doesn't find the service

Hey,

I’m trying to get external-dns to work with the Infoblox provider. I tried the example from the tutorial, but external-dns doesn’t create any records.

This is what my service looks like:

$ kubectl -n ingress describe service nginx-ingress-service
Name:			nginx-ingress-service
Namespace:		ingress
Labels:			k8s-svc=nginx-ingress-service
Annotations:		external-dns.alpha.kubernetes.io/hostname=prod.k8s.vcdcc.example.info
Selector:		pod=nginx-ingress-lb
Type:			LoadBalancer
IP:			10.233.12.109
Port:			http	80/TCP
NodePort:		http	31742/TCP
Endpoints:		10.68.69.75:80,10.68.74.204:80,10.68.76.75:80 + 2 more...
Port:			https	443/TCP
NodePort:		https	32204/TCP
Endpoints:		10.68.69.75:443,10.68.74.204:443,10.68.76.75:443 + 2 more...
Session Affinity:	None
Events:			<none>

When I run external-dns with the following flags (plus environment variables):

$ docker run registry.opensource.zalan.do/teapot/external-dns --kubeconfig="/root/config" --source=service --domain-filter=prod.k8s.vcdcc.example.info --provider=infoblox --txt-owner-id=ext-dns-k8s-prod --log-level=debug
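
(The environment variables referred to above are presumably the Infoblox credentials that show up in the config dump below. A hedged sketch of how they could be passed, assuming the EXTERNAL_DNS_-prefixed form used in the Infoblox tutorial; the names and values here are illustrative, not the actual invocation:)

$ docker run \
    -e EXTERNAL_DNS_INFOBLOX_GRID_HOST=infoblox.example.info \
    -e EXTERNAL_DNS_INFOBLOX_WAPI_USERNAME=<user> \
    -e EXTERNAL_DNS_INFOBLOX_WAPI_PASSWORD=<pwd> \
    registry.opensource.zalan.do/teapot/external-dns \
    --kubeconfig="/root/config" --source=service --provider=infoblox ...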

I only get the following output, and nothing is created in Infoblox.

INFO[0000] config: &{Master: KubeConfig:/root/config Sources:[service] Namespace: AnnotationFilter: FQDNTemplate: Compatibility: PublishInternal:false Provider:infoblox GoogleProject: DomainFilter:[prod.k8s.vcdcc.example.info] AWSZoneType: AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup: CloudflareProxied:false InfobloxGridHost:infoblox.example.info InfobloxWapiPort:443 InfobloxWapiUsername:<user> InfobloxWapiPassword:<pwd> InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InMemoryZones:[] Policy:sync Registry:txt TXTOwnerID:ext-dns-k8s-prod TXTPrefix: Interval:1m0s Once:false DryRun:false LogFormat:text MetricsAddress::7979 LogLevel:debug}
INFO[0000] Connected to cluster at https://master01.prod.k8s.vcdcc.example.info:6443 
DEBU[0002] No endpoints could be generated from service default/kubernetes 
DEBU[0002] No endpoints could be generated from service ingress/default-http-backend 
DEBU[0002] No endpoints could be generated from service ingress/nginx-ingress-service 
DEBU[0002] No endpoints could be generated from service kube-system/heapster 
DEBU[0002] No endpoints could be generated from service kube-system/kube-controller-manager-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns 
DEBU[0002] No endpoints could be generated from service kube-system/kube-dns-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kube-scheduler-prometheus-discovery 
DEBU[0002] No endpoints could be generated from service kube-system/kubelet 
DEBU[0002] No endpoints could be generated from service kube-system/kubernetes-dashboard 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-grafana 
DEBU[0002] No endpoints could be generated from service kube-system/monitoring-influxdb 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-main 
DEBU[0002] No endpoints could be generated from service monitoring/alertmanager-operated 
DEBU[0002] No endpoints could be generated from service monitoring/grafana 
DEBU[0002] No endpoints could be generated from service monitoring/kube-state-metrics 
DEBU[0002] No endpoints could be generated from service monitoring/node-exporter 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-k8s 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operated 
DEBU[0002] No endpoints could be generated from service monitoring/prometheus-operator 

Thanks, Robert

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 23 (15 by maintainers)

Most upvoted comments

@nrobert13 ok, that’s why it’s not working: external-dns only creates DNS records pointing to the service’s “External IP / Load Balancer”, and in your case that field is empty. You can either:

Example of a properly filled status field (on AWS):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  creationTimestamp: 2017-01-18T16:03:17Z
  generation: 2
  name: myapp
  namespace: default
  resourceVersion: "41632270"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/myapp
  uid: a003f57b-dd97-1234-8ee7-06af11f8e77b
spec:
  rules:
  - host: myapp.example.org
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
status:
  loadBalancer:
    ingress:
    - hostname: aws-1234-lb-123znwf3n9dgs-1728323123.eu-central-1.elb.amazonaws.com
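
For comparison, since the question uses --source=service, the equivalent signal there is the status.loadBalancer field of the Service itself. A hedged sketch of what the service from the question would look like once a load balancer has actually been provisioned (the IP is a placeholder, not a real value):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-service
  namespace: ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: prod.k8s.vcdcc.example.info
spec:
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 203.0.113.10   # placeholder; filled in by the cloud provider / LB controller

Without an entry under status.loadBalancer.ingress, external-dns has no target address, which is why it logs “No endpoints could be generated” for the service in the output above.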

I set up a cluster on bare-metal machines, but it doesn’t work. The External IP is still pending, and the DNS records don’t change… Need help! 😭
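
A quick way to check whether that status ever gets filled in is to query it directly; if the command below prints nothing, there is no address for external-dns to publish (example using the service from the original question):

$ kubectl -n ingress get service nginx-ingress-service \
    -o jsonpath='{.status.loadBalancer.ingress}'

On bare metal there is no cloud controller to allocate that address, which is why the External IP stays pending.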