external-dns: external-dns failing on startup with: level=fatal msg="failed to sync cache: timed out waiting for the condition"

I’m trying to get this example working: external-dns with an Azure private DNS zone. It keeps failing on startup with a fatal message: level=fatal msg="failed to sync cache: timed out waiting for the condition"

I have checked other reports of this error and the FAQ to see if I could resolve it. The FAQ suggests this is usually a permissions issue with the service account in the namespace. I’m running everything in the default namespace, and I’ve confirmed that my service principal has the proper RBAC on the DNS zone and that the service principal’s credentials are in the azure.json file mounted at /etc/kubernetes.
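For reference, the azure.json in the azure-config-file secret follows the shape from the azure-private-dns tutorial (values redacted; the field names below are just what I took from that doc, so apologies if I’ve misread it):

{
  "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "subscriptionId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "resourceGroup": "xxxxxxxx",
  "aadClientId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "aadClientSecret": "xxxxxxxx"
}

The secret itself was created with kubectl create secret generic azure-config-file --from-file=azure.json.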

This is the YAML I’m currently using with Kubernetes v1.20.13:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: externaldns-sp

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: externaldns-role
rules:
- apiGroups: [""]
  resources: ["services","endpoints","pods"]
  verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
  resources: ["ingresses"] 
  verbs: ["get","watch","list"]
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: externaldns-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: externaldns-role
subjects:
- kind: ServiceAccount
  name: externaldns-sp
  namespace: default

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: external-dns
  template:
    metadata:
      labels:
        app: external-dns
    spec:
      serviceAccountName: externaldns-sp
      containers:
      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.8.0
        args:
        - --source=service
        - --source=ingress
        - --domain-filter=xxxxxxxxx.privatelink.eastus2.azmk8s.io
        - --provider=azure-private-dns
        - --azure-resource-group=xxxxxxxx
        - --txt-prefix=externaldns-
        - --log-level=debug
        volumeMounts:
        - name: azure-config-file
          mountPath: "/etc/kubernetes"
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file

The logs I’m getting at startup:

time="2022-02-24T00:32:58Z" level=info msg="config: {APIServerURL: KubeConfig: RequestTimeout:30s ContourLoadBalancerService:heptio-contour/contour GlooNamespace:gloo-system SkipperRouteGroupVersion:zalando.org/v1 Sources:[service ingress] Namespace: AnnotationFilter: LabelFilter: FQDNTemplate: CombineFQDNAndAnnotation:false IgnoreHostnameAnnotation:false IgnoreIngressTLSSpec:false Compatibility: PublishInternal:false PublishHostIP:false AlwaysPublishNotReadyAddresses:false ConnectorSourceServer:localhost:8080 Provider:azure-private-dns GoogleProject: GoogleBatchChangeSize:1000 GoogleBatchChangeInterval:1s DomainFilter:[xxxxxxxxxx.privatelink.eastus2.azmk8s.io] ExcludeDomains:[] RegexDomainFilter: RegexDomainExclusion: ZoneNameFilter:[] ZoneIDFilter:[] AlibabaCloudConfigFile:/etc/kubernetes/alibaba-cloud.json AlibabaCloudZoneType: AWSZoneType: AWSZoneTagFilter:[] AWSAssumeRole: AWSBatchChangeSize:1000 AWSBatchChangeInterval:1s AWSEvaluateTargetHealth:true AWSAPIRetries:3 AWSPreferCNAME:false AWSZoneCacheDuration:0s AzureConfigFile:/etc/kubernetes/azure.json AzureResourceGroup:xxxxxxxxxxxxxxxxx AzureSubscriptionID: AzureUserAssignedIdentityClientID: BluecatConfigFile:/etc/kubernetes/bluecat.json CloudflareProxied:false CloudflareZonesPerPage:50 CoreDNSPrefix:/skydns/ RcodezeroTXTEncrypt:false AkamaiServiceConsumerDomain: AkamaiClientToken: AkamaiClientSecret: AkamaiAccessToken: AkamaiEdgercPath: AkamaiEdgercSection: InfobloxGridHost: InfobloxWapiPort:443 InfobloxWapiUsername:admin InfobloxWapiPassword: InfobloxWapiVersion:2.3.1 InfobloxSSLVerify:true InfobloxView: InfobloxMaxResults:0 DynCustomerName: DynUsername: DynPassword: DynMinTTLSeconds:0 OCIConfigFile:/etc/kubernetes/oci.yaml InMemoryZones:[] OVHEndpoint:ovh-eu OVHApiRateLimit:20 PDNSServer:http://localhost:8081 PDNSAPIKey: PDNSTLSEnabled:false TLSCA: TLSClientCert: TLSClientCertKey: Policy:sync Registry:txt TXTOwnerID:default TXTPrefix:externaldns- TXTSuffix: Interval:1m0s MinEventSyncInterval:5s Once:false DryRun:false UpdateEvents:false LogFormat:text MetricsAddress::7979 LogLevel:debug TXTCacheInterval:0s TXTWildcardReplacement: ExoscaleEndpoint:https://api.exoscale.ch/dns ExoscaleAPIKey: ExoscaleAPISecret: CRDSourceAPIVersion:externaldns.k8s.io/v1alpha1 CRDSourceKind:DNSEndpoint ServiceTypeFilter:[] CFAPIEndpoint: CFUsername: CFPassword: RFC2136Host: RFC2136Port:0 RFC2136Zone: RFC2136Insecure:false RFC2136GSSTSIG:false RFC2136KerberosRealm: RFC2136KerberosUsername: RFC2136KerberosPassword: RFC2136TSIGKeyName: RFC2136TSIGSecret: RFC2136TSIGSecretAlg: RFC2136TAXFR:false RFC2136MinTTL:0s NS1Endpoint: NS1IgnoreSSL:false NS1MinTTLSeconds:0 TransIPAccountName: TransIPPrivateKeyFile: DigitalOceanAPIPageSize:50 ManagedDNSRecordTypes:[A CNAME] GoDaddyAPIKey: GoDaddySecretKey: GoDaddyTTL:0 GoDaddyOTE:false}"
time="2022-02-24T00:32:58Z" level=info msg="Instantiating new Kubernetes client"
time="2022-02-24T00:32:58Z" level=debug msg="apiServerURL: "
time="2022-02-24T00:32:58Z" level=debug msg="kubeConfig: "
time="2022-02-24T00:32:58Z" level=info msg="Using inCluster-config based on serviceaccount-token"
time="2022-02-24T00:32:58Z" level=info msg="Created Kubernetes client https://100.64.1.1:443"
time="2022-02-24T00:33:59Z" level=fatal msg="failed to sync cache: timed out waiting for the condition"

Does anyone know what cache is being synced, and/or whether I should be using another namespace (e.g. kube-system, calico-system)?

Is there a way to verify from the pod that the service account has the required privileges?
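(I was thinking of impersonating the service account with kubectl auth can-i, something along these lines, but I’m not sure this covers everything external-dns actually watches:)

# run from a machine with cluster-admin; names match the manifests above
kubectl auth can-i list services --as=system:serviceaccount:default:externaldns-sp
kubectl auth can-i watch endpoints --as=system:serviceaccount:default:externaldns-sp
kubectl auth can-i list ingresses.networking.k8s.io --as=system:serviceaccount:default:externaldns-sp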

Appreciate any guidance on what’s going sideways.

About this issue

  • State: closed
  • Created 2 years ago
  • Reactions: 3
  • Comments: 15 (2 by maintainers)

Most upvoted comments

I solved it simply by using external-dns >= 0.10.0, which the README says is required for Kubernetes 1.22.
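In my case that only meant changing the image tag in the Deployment; I’m not sure of the exact minimum tag, but any v0.10.x or newer should do, e.g.:

      - name: external-dns
        image: k8s.gcr.io/external-dns/external-dns:v0.10.2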

@gpsward any update? I’m also facing issues on 1.22