kops: Altering autoscaling group in AWS makes DNS faulty

Thanks for submitting an issue! Please fill in as much of the template below as you can.

------------- BUG REPORT TEMPLATE --------------------

  1. What kops version are you running? The command kops version will display this information.

kops version 1.7.1

  2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.

  3. What cloud provider are you using? AWS

  4. What commands did you run? What is the simplest way to reproduce this issue? I changed the launch configuration in the autoscaling group for my nodes (workers) and terminated the old instances, so new instances were spawned. (I upgraded the instance type.)
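
For reference, the kops-managed way to do this is to change the instance group rather than the launch configuration directly; a minimal sketch, assuming the instance group is called nodes and $CLUSTER holds the cluster name (both placeholders):

  # Change machineType in the instance group spec, then apply and roll the nodes
  kops edit ig nodes --name $CLUSTER
  kops update cluster $CLUSTER --yes
  kops rolling-update cluster $CLUSTER --yes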

  5. What happened after the commands executed? The nodes auto-joined the cluster, but there are intermittent DNS issues when attempting to resolve DNS names from some of my pods.

I checked the dns-controller in the kube-system namespace and here is what I noticed (I have replaced my DNS zone name with cluster.dns below):

Found multiple zones for name "cluster.dns", won't manage zone (To fix: provide zone mapping flag with ID of zone)
dnscontroller.go:611] Update desired state: node/ip-X-X-X-X.us-west-2.compute.internal: [{A node/ip-X-X-X-X.us-west-2.compute.internal/internal X.X.X.X true} {A node/role=node/internal 10.1.54.136 true} {A node/role=node/ ip-X-X-X-X.us-west-2.compute.internal true} {A node/role=node/ ip-X-X-X-X.us-west-2.compute.internal true}]
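
This warning usually means more than one Route 53 hosted zone matches the name (for example a public and a private zone), so dns-controller refuses to pick one. A hedged sketch of pinning it via the cluster spec, assuming spec.dnsZone accepts the hosted zone ID (Z2ABCDEFGHIJKL is a placeholder):

  # Open the cluster spec and set the hosted zone ID explicitly
  kops edit cluster $CLUSTER
  #   spec:
  #     dnsZone: Z2ABCDEFGHIJKL
  kops update cluster $CLUSTER --yes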

I also noticed this error in one of the kube-dns sidecar pods:

ERROR: logging before flag.Parse: W1212 23:30:15.361488 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58316->127.0.0.1:53: read: connection refused
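
In case it helps to narrow that down, a few hedged kubectl checks on the kube-dns pods; the label and the dnsmasq container name are assumptions based on the stock kube-dns deployment, and <kube-dns-pod> is a placeholder:

  # See whether any kube-dns container is crash-looping or restarting
  kubectl -n kube-system get pods -l k8s-app=kube-dns
  kubectl -n kube-system describe pod <kube-dns-pod>
  # Ask dnsmasq itself why it is refusing connections on 127.0.0.1:53
  kubectl -n kube-system logs <kube-dns-pod> -c dnsmasq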

I am using kube-router as my CNI, and the kube-dns autoscaler also throws some errors:

E1213 01:33:16.911664 1 autoscaler_server.go:86] Error while getting cluster status: Get https://100.64.0.1:443/api/v1/nodes: dial tcp 100.64.0.1:443: i/o timeout
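
That timeout is to the kubernetes service ClusterIP, so it points at pod-to-service networking on the affected nodes rather than at DNS itself. A hedged sketch of what I would check, assuming the kube-router daemonset is labelled k8s-app=kube-router:

  # Confirm the new nodes are Ready and have a kube-router pod running on them
  kubectl get nodes -o wide
  kubectl -n kube-system get pods -l k8s-app=kube-router -o wide
  # <kube-router-pod-on-new-node> is a placeholder for one of those pods
  kubectl -n kube-system logs <kube-router-pod-on-new-node>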

  6. What did you expect to happen? DNS should be fine. I am guessing I am unaware of something that manages DNS in my cluster.

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 16 (2 by maintainers)

Most upvoted comments

You need kops 1.8; that has the fix. The AMI does not matter, but personally I would do both. You can do a rolling update. But I am not certain that is your problem. Test 😉
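
A hedged sketch of that rolling-update path, after installing the kops 1.8 binary ($CLUSTER is a placeholder for the cluster name):

  # Re-render the cluster with the new kops binary, then roll the instances
  kops update cluster $CLUSTER --yes
  kops rolling-update cluster $CLUSTER --yes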