kubernetes: AWS not updating security group for loadBalancerSourceRanges

Updating `loadBalancerSourceRanges` in the Kubernetes Service definition doesn't update the AWS security group.

Expected behavior: the AWS security group should stay in sync whenever the Service definition is updated.

Create a service.yaml file:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: proxy-service-external
  namespace: test
  labels:
    app: test
    component: proxy
spec:
  selector:
    app: test
    component: proxy
  ports:
    - port: 9200
      targetPort: 9200
  type: LoadBalancer
  loadBalancerSourceRanges:
    - a.b.c.d/32
```

Apply the file with `kubectl apply -f service.yaml`. After this, a.b.c.d/32 is visible in the ELB security group.
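To verify from the CLI, something like the following lists the group's inbound CIDRs (the security group ID here is a placeholder; use the one attached to your ELB):

```sh
kubectl apply -f service.yaml

# sg-0123456789abcdef0 is a placeholder for the ELB's security group ID.
aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions[].IpRanges[].CidrIp'
# Output should include "a.b.c.d/32"
```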

Update the file and add another IP:

```yaml
  loadBalancerSourceRanges:
    - a.b.c.d/32
    - e.f.g.h/32
```

Applying this file again doesn't update the security group.
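Re-running the same describe call from above (placeholder group ID again) shows the gap:

```sh
kubectl apply -f service.yaml

aws ec2 describe-security-groups --group-ids sg-0123456789abcdef0 \
  --query 'SecurityGroups[0].IpPermissions[].IpRanges[].CidrIp'
# "e.f.g.h/32" is missing from the output, even though the Service was updated.
```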

Environment:

  • Kubernetes version (use kubectl version): 1.13
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. cat /etc/os-release): CoreOS

About this issue

  • State: closed
  • Created 5 years ago
  • Reactions: 15
  • Comments: 29 (15 by maintainers)

Most upvoted comments

I think this issue needs to be re-opened; it still exists with EKS v1.25. loadBalancerSourceRanges works when creating the NLB, but updating still does not work. Can we have this re-opened, please?

We are facing the issue the other way around: Kubernetes automatically adds rules to allow traffic from 0.0.0.0/0 on ports 80 and 443 in the provided security group. I think when a security group is provided, it shouldn't modify the rules inside it. Seems like a bug.
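For context, a minimal sketch of the scenario being described, assuming the security group is supplied through the `service.beta.kubernetes.io/aws-load-balancer-security-groups` annotation (the group ID is a placeholder):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: proxy-service-external
  namespace: test
  annotations:
    # User-managed security group (placeholder ID); per the report above,
    # rules allowing 0.0.0.0/0 on ports 80/443 still get added to it.
    service.beta.kubernetes.io/aws-load-balancer-security-groups: sg-0123456789abcdef0
spec:
  type: LoadBalancer
  selector:
    app: test
  ports:
    - port: 80
      targetPort: 80
    - port: 443
      targetPort: 443
```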

@arangamani I wasn't able to replicate this in the following version branches either:

  • v1.17
  • v1.16
  • v1.15
  • v1.14

A workaround that doesn't involve deleting the existing NLB:

spec.loadBalancerSourceRanges updates the inbound rules of the worker nodes' security group. Locate the entries created by the NLB Service (their rule descriptions look like kubernetes.io/rule/nlb/client=<your_nlb_name>).

MAKE BACKUPS of said entries and delete them, so that you can restore them in case the step below fails; see the sketch after this paragraph. I didn't delete the kubernetes.io/rule/nlb/health or kubernetes.io/rule/nlb/mtu rules, just the kubernetes.io/rule/nlb/client ones.
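A rough AWS CLI sketch of the backup-and-delete step (SG_ID is a placeholder for the worker nodes' security group):

```sh
SG_ID=sg-0123456789abcdef0  # placeholder: worker-node security group

# Back up only the rules whose description starts with kubernetes.io/rule/nlb/client.
aws ec2 describe-security-groups --group-ids "$SG_ID" \
  --query "SecurityGroups[0].IpPermissions[?IpRanges[?Description && starts_with(Description, 'kubernetes.io/rule/nlb/client')]]" \
  > nlb-client-rules-backup.json

# Delete those rules; the backup file doubles as the revoke payload.
aws ec2 revoke-security-group-ingress --group-id "$SG_ID" \
  --ip-permissions file://nlb-client-rules-backup.json

# If anything goes wrong, the same file restores them:
# aws ec2 authorize-security-group-ingress --group-id "$SG_ID" \
#   --ip-permissions file://nlb-client-rules-backup.json
```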

Update your NLB Service and confirm that the new entries appear in the worker nodes' security group.
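For example, reapplying and then re-checking the group (same placeholder as above):

```sh
kubectl apply -f service.yaml

# The CIDRs from spec.loadBalancerSourceRanges should now be present again.
aws ec2 describe-security-groups --group-ids "$SG_ID" \
  --query 'SecurityGroups[0].IpPermissions[].IpRanges[].CidrIp'
```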

This approach worked for me. Hope it saves someone from having to delete their NLB.