kubernetes: Defining a custom loadBalancerSourceRanges in an AWS NLB service is not respected

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:

A custom loadBalancerSourceRanges on an NLB service is not respected

What you expected to happen:

The security group rules should be created with the defined ranges, not 0.0.0.0/0
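For illustration, a minimal sketch of such a Service (not the reporter's exact manifest; the nlb-test name matches the service in the logs below, and the CIDR and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: nlb-test
  namespace: kube-system
spec:
  type: LoadBalancer
  # Expected: node SG ingress restricted to this CIDR instead of 0.0.0.0/0
  loadBalancerSourceRanges:
  - 203.0.113.10/32   # placeholder CIDR
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nlb-test     # placeholder selector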

About this issue

  • State: closed
  • Created 7 years ago
  • Comments: 24 (15 by maintainers)

Most upvoted comments

@jrnt30 - Your Jan 30 post in this issue should win an award.

You are right, NLBs do not have Security Groups. The current NLB controller opens up the nodePort on the nodes’ security group to 0.0.0.0/0 in order to allow traffic. When a user defines loadBalancerSourceRanges, it should respect the IP CIDRs they specify.

Classic ELBs do have security groups; that case is not covered in this issue.

tl;dr - The current controller doesn’t seem to reconcile updates to loadBalancerSourceRanges for existing target groups properly; however, it does seem to do the right thing when a new target group is created in conjunction with a change.

Initial Deployment

Our initial deployment did not include any restrictions via loadBalancerSourceRanges.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
  name: nginx-ingress-controller
  namespace: kube-system
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress
  sessionAffinity: None
  type: LoadBalancer

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePorts for http and https created and unrestricted (0.0.0.0/0)

Update - Add Net New loadBalancerSourceRanges

We then wanted to lock down our ingress, so we simply added loadBalancerSourceRanges to the spec. This did not produce any changes to the Security Group for the nodes at all.
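For reference, the change was only the new field on the existing spec; a sketch, with the CIDR redacted the same way as in the logs later in this comment:

spec:
  externalTrafficPolicy: Local
  loadBalancerSourceRanges:
  - A.A.A.A/32   # redacted CIDR, matching the LoadBalancerSourceRanges events below
  # (type, ports, selector, etc. unchanged)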

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePorts for http and https remain unrestricted (0.0.0.0/0)

Forced Update - Adjust ports by removing nodePort

We tried “tricking” the Controller into doing additional work by removing the nodePort attribute. This does result in a meaningful change being propagated to the Security Group; however, it also means that a new Target Group is created, which requires the health checks to initialize and results in traffic being dropped until that finishes.
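Roughly, the edit looked like this (a sketch; the nodePort values are the ones visible in the EnsureLoadBalancer log lines below):

ports:
- name: http
  port: 80
  protocol: TCP
  targetPort: http
  # nodePort: 32063 removed here; the apiserver assigns a fresh nodePort (31911
  # per the logs), which in turn creates a new Target Group and new health checks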

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • New nodePorts for the modified ports have explicit ingress rules for the IPs listed in loadBalancerSourceRanges
  • Original nodePorts still have the unrestricted 0.0.0.0/0 rule present

Update - Append new IP to loadBalancerSourceRanges

Adding another IP to the list did not result in any changes.
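As before, this was a one-line append to the spec (placeholders as redacted in the events below):

spec:
  loadBalancerSourceRanges:
  - A.A.A.A/32   # existing entry, already present in the SG
  - D.D.D.D/32   # appended entry; never materialized as an ingress rule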

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePort has ingress for only the initial IP that was defined in the previous step.
  • New IP was not added to the Security Group ingress rules

Update - Remove all IPs

Finally, we tried deleting the loadBalancerSourceRanges altogether. This resulted in no changes to the rules.

Resultant Security Group Ingress Rules

  • healthCheckNodePort created and restricted to VPC CIDR
  • nodePort has ingress for only the initial IP that was defined in the previous step.
  • No rule created for 0.0.0.0/0

Logs

There are some log statements here that also indicate issues. Specifically, it seems the assessment of Additions/Removals is incorrect when adjustments to loadBalancerSourceRanges occur.

Initial Creation

These seemed fine.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:26:48.251487       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15560409", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.213824       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.214234       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 32063} {https TCP 443 {1 0 https} 31994}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.215605       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer

Update with new IP logs

No updates to the SG to remove the 0.0.0.0/0 rules, and no sign of our new rule being introduced.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.215633       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [] -> [A.A.A.A/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463376       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463417       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:44.463534       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:45.965417       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:31:45.965631       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15561167", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of nodePort - Forcing new ingress

Looked pretty good

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.502166       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.502241       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.503075       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15562183", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724077       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724113       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:36.724123       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514453       1 aws_loadbalancer.go:650] Adding rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514496       1 aws_loadbalancer.go:651] Adding rule for client traffic from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514509       1 aws_loadbalancer.go:650] Adding rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514517       1 aws_loadbalancer.go:651] Adding rule for client traffic from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514529       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514539       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514549       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514610       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514623       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.514632       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.577979       1 aws.go:2791] Existing security group ingress: sg-026f781d751fc164b [ {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 32063,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 32063
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "-1",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   UserIdGroupPairs: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       GroupId: "sg-026f781d751fc164b",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       UserId: "182258455885"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       GroupId: "sg-055ab035be8ef7fe4",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       UserId: "182258455885"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 22,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "B.B.B.B/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "10.0.0.0/16"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     },
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "C.C.C.C/32"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 22
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31994,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31994
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 30645,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "10.0.0.0/16",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/health=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 30645
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } ]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.578273       1 aws.go:2819] Adding security group ingress: sg-026f781d751fc164b [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31911,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31911
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31503,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "A.A.A.A/32",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31503
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:38:38.838869       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.861796       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:38:38.862571       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15562183", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Appending of new IP

Looks a bit strange: the logs indicate it is going to remove the existing IP (which should remain, since we just appended another item), when they should be adding the new rule as well.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740474       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740542       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.740978       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32] -> [A.A.A.A/32 D.D.D.D/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.741006       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987059       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987106       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:48.987118       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240474       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240516       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240527       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240632       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240645       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.240655       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32 D.D.D.D/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:45:50.302613       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.324063       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:45:50.324168       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563248", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of IP (which was never propagated to the SG in the first place)

This seems to indicate it would be deleting the incorrect IP; however, nothing actually occurred, because nothing was ever created for IP #2.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.096429       1 service_controller.go:300] Ensuring LB for service kube-system/nlb-test
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.096513       1 aws.go:3247] EnsureLoadBalancer(kops.dev.nbox.site, kube-system, nlb-test, us-east-1, , [{http TCP 80 {1 0 http} 31911} {https TCP 443 {1 0 https} 31503}], map[service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout:60 service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled:true service.beta.kubernetes.io/aws-load-balancer-type:nlb])
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.097251       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32 D.D.D.D/32] -> [A.A.A.A/32]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.097395       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353097       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353137       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:21.353147       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587503       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587540       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587553       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587566       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([A.A.A.A/32]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587630       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.587643       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([A.A.A.A/32]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager W0110 15:49:22.659825       1 aws_loadbalancer.go:725] Revoking ingress was not needed; concurrent change? groupId=sg-026f781d751fc164b
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.703219       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:49:22.703326       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15563772", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer

Removal of ALL IPs from loadBalancerSourceRanges

This indicates a REMOVAL of 0.0.0.0/0 instead of a removal of the existing IPs and the CREATION of an ingress rule on 0.0.0.0/0.

kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:14.947751       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'LoadBalancerSourceRanges' [A.A.A.A/32] -> []
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:14.948198       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'EnsuringLoadBalancer' Ensuring load balancer
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188303       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0a52a2b1038a587f0"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188344       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-064ca6cd08abcce6f"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:15.188513       1 aws.go:3055] Ignoring private subnet for public ELB "subnet-0683d368ce47b5dd6"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250192       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([0.0.0.0/0]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250232       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250244       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250256       1 aws_loadbalancer.go:657] Removing rule for client MTU discovery from the network load balancer ([0.0.0.0/0]) to instances (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250333       1 aws_loadbalancer.go:658] Removing rule for client traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.250346       1 aws_loadbalancer.go:660] Removing rule for health check traffic from the network load balancer ([0.0.0.0/0]) to instance (sg-026f781d751fc164b)
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.318052       1 aws.go:2879] Removing security group ingress: sg-026f781d751fc164b [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 31994,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 31994
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager } {
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   FromPort: 32063,
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpProtocol: "tcp",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   IpRanges: [{
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       CidrIp: "0.0.0.0/0",
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager       Description: "kubernetes.io/rule/nlb/client=a206d0b7914ec11e9a42b0ee286468d7"
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager     }],
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager   ToPort: 32063
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager }]
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.592089       1 service_controller.go:326] Not persisting unchanged LoadBalancerStatus for service kube-system/nlb-test to registry.
kube-controller-manager-ip-10-0-20-77.ec2.internal kube-controller-manager I0110 15:54:16.592838       1 event.go:218] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"nlb-test", UID:"206d0b79-14ec-11e9-a42b-0ee286468d76", APIVersion:"v1", ResourceVersion:"15564496", FieldPath:""}): type: 'Normal' reason: 'EnsuredLoadBalancer' Ensured load balancer