kubernetes: Defining a custom loadBalancerSourceRanges in an AWS NLB service is not respected
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Creating an NLB service with a custom `loadBalancerSourceRanges` does not respect the specified ranges
What you expected to happen:
The security group should be created with the defined range and not `0.0.0.0/0`.
About this issue
- State: closed
- Created 7 years ago
- Comments: 24 (15 by maintainers)
@jrnt30 - Your Jan 30 post in this issue should win an award.
You are right, NLBs do not have Security Groups. The current NLB controller opens up the `nodePort` on the nodes' security group to `0.0.0.0/0` in order to allow traffic. When a user defines `loadBalancerSourceRanges`, it should respect the IP CIDRs they specify. Classic ELBs do have security groups; those are not covered in this issue.
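For reference, a Service of the shape under discussion looks roughly like the sketch below; the name, selector, and CIDR are illustrative assumptions, not values from the original report:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-nlb                 # hypothetical name
  annotations:
    # Opts this LoadBalancer Service into the NLB implementation.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: example                    # hypothetical selector
  ports:
    - name: http
      port: 80
      targetPort: 8080
  # Expected: the controller restricts the ingress rules it creates on the
  # nodes' security group to these CIDRs instead of using 0.0.0.0/0.
  loadBalancerSourceRanges:
    - 203.0.113.0/24                # illustrative CIDR
```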
tl;dr - The current controller doesn't seem to reconcile updates to the `loadBalancerSourceRanges` for existing target groups properly; however, it does seem to do the right thing if a new target group is created in conjunction with a change.

Initial Deployment
Our initial deployment did not include any restrictions via `loadBalancerSourceRanges` (the starting spec is sketched after the rules below).

Resultant Security Group Ingress Rules
- `healthCheckNodePort` created and restricted to VPC CIDR
- `nodePort`s for `http` and `https` created and unrestricted (`0.0.0.0/0`)
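A sketch of the relevant portion of that initial spec, assuming plain `http`/`https` ports (all port numbers illustrative):

```yaml
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30080     # illustrative; opened to 0.0.0.0/0 on the nodes' SG
    - name: https
      port: 443
      targetPort: 8443
      nodePort: 30443     # illustrative; opened to 0.0.0.0/0 on the nodes' SG
  # no loadBalancerSourceRanges set at this point
```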
Update - Add Net New `loadBalancerSourceRanges`
We then had a desire to lock down our ingress and simply added the `loadBalancerSourceRanges` to the `spec` (see the sketch after the rules below). This did not result in any changes to the Security Group for the nodes at all.

Resultant Security Group Ingress Rules
- `healthCheckNodePort` created and restricted to VPC CIDR
- `nodePort`s for `http` and `https` created and unrestricted (`0.0.0.0/0`)
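The update itself was just adding the field to the otherwise unchanged spec, along these lines (CIDR illustrative):

```yaml
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 8443
      nodePort: 30443
  loadBalancerSourceRanges:         # newly added on update
    - 203.0.113.0/24                # observed: no SG changes were made for this
```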
Forced Update - Adjust `ports` by removing `nodePort`
We tried "tricking" the Controller into doing additional work by removing the `nodePort` attribute (sketched below). This does result in a meaningful change being propagated to the Security Group; however, it also means that a new Target Group is created, which requires the health checks to initialize and results in traffic being dropped until that finishes.

Resultant Security Group Ingress Rules
- `healthCheckNodePort` created and restricted to VPC CIDR
- the modified `nodePort` has explicit ingress rules for the IPs listed in `loadBalancerSourceRanges`
- `nodePort`s *still have* the unrestricted `0.0.0.0/0` rule present
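The "trick" was simply dropping the explicit `nodePort` assignments, roughly as below (same illustrative values as before):

```yaml
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      # nodePort omitted on update; a new port gets allocated, which is
      # what forces the new Target Group (and the health-check window)
    - name: https
      port: 443
      targetPort: 8443
  loadBalancerSourceRanges:
    - 203.0.113.0/24
```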
Update - Append new IP to `loadBalancerSourceRanges`
Adding another IP to the list (sketched below) did not result in any changes.

Resultant Security Group Ingress Rules
- `healthCheckNodePort` created and restricted to VPC CIDR
- `nodePort` has ingress for only the initial IP that was defined in the previous step.
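Appending the second entry looked like this fragment of the spec (second CIDR illustrative):

```yaml
  loadBalancerSourceRanges:
    - 203.0.113.0/24      # initial entry, already reflected in the SG
    - 198.51.100.0/24     # appended entry; observed: never added to the SG
```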
Update - Remove all IPs
Finally, we tried deleting the `loadBalancerSourceRanges` altogether. This resulted in no changes to the rules.

Resultant Security Group Ingress Rules
- `healthCheckNodePort` created and restricted to VPC CIDR
- `nodePort` has ingress for only the initial IP that was defined in the previous step.
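And the final state, with the field deleted entirely; an absent or empty `loadBalancerSourceRanges` is documented to mean "allow all sources", so the controller should have swapped the specific IP rules back to `0.0.0.0/0`:

```yaml
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: https
      port: 443
      targetPort: 8443
  # loadBalancerSourceRanges removed entirely
  # observed: no changes to the existing ingress rules
```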
Logs
There are some log statements here that also indicate some issues. Specifically, it seems the assessment of additions/removals is incorrect when adjustments to the `loadBalancerSourceRanges` occur.

Initial Creation
These seemed fine.
Update with new IP logs
No updates to the SG ID to remove the `0.0.0.0/0` rules or to introduce our new rule.

Removal of node port - Force of new ingress
Looked pretty good
Appending of new IP
Looks a bit strange: the logs indicate the controller is going to remove the existing IP (which should remain, since we just appended another item) and that it should add the new rule as well.
Removal of IP (wasn't propagated to SG ID in the first place)
This seems to indicate it would be deleting the incorrect IP; however, nothing actually occurred because nothing was ever created for IP #2.
Removal of ALL IPs from `loadBalancerSourceRanges`
This indicates a REMOVAL of `0.0.0.0/0` instead of a removal of the existing IPs and the CREATION of an ingress on `0.0.0.0/0`.