kubernetes: Defining a custom loadBalancerSourceRanges in an AWS NLB service is not respected
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
Creating an NLB service with a custom `loadBalancerSourceRanges` results in the specified ranges not being respected.
What you expected to happen:
The security group should be created with the defined range and not `0.0.0.0/0`.
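For context, a minimal sketch of the kind of Service that triggers this (the name, selector, ports, and CIDR below are illustrative, not from the original report):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nlb                  # illustrative name
  annotations:
    # Request an NLB from the in-tree AWS cloud provider (instead of a Classic ELB)
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app                 # illustrative selector
  ports:
    - name: https
      port: 443
      targetPort: 8443
  # Expected: ingress restricted to this CIDR.
  # Observed: the nodes' security group is still opened to 0.0.0.0/0.
  loadBalancerSourceRanges:
    - 203.0.113.0/24
```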
@jrnt30 - Your Jan 30 post in this issue should win an award.
You are right, NLBs do not have Security Groups. The current NLB controller opens up the `nodePort` on the nodes' security group to `0.0.0.0/0` in order to allow traffic. When a user defines `loadBalancerSourceRanges`, it should respect the IP CIDRs they specify. Classic ELBs do have security groups; that is not covered in this issue.
tl;dr - The current controller doesn't seem to reconcile updates to the `loadBalancerSourceRanges` for existing target groups properly; however, it does seem to do the right thing if a new target group is created in conjunction with the change.

Initial Deployment
Our initial deployment did not include any restrictions via `loadBalancerSourceRanges`.
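Roughly, the initial spec looked like this (values illustrative; `externalTrafficPolicy: Local` is an assumption, implied by the presence of a `healthCheckNodePort`):

```yaml
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # assumption: Local is what allocates a healthCheckNodePort
  ports:
    - name: http
      port: 80
      targetPort: 8080
      nodePort: 30080           # illustrative explicit nodePort
    - name: https
      port: 443
      targetPort: 8443
      nodePort: 30443           # illustrative explicit nodePort
  # no loadBalancerSourceRanges at this point
```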
Resultant Security Group Ingress Rules

- `healthCheckNodePort` created and restricted to the VPC CIDR
- `nodePort`s for `http` and `https` created and unrestricted (`0.0.0.0/0`)
Update - Add Net New `loadBalancerSourceRanges`
We then had a desire to lock down our ingress, and simply added the `loadBalancerSourceRanges` to the `spec`. This did not result in any changes to the nodes' Security Group at all.
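The change amounted to adding something like this to the live Service (CIDR illustrative):

```yaml
spec:
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # expected the nodes' SG to be restricted to this; nothing changed
```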
Resultant Security Group Ingress Rules

- `healthCheckNodePort` created and restricted to the VPC CIDR
- `nodePort`s for `http` and `https` created and unrestricted (`0.0.0.0/0`)
Forced Update - Adjust `ports` by removing `nodePort`
We tried “tricking” the Controller into doing additional work by removing the `nodePort` attribute. This does result in a meaningful change being propagated to the Security Group; however, it also means that a new Target Group is created, which requires the health checks to initialize and results in traffic being dropped until that finishes.
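In other words, a change along these lines (values illustrative):

```yaml
spec:
  ports:
    - name: https
      port: 443
      targetPort: 8443
      # nodePort: 30443 <- removed; a new nodePort is allocated, a new Target Group
      # is created, and traffic drops until its health checks initialize
```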
Resultant Security Group Ingress Rules

- `healthCheckNodePort` created and restricted to the VPC CIDR
- the modified `nodePort` has explicit ingress rules for the IPs listed in `loadBalancerSourceRanges`
- `nodePort`s *still have* the unrestricted `0.0.0.0/0` rule present
Update - Append new IP to `loadBalancerSourceRanges`
Adding another IP to the list did not result in any changes.
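That is, extending the list roughly like this (second CIDR illustrative):

```yaml
spec:
  loadBalancerSourceRanges:
    - 203.0.113.0/24     # already present
    - 198.51.100.0/24    # appended; never showed up in the security group
```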
Resultant Security Group Ingress Rules

- `healthCheckNodePort` created and restricted to the VPC CIDR
- `nodePort` has ingress for only the initial IP that was defined in the previous step
Update - Remove all IPs
Finally, we tried deleting the `loadBalancerSourceRanges` altogether. This resulted in no changes to the rules.
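So the spec ended up back at roughly its original shape (sketch; values illustrative):

```yaml
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: 8443
  # loadBalancerSourceRanges removed entirely; expected the controller to fall
  # back to 0.0.0.0/0, but the stale single-IP rule remained
```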
Resultant Security Group Ingress Rules

- `healthCheckNodePort` created and restricted to the VPC CIDR
- `nodePort` has ingress for only the initial IP that was defined in the previous step

Logs
There are some log statements here that also indicate issues. Specifically, it seems the assessment of Additions/Removals is incorrect when adjustments to the `loadBalancerSourceRanges` occur.

Initial Creation
These seemed fine.
Update with new IP logs
No updates to the SG ID to remove the `0.0.0.0/0` rules, nor any introduction of our new rule.

Removal of node port - Force of new ingress
Looked pretty good
Appending of new IP
Looks a bit strange: the logs indicate it's going to remove the existing IP (which should remain, since we just appended another item) and that it will add the new rule as well.
Removing of IP (wasn’t propagated to SG ID in the first place)
This seems to indicate it would be deleting the incorrect IP; however, nothing actually occurred, because nothing was ever created for IP #2.
Removal of ALL IPs from `loadBalancerSourceRanges`

This indicates a REMOVAL of `0.0.0.0/0` instead of a removal of the existing IPs and the CREATION of an ingress rule on `0.0.0.0/0`.