ingress-nginx: enabling internal LB is not usable because it needs 2 IngressClasses on one controller
Currently, when you set controller.service.internal.enabled=true, the chart creates 1 Deployment/DaemonSet and 1 IngressClass (plus 1 ServiceMonitor if enabled) but 2 LoadBalancer Services, one external and one internal:
```
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP                                PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   172.20.196.30   LB1-external.elb.us-east-1.amazonaws.com   80:30731/TCP,443:30157/TCP   196d
ingress-nginx-controller-admission   ClusterIP      172.20.131.47   <none>                                     443/TCP                      196d
ingress-nginx-controller-internal    LoadBalancer   172.20.67.174   LB2-internal.elb.us-east-1.amazonaws.com   80:32359/TCP,443:32765/TCP   163m
ingress-nginx-controller-metrics     LoadBalancer   172.20.88.89    LB3-internal.elb.us-east-1.amazonaws.com   10254:32552/TCP              163m
```
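For reference, a minimal values sketch that produces this layout. Key names are from the ingress-nginx Helm chart; the AWS annotation shown is one common way to request an internal load balancer and is an assumption here, not part of the original report:

```yaml
# values.yaml (sketch) -- enables the second, internal LoadBalancer Service
controller:
  service:
    enabled: true            # external LoadBalancer Service
    internal:
      enabled: true          # adds ingress-nginx-controller-internal
      annotations:
        # cloud-specific; on AWS this marks the load balancer as internal
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```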
When using something like external-dns, you currently have no way to distinguish which Service to use, since the ingressClassName is the same for both (it's a single controller). So either DNS would need a way to understand which Service's load balancer to bind (CNAME) to, since both load balancers route Ingress resources fine, or 2 controllers/IngressClasses would need to be created.
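One possible workaround (not a fix for the chart) is the target override annotation that external-dns supports on the Ingress itself, which bypasses the status load balancer entirely. A sketch, reusing this issue's internal LB hostname; the Ingress name, host, and backend Service are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: internal-app          # hypothetical example name
  annotations:
    # external-dns CNAMEs to this value instead of the Ingress status LB
    external-dns.alpha.kubernetes.io/target: LB2-internal.elb.us-east-1.amazonaws.com
spec:
  ingressClassName: nginx
  rules:
    - host: app.internal.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app           # hypothetical backend Service
                port:
                  number: 80
```

This works per-Ingress but means hardcoding the LB hostname, which is exactly the automation gap described above.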
If creating DNS manually, you could point your DNS at the ingress-nginx-controller DNS name or at ingress-nginx-controller-internal, but there is no way to automate this as of now without some sort of annotation that selects the proper Service DNS name to use.
About this issue
- State: open
- Created a year ago
- Reactions: 2
- Comments: 31 (16 by maintainers)
@anuja-kulkarni Look at the next section on the same docs page.
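The section being referred to covers running a second controller with its own IngressClass, which is the "2 controllers/IngressClasses" option from the report. A hedged sketch of the second (internal) release's values, using keys that exist in the ingress-nginx chart; the class name and annotation are illustrative:

```yaml
# values-internal.yaml (sketch) -- for a second Helm release of the chart
controller:
  ingressClass: nginx-internal
  ingressClassResource:
    name: nginx-internal
    # must differ from the first release's controller value
    controllerValue: "k8s.io/ingress-nginx-internal"
  service:
    annotations:
      # cloud-specific; on AWS this marks the load balancer as internal
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```

Ingress resources can then opt into the internal entry point with ingressClassName: nginx-internal, which tools like external-dns can distinguish.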