ingress-nginx: Multiple controllers in a single namespace: duplicate resource names
NGINX Ingress controller version
NGINX Ingress controller
Release: v1.0.0
Build: 041eb167c7bfccb1d1653f194924b0c5fd885e10
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.20.1
- How was the ingress-nginx-controller installed:
Using Helm, multiple controllers, each in its own namespace.
# helm ls -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ingress-public-external ingress-public-external 1 2021-09-17 11:25:28.212212 +0100 BST deployed ingress-nginx-4.0.1 1.0.0
ingress-public-internal ingress-public-internal 1 2021-09-17 11:25:04.133381 +0100 BST deployed ingress-nginx-4.0.1 1.0.0
ingress-public-external:
# helm -n ingress-public-external get values ingress-public-external
USER-SUPPLIED VALUES:
controller:
config:
add-headers: ingress-public-external/ingress-public-external-custom-headers
hsts: false
extraArgs:
controller-class: k8s.io/ingress-public-external
ingress-class: ingress-public-external
ingressClassResource:
controllerValue: k8s.io/ingress-public-external
default: false
enabled: true
name: ingress-public-external
service:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: REDACTED
externalTrafficPolicy: Local
loadBalancerIP: REDACTED
fullnameOverride: ingress-public-external
# helm -n ingress-public-internal get values ingress-public-internal
USER-SUPPLIED VALUES:
controller:
config:
add-headers: ingress-public-internal/ingress-public-internal-custom-headers
hsts: false
use-forwarded-headers: true
extraArgs:
controller-class: k8s.io/ingress-public-internal
ingress-class: ingress-public-internal
ingressClassResource:
controllerValue: k8s.io/ingress-public-internal
default: false
enabled: true
name: ingress-public-internal
service:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
externalTrafficPolicy: Local
fullnameOverride: ingress-public-internal
I used Terraform to install the Helm charts, but this is broadly equivalent to helm install --values file.yaml
with the above contents.
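For reference, the non-Terraform equivalent would look roughly like this (a sketch; the values file names are assumptions, and the chart version is taken from the `helm ls` output above):

```shell
# Hypothetical equivalent of the Terraform-driven installs.
# values-external.yaml / values-internal.yaml are assumed to contain
# the USER-SUPPLIED VALUES shown above for each release.
helm install ingress-public-external ingress-nginx/ingress-nginx \
  --namespace ingress-public-external --create-namespace \
  --version 4.0.1 \
  --values values-external.yaml

helm install ingress-public-internal ingress-nginx/ingress-nginx \
  --namespace ingress-public-internal --create-namespace \
  --version 4.0.1 \
  --values values-internal.yaml
```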
- Config maps:
# ingress-public-external
# k --namespace ingress-public-external get configmaps
NAME DATA AGE
ingress-controller-leader 0 35m <== THIS
ingress-public-external-controller 2 35m
ingress-public-external-custom-headers 1 35m
kube-root-ca.crt 1 35m
# ingress-public-internal
# k --namespace ingress-public-internal get configmaps
NAME DATA AGE
ingress-controller-leader 0 36m <== THIS
ingress-public-internal-controller 3 36m
ingress-public-internal-custom-headers 1 36m
kube-root-ca.crt 1 36m
What happened:
I wanted to have all ingress controllers in a single namespace. I use fullnameOverride
with Helm to make sure all names are distinct for each controller.
What I noticed, though, is that a ConfigMap called ingress-controller-leader is created
by each Helm release, with a hard-coded name. Of course, if I install multiple controller charts into the same namespace, there will be a conflict, right?
So in the meantime I have to use a separate namespace for each controller.
What you expected to happen:
fullnameOverride
should be applied to all resource names created by the Helm chart.
Workaround:
I notice that the name of the config map comes from this bit in values.yaml
in the chart:
## Election ID to use for status update
##
electionID: ingress-controller-leader
And so if I do specify an override for it when releasing the Helm chart, it does indeed change to my supplied value:
# snippet from terraform
fullnameOverride: "${var.name}"
controller:
electionID: '${var.name}-leader' # <== THIS
# k --namespace ingress-public-external get configmaps
NAME DATA AGE
ingress-controller-leader 0 58m <== OLD ONE
ingress-public-external-controller 2 58m
ingress-public-external-custom-headers 1 59m
ingress-public-external-leader 0 12m <== NEW ONE
kube-root-ca.crt 1 59m
Notice that changing the electionID
value creates a new ConfigMap but does not remove the old one. I suspect that's because it isn't created as a resource in the chart, but by the /nginx-ingress-controller
binary during startup or something.
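Since the old ConfigMap isn't owned by the Helm release, it has to be cleaned up by hand; something like this (the namespace and name match the output above):

```shell
# The stale leader ConfigMap is created at runtime by the controller,
# not rendered by the chart, so helm upgrade/uninstall will not remove
# it. Delete it manually after changing electionID:
kubectl --namespace ingress-public-external \
  delete configmap ingress-controller-leader
```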
Questions
Additional questions I have:
- Is the above behaviour on purpose? Surely fullnameOverride should also apply to this resource?
- Wouldn't it be better to create this election ConfigMap as part of the chart rather than have /nginx-ingress-controller create it behind the scenes?
- Is a unique electionID required for each controller if they are in the same namespace? It does look so, because the annotations in the ConfigMap point to the controller's leader pod, so different controllers should use different ConfigMaps?
- Is there anything else similar to this that needs a unique name if we run in the same namespace?
- Is it even a good idea to run multiple controllers in a single namespace? 😃
/kind bug
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 3
- Comments: 23 (10 by maintainers)
Commits related to this issue
- #7652 - Updated Helm chart to use the fullname for the electionID if not specified. (#9133) * Automatically generate electionID from the fullname or use the set value. * Updated the chart readme to ... — committed to kubernetes/ingress-nginx by FutureMatt 2 years ago
- #7652 - Updated Helm chart to use the fullname for the electionID if not specified. (#9133) * Automatically generate electionID from the fullname or use the set value. * Updated the chart readme to ... — committed to jaehnri/ingress-nginx by FutureMatt 2 years ago
- [EOS-10400] Update main branch with latest tag of the fork 1.5.1 (#14) * change sha e2etestrunner and echoserver (#8740) * Bump github.com/stretchr/testify from 1.7.2 to 1.7.5 (#8751) Bumps [gi... — committed to Stratio/ingress-nginx by Alvaro-Campesino a year ago
- [EOS-10400] Update main branch with latest tag of the fork 1.5.1 (#14) * change sha e2etestrunner and echoserver (#8740) * Bump github.com/stretchr/testify from 1.7.2 to 1.7.5 (#8751) Bumps [gi... — committed to Alvaro-Campesino/ingress-nginx-k8s by Alvaro-Campesino a year ago
This is a real world scenario I’m facing @longwuyuan . The setup I have is:
And so initially I had both controllers in one namespace. But then I noticed this shared leader thing.
And because we will have more than one replica of both controllers in production, this clearly would be an issue.
So I was forced to put everything into separate namespaces. This complicated things slightly: since the internal ingress controller service is in another namespace, I had to create an External service in order to route traffic correctly.
The fact that the leader ConfigMaps are also left behind when we uninstall the Helm releases is also not great 😃
I’d still say this is a bug because
fullnameOverride
should apply to all objects created by the Helm chart, and currently it’s missing one. Easy fix for this, from what I can tell.
I think the problem here is “correct behaviour” and “let’s not introduce incorrect behaviour in surprising ways”.
Nowhere in the documentation did I see a mention of “each controller must be in its own namespace”. So people will end up putting multiple controllers in one namespace and hitting this issue. And having yet another thing to troubleshoot is not great, especially if it could be avoided either by documentation or by fixing this particular issue.
Hello, another real-world scenario that we have:
Multiple Ingress Controllers in the same namespace, used for different purposes, for various security & configuration reasons.
We discovered this issue recently by finding that, after we upgraded to NGINX 1.0.0, we were only getting some metrics from a single NGINX Ingress Controller release. Metrics like:
nginx_ingress_controller_ssl_expire_time_seconds
nginx_ingress_controller_leader_election_status
We then determined, using this thread and our own investigations, that since the 1.0.0 upgrade we got a single ingress-controller-leader ConfigMap rather than one ConfigMap per release.
We solved this by making .Values.controller.electionID unique per NGINX Ingress Controller release (we did this by overriding the value in our Helm releases of each NGINX Ingress Controller). This was rather annoying, since it worked fine before the upgrade to 1.0.0.
For us, it definitely feels like resources that are created per NGINX Ingress Controller release, like the leader ConfigMap, should be suffixed/prefixed with the .Release.Name of the NGINX Ingress Controller, to ensure the resources are unique to each release.
I’ve pushed https://github.com/kubernetes/ingress-nginx/pull/9133 which, based on some quick tests, should be all that is needed.
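For context, the chart-side fix amounts to defaulting the election ID from the chart's fullname. Conceptually it could look something like this (a sketch with an assumed helper name for the new template; `ingress-nginx.fullname` is the chart's existing fullname helper, but this is not the exact template from the PR):

```yaml
{{/*
Sketch: fall back to "<fullname>-leader" when .Values.controller.electionID
is not explicitly set, so each release gets a distinct leader ConfigMap.
*/}}
{{- define "ingress-nginx.controller.electionID" -}}
{{- $defaultID := printf "%s-leader" (include "ingress-nginx.fullname" .) -}}
{{- default $defaultID .Values.controller.electionID -}}
{{- end -}}
```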
@strongjz We ran into this problem as well. Yes, after assigning a unique electionID to each release of the controllers in the same namespace, the sync of the load balancer address started working on all controllers.
If assigning a unique electionID is a must for multiple releases of controllers in the same namespace, should the default value of electionID in the Helm chart be prefixed or suffixed with .Release.Name or ingress-nginx.controller.fullname or ingress-nginx.name, instead of being a fixed value in values.yaml? A fixed default value is kind of dangerous for production environments, because we could easily break ingress synchronization by installing another release and simply forgetting to change the value of electionID. .Release.Name would be the better choice, since we usually install different sets of ingress controllers with different release names.
Changing the electionID worked for us too. We have a case where we are deploying two controllers via Helm in the same release. We do this by including the dependency twice in our Chart.yaml but with differing aliases; this all works well aside from the electionID. In the same vein as @dogzzdogzz’s comment, I think auto-generating the electionID would be good, and having had a quick look at the Helm chart, the current ingress-nginx.fullname value with a suffix of leader should work.
I subscribe to what @ppanyukov said. This used to work fine in previous versions; the new behaviour introduced a limitation that is not justified. I’m managing multiple clusters with this setup (as described by @ppanyukov): one controller for public access and one for private access. They’re deployed in the same namespace because they share namespaced resources like secrets and the like.
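Until the chart default changes, the per-release override these comments describe looks like this in plain values files (a sketch; the names are taken from the setup earlier in this thread):

```yaml
# values-external.yaml -- first controller release
fullnameOverride: ingress-public-external
controller:
  electionID: ingress-public-external-leader
---
# values-internal.yaml -- second controller release in the same
# namespace; a distinct electionID keeps the leader ConfigMaps separate
fullnameOverride: ingress-public-internal
controller:
  electionID: ingress-public-internal-leader
```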
The above statement is flawed by the same assumption: that this scenario is unlikely to be used. It’s actually quite common. I think this is a regression, since this was working perfectly fine before.
EDIT: I will be submitting a PR that automatically sets the electionID via chart parameters as a solution candidate for this issue.
/remove-kind bug
/kind support
As you can see in the screenshot, 2 pods are running in the same namespace and each pod belongs to a different Helm release.
Some questions you have asked are interesting. But first and foremost, we must describe a problem that we want to solve.
Currently, with replicas = 1 for each controller, I don’t foresee an election, and that is visible in the fact that the ConfigMap in question has no data field.
I ack that you want 2 ConfigMaps if there are 2 controllers installed in the same namespace. But I think that kind of use case is not very common. If you get down to the practical implications of multiple Helm installs of one controller in the same namespace, it is a rabbit hole for deep diving, and it absolutely requires a developer to be clear about “What problem do we need to solve here?”
My suggestion: please put real workloads from your real-world applications in a cluster that has multiple ingress-nginx controllers in one namespace, expose those workloads through the multiple controllers, and gather data on how your real use case behaves.
If it becomes an absolute requirement for your team and your org to run multiple ingress-nginx controllers in one namespace, please update this issue with what you have configured and what problem you want to solve. That way a developer here will have more practical data to reproduce the problem you face in your use case.
Otherwise you can always look at the source code to know the details of election and PRs are welcome to make changes either to the Chart template or to the code itself.
/triage needs-information