ingress-nginx: Multiple controllers in a single namespace: duplicate resource names

NGINX Ingress controller version

NGINX Ingress controller
  Release:       v1.0.0
  Build:         041eb167c7bfccb1d1653f194924b0c5fd885e10
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.20.1
  • How was the ingress-nginx-controller installed:

Using Helm, multiple controllers, each in its own namespace.

# helm ls -A
NAME                   	NAMESPACE              	REVISION	UPDATED                             	STATUS  	CHART                  	APP VERSION
ingress-public-external	ingress-public-external	1       	2021-09-17 11:25:28.212212 +0100 BST	deployed	ingress-nginx-4.0.1    	1.0.0
ingress-public-internal	ingress-public-internal	1       	2021-09-17 11:25:04.133381 +0100 BST	deployed	ingress-nginx-4.0.1    	1.0.0

ingress-public-external:

# helm -n ingress-public-external get values ingress-public-external
USER-SUPPLIED VALUES:
controller:
  config:
    add-headers: ingress-public-external/ingress-public-external-custom-headers
    hsts: false
  extraArgs:
    controller-class: k8s.io/ingress-public-external
    ingress-class: ingress-public-external
  ingressClassResource:
    controllerValue: k8s.io/ingress-public-external
    default: false
    enabled: true
    name: ingress-public-external
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-resource-group: REDACTED
    externalTrafficPolicy: Local
    loadBalancerIP: REDACTED
fullnameOverride: ingress-public-external
# helm -n ingress-public-internal get values ingress-public-internal
USER-SUPPLIED VALUES:
controller:
  config:
    add-headers: ingress-public-internal/ingress-public-internal-custom-headers
    hsts: false
    use-forwarded-headers: true
  extraArgs:
    controller-class: k8s.io/ingress-public-internal
    ingress-class: ingress-public-internal
  ingressClassResource:
    controllerValue: k8s.io/ingress-public-internal
    default: false
    enabled: true
    name: ingress-public-internal
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    externalTrafficPolicy: Local
fullnameOverride: ingress-public-internal

I used Terraform to install the Helm charts, but this is broadly the equivalent of helm install --values file.yaml with the above contents.

  • Config maps:
# ingress-public-external
# k --namespace ingress-public-external get configmaps
NAME                                     DATA   AGE
ingress-controller-leader                0      35m  <== THIS
ingress-public-external-controller       2      35m
ingress-public-external-custom-headers   1      35m
kube-root-ca.crt                         1      35m
# ingress-public-internal
# k --namespace ingress-public-internal get configmaps
NAME                                     DATA   AGE
ingress-controller-leader                0      36m  <== THIS
ingress-public-internal-controller       3      36m
ingress-public-internal-custom-headers   1      36m
kube-root-ca.crt                         1      36m

What happened:

I wanted to have all ingress controllers in a single namespace. I use fullnameOverride with Helm to make sure all names are distinct for each controller.

What I noticed, though, is that a ConfigMap called ingress-controller-leader is created by each Helm release, with a hard-coded name. Of course, if I install multiple controller charts into the same namespace, there will be a conflict, right?

So in the meantime I have to keep a separate namespace for each controller.

What you expected to happen:

fullnameOverride should apply to all resource names created by the Helm chart.

Workaround:

I notice that the name of the config map comes from this bit in values.yaml in the chart:

  ## Election ID to use for status update
  ##
  electionID: ingress-controller-leader

And if I do specify an override for it when releasing the Helm chart, it does indeed change the name to my own supplied value:

# snippet from terraform
fullnameOverride: "${var.name}"
controller:
  electionID: '${var.name}-leader'  # <== THIS
# k --namespace ingress-public-external get configmaps
NAME                                     DATA   AGE
ingress-controller-leader                0      58m  <== OLD ONE
ingress-public-external-controller       2      58m
ingress-public-external-custom-headers   1      59m
ingress-public-external-leader           0      12m  <== NEW ONE
kube-root-ca.crt                         1      59m

Notice that changing the electionID value creates a new ConfigMap but does not remove the old one. I suspect that is because it is not created as a resource in the chart, but by the /nginx-ingress-controller program during startup.
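
For anyone not using Terraform, the same workaround as a plain Helm values file would look roughly like this (a sketch based on the external controller values above; the file name and the -leader suffix are just my own convention):

# values-ingress-public-external.yaml (illustrative)
fullnameOverride: ingress-public-external
controller:
  # must be unique per controller when several controllers share a namespace
  electionID: ingress-public-external-leader
  ingressClassResource:
    enabled: true
    name: ingress-public-external
    controllerValue: k8s.io/ingress-public-external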

Questions

Additional questions I have:

  • Is the above behaviour intentional? Surely fullnameOverride should also apply to this resource?
  • Wouldn’t it be better to create this election ConfigMap as part of the chart rather than have /nginx-ingress-controller create it behind the scenes?
  • Is a unique electionID value required for each controller when they are in the same namespace? It looks that way, because the annotations in the ConfigMap point to the leader pod of a specific controller, so different controllers should use different maps?
  • Is there anything else like this that needs a unique name if we run multiple controllers in the same namespace?
  • Is it even a good idea to run multiple controllers in a single namespace? 😃

/kind bug

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 3
  • Comments: 23 (10 by maintainers)

Most upvoted comments

This is a real world scenario I’m facing @longwuyuan . The setup I have is:

  • Internal ingress controller (with internal load balancer).
  • External ingress controller (with external load balancer).
  • External ingress using the External ingress controller. This terminates TLS and routes all traffic to the Internal ingress controller service. We do this only once here for all routes.
  • Multiple Internal app ingresses. These just configure HTTP routing without caring about TLS termination.

And so initially I had both controllers in one namespace. But then I noticed this shared leader thing.

And because we will have more than one replica of both controllers in production, this clearly would be an issue.

So I was forced to put everything into separate namespaces. This complicated things slightly because, since the internal ingress controller service is now in another namespace, I had to create an External service in order to route traffic correctly.

The fact that the leader ConfigMaps are also left behind when we uninstall the Helm releases is not great either 😃

I’d still say this is a bug because fullnameOverride should apply to all objects created by the Helm chart, and currently it is missing one. It looks like an easy fix from what I can tell.


absolutely requires a developer to be clear about “What problem do we need to solve here”.

I think the problem here is “correct behaviour” and “let’s not introduce incorrect behaviour in surprising ways”.

Nowhere in the documentation did I see a mention of “each controller must be in its own namespace”. So people will end up putting multiple controllers in one namespace and will hit this issue. And having yet another thing to troubleshoot is not great, especially when it could be avoided either by documentation or by fixing this particular issue.

Hello, another real world scenario that we have:

Multiple Ingress Controllers in the same namespace used for different purposes, for various security & configuration reasons.
We discovered this issue recently when we found that, after upgrading to NGINX 1.0.0, we were only getting some metrics from a single NGINX Ingress Controller release.
Metrics like:

  • nginx_ingress_controller_ssl_expire_time_seconds
  • nginx_ingress_controller_leader_election_status

We then determined, using this thread and our own investigation, that since the 1.0.0 upgrade we had a single ingress-controller-leader ConfigMap rather than one ConfigMap per release.

We solved this by setting .Values.controller.electionID to a unique value per NGINX Ingress Controller release (we did this by overriding the value in our Helm release of each NGINX Ingress Controller).

This was rather annoying since it worked fine before the upgrade to 1.0.0.

For us, it definitely feels like resources such as the leader ConfigMap that are created per NGINX Ingress Controller release should be suffixed/prefixed with the .Release.Name of the NGINX Ingress Controller to ensure they are unique to each release.

I’ve pushed https://github.com/kubernetes/ingress-nginx/pull/9133 which based on some quick tests should be all that is needed.

@strongjz We ran into this problem as well. Yes, after assigning a unique electionID to each release of the controllers in the same namespace, the sync of the load balancer address started working on all controllers.

If assigning a unique electionID is a must for multiple releases of controllers in the same namespace, should the default value of electionID in the Helm chart be prefixed or suffixed with .Release.Name, ingress-nginx.controller.fullname, or ingress-nginx.name, instead of being a fixed value in values.yaml?

A fixed default value is somewhat dangerous for production environments, because we could easily break ingress synchronization by installing another release and simply forgetting to change the value of electionID.

.Release.Name would be the better choice, since we usually install each set of ingress controllers under a different release name.
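
A minimal sketch of what such a release-scoped default could look like in the chart (the helper name and templating below are assumptions for illustration, not the chart's actual code):

# values.yaml (hypothetical): leave electionID empty so the template can derive it
controller:
  electionID: ""

# _helpers.tpl (hypothetical helper): fall back to a release-scoped election ID
{{- define "ingress-nginx.controller.electionID" -}}
{{- .Values.controller.electionID | default (printf "%s-leader" (include "ingress-nginx.fullname" .)) -}}
{{- end -}}

The controller Deployment template would then pass the rendered value to the controller's --election-id flag instead of the fixed string.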

Changing the electionID worked for us too. We have a case where we deploy two controllers via Helm in the same release. We do this by including the dependency twice in our Chart.yaml but with differing aliases; this all works well aside from the electionID. In the same vein as @dogzzdogzz’s comment, I think auto-generating the electionID would be good, and having had a quick look at the Helm chart, the current ingress-nginx.fullname value with a suffix of “leader” should work.
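
For context, including the chart twice under different aliases looks roughly like this (the parent chart name and alias names are illustrative):

# Chart.yaml (illustrative)
apiVersion: v2
name: my-ingress-stack
version: 0.1.0
dependencies:
  - name: ingress-nginx
    version: 4.0.1
    repository: https://kubernetes.github.io/ingress-nginx
    alias: ingress-internal
  - name: ingress-nginx
    version: 4.0.1
    repository: https://kubernetes.github.io/ingress-nginx
    alias: ingress-external

Each alias then gets its own values block (ingress-internal: / ingress-external:) in the parent chart's values.yaml, which is where a distinct controller.electionID currently has to be set by hand.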

I subscribe to what @ppanyukov said. This used to work fine in previous versions. The new behaviour introduces a limitation that is not justified. I’m managing multiple clusters with this setup (as described by @ppanyukov): one controller for public access and one for private access. They’re deployed in the same namespace because they share namespaced resources like secrets and the like.

But I think that kind of a use case is not very common. If you get down to the practical implications of multiple helm installs of one controller in the same namespace, it is a rabbit hole for deep diving, and absolutely requires a developer to be clear about “What problem do we need to solve here”.

The above statement is flawed by the same assumption, that this scenario is unlikely to be used. It’s actually quite common. I think this is a regression since this was working perfectly fine before.

EDIT: I will be submitting a PR for automatic electionID set via chart parameters as a solution candidate for the issue.

/remove-kind bug /kind support

[screenshot: two controller pods from different Helm releases running in the same namespace]

As you can see in the screenshot, 2 pods are running in the same namespace and each pod belongs to a different Helm release:

% helm ls -A
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS    CHART                   APP VERSION
ing0            ingress         1               2021-08-31 15:34:04.516409254 +0530 IST deployed  ingress-nginx-4.0.1     1.0.0      
ingcontroller2  issue-7652      1               2021-09-19 21:51:40.214625782 +0530 IST deployed  ingress-nginx-4.0.1     1.0.0      
ingcontroller3  issue-7652      1               2021-09-19 21:53:13.801245323 +0530 IST deployed  ingress-nginx-4.0.1     1.0.0      
[~] 

Some questions you have asked are interesting. But first and foremost, we must describe a problem that we want to solve.

Currently, with replicas = 1 for each controller, I don’t foresee an election, and that is visible in the fact that the ConfigMap in question has no data field:

% k -n issue-7652 get cm
NAME                                      DATA   AGE
ingcontroller2-ingress-nginx-controller   0      12m
ingcontroller3-ingress-nginx-controller   0      10m
ingress-controller-leader                 0      12m
kube-root-ca.crt                          1      13m
[~] 
% k -n issue-7652 get cm ingress-controller-leader -o yaml
apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"ingcontroller2-ingress-nginx-controller-767b544b4-4d645","leaseDurationSeconds":30,"acquireTime":"2021-09-19T16:21:44Z","renewTime":"2021-09-19T16:34:19Z","leaderTransitions":0}'
  creationTimestamp: "2021-09-19T16:21:44Z"
  name: ingress-controller-leader
  namespace: issue-7652
  resourceVersion: "814389"
  uid: 3738faff-bf06-4d99-9c48-e155ae511666

I ack that you want 2 ConfigMaps if there are 2 controllers installed in the same namespace. But I think that kind of use case is not very common. If you get down to the practical implications of multiple Helm installs of one controller in the same namespace, it is a rabbit hole for deep diving, and it absolutely requires a developer to be clear about “What problem do we need to solve here”.

My suggestion is: please put real workloads of your real-world applications in a cluster that has multiple ingress-nginx controllers in one namespace. Expose your workloads using the multiple ingress controllers in that one namespace. Get data on the real-world behaviour of your workloads with multiple ingress controllers in one namespace.

If it becomes an absolute requirement for your team and your org to run multiple ingress-nginx controllers in one namespace, please update this issue with what you have configured and what problem you want to solve. That way a developer here will have more practical data to reproduce the problem you face in your use case.

Otherwise, you can always look at the source code to learn the details of the election, and PRs are welcome to make changes either to the chart template or to the code itself.

/triage needs-information