ingress-nginx: IP is not set on Ingress when using multiple NGINX Ingress controllers in the same namespace
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version):
NGINX Ingress controller
Release: v1.1.1
Build: a17181e43ec85534a6fea968d95d019c5a4bc8cf
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.19.9
Kubernetes version (use kubectl version): v1.22.2 (client) / v1.22.4 (server)
Environment:
- Cloud provider or hardware configuration: AKS
- Install tools: Terraform
- Basic cluster related info:
k version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:31:32Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-18T19:30:35Z", GoVersion:"go1.16.10",
Compiler:"gc", Platform:"linux/amd64"}
k get nodes
NAME STATUS ROLES AGE VERSION
aks-applications-18846254-vmss000000 Ready agent 13d v1.22.4
aks-applications-18846254-vmss000001 Ready agent 13d v1.22.4
aks-default-30170034-vmss000000 Ready agent 13d v1.22.4
- How was the ingress-nginx-controller installed: with Helm
- Output of helm ls -A | grep -i ingress:
ingress-nginx            ingress-nginx  3  2022-02-02 08:57:07 +0000 UTC  deployed  ingress-nginx-4.0.15  1.1.1
ingress-nginx-external   ingress-nginx  2  2022-02-02 10:08:11 +0000 UTC  deployed  ingress-nginx-4.0.15  1.1.1
- Output of helm -n <ingress-controller-namespace> get values <helm-release-name>:
helm get values ingress-nginx -n ingress-nginx
USER-SUPPLIED VALUES:
controller:
autoscaling:
enabled: true
maxReplicas: 11
minReplicas: 2
targetCPUUtilizationPercentage: 1000
targetMemoryUtilizationPercentage: 400
config:
log-format-escape-json: "true"
log-format-upstream: '{ "@timestamp": "$time_iso8601", "network": {"forwarded_ip":
"$http_x_forwarded_for"}, "client": {"ip": "$remote_addr"}, "user":{"name":
"$remote_user"}, "user_agent":{"original": "$http_user_agent"}, "http":{ "version":
"$server_protocol", "request":{ "body":{"bytes": $body_bytes_sent}, "bytes":
$request_length, "method": "$request_method", "referrer": "$http_referer" },
"response":{ "body": {"bytes": $body_bytes_sent}, "bytes": $bytes_sent, "status_code":
"$status", "time": $request_time } }, "upstream": { "bytes": $upstream_response_length,
"status_code":"$upstream_status", "time":$upstream_response_time, "address":
"$upstream_addr", "name": "$proxy_upstream_name" }, "url_info":{ "domain":"$host",
"path":"$uri", "query":"$args", "original":"$request_uri" } }'
use-forwarded-headers: "true"
ingressClassResource:
controllerValue: k8s.io/internal-ingress-nginx
default: false
enabled: true
name: internal-nginx
resources:
limits:
cpu: 1000m
memory: 2048Mi
requests:
cpu: 100m
memory: 256Mi
service:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
externalTrafficPolicy: Local
labels:
type: internal
helm get values ingress-nginx-external -n ingress-nginx
USER-SUPPLIED VALUES:
controller:
autoscaling:
enabled: true
maxReplicas: 5
minReplicas: 2
targetCPUUtilizationPercentage: 1000
targetMemoryUtilizationPercentage: 400
config:
log-format-escape-json: "true"
log-format-upstream: '{ "@timestamp": "$time_iso8601", "network": {"forwarded_ip":
"$http_x_forwarded_for"}, "client": {"ip": "$remote_addr"}, "user":{"name":
"$remote_user"}, "user_agent":{"original": "$http_user_agent"}, "http":{ "version":
"$server_protocol", "request":{ "body":{"bytes": $body_bytes_sent}, "bytes":
$request_length, "method": "$request_method", "referrer": "$http_referer" },
"response":{ "body": {"bytes": $body_bytes_sent}, "bytes": $bytes_sent, "status_code":
"$status", "time": $request_time } }, "upstream": { "bytes": $upstream_response_length,
"status_code":"$upstream_status", "time":$upstream_response_time, "address":
"$upstream_addr", "name": "$proxy_upstream_name" }, "url_info":{ "domain":"$host",
"path":"$uri", "query":"$args", "original":"$request_uri" } }'
use-forwarded-headers: "true"
ingressClassResource:
controllerValue: k8s.io/external-ingress-nginx
default: false
enabled: true
name: external-nginx
resources:
limits:
cpu: 1000m
memory: 2048Mi
requests:
cpu: 100m
memory: 256Mi
service:
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "false"
externalTrafficPolicy: Local
labels:
type: external
- Current state of the controller: both ingress controllers are healthy and detect when a new Ingress is created.
- Current state of the Ingress objects:
NAMESPACE  NAME          CLASS           HOSTS              ADDRESS      PORTS    AGE
tyk        int-ing       internal-nginx  toto.internal.dns  10.10.10.10  80, 443  6d16h
tyk        external-ing  external-nginx  toto.extern.dns                 80, 443  18h
The internal ingress controller successfully assigns an IP address to its Ingress, but the external one does not.
What happened:
There are no errors in either ingress controller, but the external one does not assign any IP address to the Ingress, even though its Service is created with type LoadBalancer and a public IP is assigned. I can reach the service, but external-dns cannot create the DNS record because no address is set on the Ingresses that use the external ingress controller.
When I move the two ingress controllers into two different namespaces, it works.
What you expected to happen: the external controller should set the address on its Ingresses. Did I miss something in the configuration of the ingress controllers?
How to reproduce it: create two NGINX ingress controllers (with different ingress classes) in the same namespace.
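The reproduction can be sketched with Helm. This is a minimal sketch, not the reporter's exact setup: the release names, class names, and namespace are illustrative assumptions.

```shell
# Two ingress-nginx releases in the SAME namespace, differing only in their
# ingress class. With default values both releases use the same electionID,
# so they contend for a single leader-election lock and only one controller
# publishes status (the address) on Ingress objects.
helm install nginx-internal ingress-nginx/ingress-nginx -n ingress-nginx \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/internal-ingress-nginx

helm install nginx-external ingress-nginx/ingress-nginx -n ingress-nginx \
  --set controller.ingressClassResource.name=external-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/external-ingress-nginx
```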
Anything else we need to know: an example of an Ingress deployed:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/acme-challenge-type: dns01
cert-manager.io/cluster-issuer: letsencrypt-dns-prod-nginx
labels:
app: test-ing2
type: internal
name: test-ing2
namespace: default
spec:
ingressClassName: internal-nginx
rules:
- host: test.internal.dns
http:
paths:
- backend:
service:
name: svc
port:
name: http
path: /
pathType: ImplementationSpecific
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
cert-manager.io/acme-challenge-type: dns01
cert-manager.io/cluster-issuer: letsencrypt-dns-prod-nginx
labels:
app: test-ing
type: external
name: test-ing
namespace: default
spec:
ingressClassName: external-nginx
rules:
- host: test.externa.dns
http:
paths:
- backend:
service:
name: svc
port:
name: http
path: /
pathType: ImplementationSpecific
About this issue
- State: closed
- Created 2 years ago
- Reactions: 15
- Comments: 20 (8 by maintainers)
Just following up on this issue. We are moving from the v1beta1 API to the v1 API, using the updated ingressClassName field, and have re-factored and tested things so we can move to Kubernetes 1.22; however, we have multiple ingress controllers in the same namespace. We did find a workaround that works, but I am uncomfortable with the solution: https://github.com/kubernetes/ingress-nginx/issues/7890 The fix was to specify a different election ID for each controller.
Basically, using a different election ID works. We don't like this, however, as it might break in the future; we would rather do what you recommend. Is the recommendation, then, not to wait for a fix and to use different namespaces when running multiple ingress controllers with the v1 API? I updated the above; I had a syntax error and mis-spelled it as election-id instead of electionID, sorry to anyone who used the wrong syntax.
Thanks,
Collin
+1, facing the same issue with v1.1.1.
Just stumbled upon this too! Here is a quick and dirty way to set up a second ingress controller with Helm, following the election-ID workaround:
helm install one-ingress ingress-nginx/ingress-nginx --version 4.3.0 \
  --set controller.service.type=LoadBalancer \
  --set controller.scope.enabled=true \
  --set-string controller.ingressClassResource.name=one-ingress \
  --set controller.ingressClassResource.default=false \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-one \
  --set controller.electionID=one-ingress-controller-leader
helm install two-ingress ingress-nginx/ingress-nginx --version 4.3.0 \
  --set controller.service.type=LoadBalancer \
  --set controller.scope.enabled=true \
  --set-string controller.ingressClassResource.name=two-ingress \
  --set controller.ingressClassResource.default=false \
  --set controller.ingressClassResource.controllerValue=k8s.io/ingress-two \
  --set controller.electionID=two-ingress-controller-leader
The commands above install two NGINX ingress controllers in a single cluster without any problem.
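If you hit this, one way to confirm that two releases are contending for the same leader-election lock is to list the lock objects in the controller namespace. A hedged sketch: the namespace is assumed, and depending on the controller version the lock is a ConfigMap or a Lease whose name is derived from controller.electionID.

```shell
# List leader-election lock objects; with the default electionID, two
# releases in one namespace end up sharing a single "...leader" object.
kubectl -n ingress-nginx get configmaps,leases | grep -i leader
```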
Hi @longwuyuan
We're experiencing the same problem as described by @AlexisDuf, and the workaround provided by @collinhayden works for us too. However, I don't see why it's so strange to run two controllers in the same namespace; at the very least I'd expect the documentation to mention that it's currently not possible (cf. https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/). We spent many hours trying to troubleshoot the root cause of this issue. We use multiple controllers in our development setups so that developers can bypass authentication when desired, and running them in the same namespace is easier in our GitOps setup. Hope to see an update on this in the future.
Kind regards! Garnaalkrokette
Please install each instance of the ingress-nginx controller in its own namespace for now. This is a known problem and has much larger implications than possible on a typed conversation. Thanks.
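The recommended layout can be sketched with Helm as follows; the namespace and release names are illustrative assumptions, not taken from this issue.

```shell
# One release per namespace, so each controller gets its own leader-election
# lock and publishes status on its own Ingress objects.
helm install ingress-internal ingress-nginx/ingress-nginx \
  -n ingress-internal --create-namespace \
  --set controller.ingressClassResource.name=internal-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/internal-ingress-nginx

helm install ingress-external ingress-nginx/ingress-nginx \
  -n ingress-external --create-namespace \
  --set controller.ingressClassResource.name=external-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/external-ingress-nginx
```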
+1
+1. To the affected users who spent hours trying to figure out why the external load balancer was not being updated: we had no leader elected for the corresponding ingress class. In our specific case we have just one namespace for the different ingress controllers (so we can use NGINX, AWS ALB, Istio, etc.).
Our solution was to customise the controller.electionID value in the Helm chart.
We were hit too. This looks like a regression from the way it used to work; see e.g. https://github.com/kubernetes/ingress-nginx/pull/613, which used the ingress class to make elections unique. Somewhere in the refactoring this was lost.
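Applied to the two releases from the original report, the workaround amounts to giving each release its own controller.electionID. A sketch (the election IDs are arbitrary names chosen here; --reuse-values keeps the releases' existing values):

```shell
helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx \
  --reuse-values --set controller.electionID=internal-ingress-controller-leader

helm upgrade ingress-nginx-external ingress-nginx/ingress-nginx -n ingress-nginx \
  --reuse-values --set controller.electionID=external-ingress-controller-leader
```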
FWIW we have four different nginx’s in a namespace with different network security arrangements for each of them.
Thanks for the info. There was a big refactoring due to the need to handle the deprecation of the ingress API v1beta1. From Kubernetes v1.22 onwards, ingressClassName is the preferred field to include in an Ingress object. Long story short, you will see some related info in the controller pod logs.
/kind support /remove-kind bug
Thanks, Long