ingress-nginx: --publish-service not working with manually curated endpoints list

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):

k exec -it ingress-nginx-controller-mrsfv  -- sh -c '/nginx-ingress-controller --version'
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.0.0
  Build:         041eb167c7bfccb1d1653f194924b0c5fd885e10
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.20.1

-------------------------------------------------------------------------------

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.6", GitCommit:"8a62859e515889f07e3e3be6a1080413f17cf2c3", GitTreeState:"clean", BuildDate:"2021-04-15T03:19:55Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: VMs
  • OS (e.g. from /etc/os-release): RancherOS
  • Kernel (e.g. uname -a):
  • Install tools:
    • Rancher / rke
  • Basic cluster related info:
    • kubectl version -> see above
kubectl get nodes -o wide
NAME                                     STATUS   ROLES          AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE           KERNEL-VERSION     CONTAINER-RUNTIME
kubernetes-dev-1-etcd-1.int     Ready    etcd           103d   v1.20.6   10.72.13.225   <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-master-1.int   Ready    controlplane   103d   v1.20.6   10.72.13.227   <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-master-2.int   Ready    controlplane   94d    v1.20.6   10.72.13.90    <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-node-1.int     Ready    worker         103d   v1.20.6   10.72.13.272   <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-node-2.int     Ready    worker         103d   v1.20.6   10.72.13.229   <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-node-3.int     Ready    worker         103d   v1.20.6   10.72.13.91    <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
kubernetes-dev-1-node-4.int     Ready    worker         103d   v1.20.6   10.72.13.92    <none>        RancherOS v1.5.5   4.14.138-rancher   docker://19.3.5
  • How was the ingress-nginx-controller installed:
    • Resources are generated from helm chart v4.0.5 with the following values:
  controller:
    kind: DaemonSet
    dnsPolicy: ClusterFirstWithHostNet
    hostNetwork: true
    service:
      enabled: false
    publishService:
      enabled: false
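Note that the values above disable both the chart-managed Service and `publishService`, yet the controller pods below run with `--publish-service=ingress-nginx/test`. A minimal sketch of one way that flag can be set with this chart, assuming it was added via `controller.extraArgs` (the exact mechanism used here is not shown in this report):

```yaml
# Hypothetical sketch: pass --publish-service to the controller
# explicitly, since the chart's own publishService handling is disabled.
controller:
  extraArgs:
    publish-service: ingress-nginx/test
```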
  • Current State of the controller:
kubectl -n <ingresscontrollernamespace> get all -o wide
k get all -o wide
NAME                                       READY   STATUS      RESTARTS   AGE     IP               NODE                                   NOMINATED NODE   READINESS GATES
pod/ingress-nginx-admission-create-q4wbl   0/1     Completed   0          9m25s   10.42.36.137     kubernetes-dev-1-node-3.int   <none>           <none>
pod/ingress-nginx-admission-patch-m76dc    0/1     Completed   1          9m25s   10.42.36.135     kubernetes-dev-1-node-3.int   <none>           <none>
pod/ingress-nginx-controller-6p8m7         1/1     Running     0          2m6s    10.72.13.272   kubernetes-dev-1-node-1.int   <none>           <none>
pod/ingress-nginx-controller-6pjt7         1/1     Running     0          2m6s    10.72.13.92    kubernetes-dev-1-node-4.int   <none>           <none>
pod/ingress-nginx-controller-9wwrs         1/1     Running     0          2m6s    10.72.13.91    kubernetes-dev-1-node-3.int   <none>           <none>
pod/ingress-nginx-controller-zsl5n         1/1     Running     0          2m6s    10.72.13.229   kubernetes-dev-1-node-2.int   <none>           <none>

NAME                                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE     SELECTOR
service/ingress-nginx-controller-admission   ClusterIP   10.43.124.205   <none>        443/TCP   9m26s   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/test                                 ClusterIP   None            <none>        <none>    9m24s   app=test

NAME                                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE    CONTAINERS   IMAGES                                                SELECTOR
daemonset.apps/ingress-nginx-controller   4         4         4       4            4           kubernetes.io/os=linux   2m6s   controller   harbor.int/ingress-nginx/controller:v1.0.3   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx

NAME                                       COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES                                                        SELECTOR
job.batch/ingress-nginx-admission-create   1/1           3s         9m26s   create       harbor.int/ingress-nginx/kube-webhook-certgen:v1.0   controller-uid=1d9d6180-b0f5-461a-8877-06dd35bc45bd
job.batch/ingress-nginx-admission-patch    1/1           4s         9m26s   patch        harbor.int/ingress-nginx/kube-webhook-certgen:v1.0   controller-uid=d8f44d47-92ba-4e23-8298-2a6d29e49ad9

NAME                                                                               SCANNER      AGE    FAIL   WARN   INFO   PASS
ciskubebenchreport.aquasecurity.github.io/kubernetes-dev-1-master-1.int   kube-bench   101d   29     46     0      40
ciskubebenchreport.aquasecurity.github.io/kubernetes-dev-1-node-1.int     kube-bench   101d   10     33     0      4
ciskubebenchreport.aquasecurity.github.io/kubernetes-dev-1-node-2.int     kube-bench   101d   10     33     0      4
ciskubebenchreport.aquasecurity.github.io/kubernetes-dev-1-node-3.int     kube-bench   101d   10     33     0      4
ciskubebenchreport.aquasecurity.github.io/kubernetes-dev-1-node-4.int     kube-bench   101d   10     33     0      4
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name:         ingress-nginx-admission-create-q4wbl
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-3.int/10.72.13.91
Start Time:   Mon, 11 Oct 2021 16:72:26 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/version=1.0.3
              controller-uid=1d9d6180-b0f5-461a-8877-06dd35bc45bd
              helm.sh/chart=ingress-nginx-4.0.5
              job-name=ingress-nginx-admission-create
Annotations:  cni.projectcalico.org/podIP: 
              cni.projectcalico.org/podIPs: 
Status:       Succeeded
IP:           10.42.36.137
IPs:
  IP:           10.42.36.137
Controlled By:  Job/ingress-nginx-admission-create
Containers:
  create:
    Container ID:  docker://263cbaecaa72006036be3971baed9ad62f529e90de01e3508200c568b13f9eae
    Image:         harbor.int/ingress-nginx/kube-webhook-certgen:v1.0
    Image ID:      docker-pullable://harbor.int/ingress-nginx/kube-webhook-certgen@sha256:fbe44fb846ad3e7fee5b4d14c63da0bc3e884506139d5d4860d4dca763d94cc3
    Port:          <none>
    Host Port:     <none>
    Args:
      create
      --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
      --namespace=$(POD_NAMESPACE)
      --secret-name=ingress-nginx-admission
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 11 Oct 2021 16:72:27 +0200
      Finished:     Mon, 11 Oct 2021 16:72:72 +0200
    Ready:          False
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-mfrxs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  ingress-nginx-admission-token-mfrxs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-mfrxs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  12m   default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-admission-create-q4wbl to kubernetes-dev-1-node-3.int
  Normal  Pulled     12m   kubelet            Container image "harbor.int/ingress-nginx/kube-webhook-certgen:v1.0" already present on machine
  Normal  Created    12m   kubelet            Created container create
  Normal  Started    12m   kubelet            Started container create


Name:         ingress-nginx-admission-patch-m76dc
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-3.int/10.72.13.91
Start Time:   Mon, 11 Oct 2021 16:72:26 +0200
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/version=1.0.3
              controller-uid=d8f44d47-92ba-4e23-8298-2a6d29e49ad9
              helm.sh/chart=ingress-nginx-4.0.5
              job-name=ingress-nginx-admission-patch
Annotations:  cni.projectcalico.org/podIP: 
              cni.projectcalico.org/podIPs: 
Status:       Succeeded
IP:           10.42.36.135
IPs:
  IP:           10.42.36.135
Controlled By:  Job/ingress-nginx-admission-patch
Containers:
  patch:
    Container ID:  docker://ccf3ff4e6a0debb32d62ba0263712267ac4f0459b8773146a51857fc322e4934
    Image:         harbor.int/ingress-nginx/kube-webhook-certgen:v1.0
    Image ID:      docker-pullable://harbor.int/ingress-nginx/kube-webhook-certgen@sha256:fbe44fb846ad3e7fee5b4d14c63da0bc3e884506139d5d4860d4dca763d94cc3
    Port:          <none>
    Host Port:     <none>
    Args:
      patch
      --webhook-name=ingress-nginx-admission
      --namespace=$(POD_NAMESPACE)
      --patch-mutating=false
      --secret-name=ingress-nginx-admission
      --patch-failure-policy=Fail
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 11 Oct 2021 16:72:72 +0200
      Finished:     Mon, 11 Oct 2021 16:72:72 +0200
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 11 Oct 2021 16:72:27 +0200
      Finished:     Mon, 11 Oct 2021 16:72:27 +0200
    Ready:          False
    Restart Count:  1
    Environment:
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-admission-token-mfrxs (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  ingress-nginx-admission-token-mfrxs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission-token-mfrxs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                From               Message
  ----    ------     ----               ----               -------
  Normal  Scheduled  12m                default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-admission-patch-m76dc to kubernetes-dev-1-node-3.int
  Normal  Pulled     12m (x2 over 12m)  kubelet            Container image "harbor.int/ingress-nginx/kube-webhook-certgen:v1.0" already present on machine
  Normal  Created    12m (x2 over 12m)  kubelet            Created container patch
  Normal  Started    12m (x2 over 12m)  kubelet            Started container patch


Name:         ingress-nginx-controller-6p8m7
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-1.int/10.72.13.272
Start Time:   Mon, 11 Oct 2021 16:35:53 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-revision-hash=79d5b86f4d
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.72.13.272
IPs:
  IP:           10.72.13.272
Controlled By:  DaemonSet/ingress-nginx-controller
Containers:
  controller:
    Container ID:  docker://3c0788ca01bf132a13ec7a13cf08b1f85664bdd042fa214faab5ebef4066e8d6
    Image:         harbor.int/ingress-nginx/controller:v1.0.3
    Image ID:      docker-pullable://harbor.int/ingress-nginx/controller@sha256:405b7d6ed237d8d485962e6791ebf70e8f50ef97361dfc6fa7f81ddbcb47d788
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --publish-service=ingress-nginx/test
    State:          Running
      Started:      Mon, 11 Oct 2021 16:35:54 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6p8m7 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-7dv54 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-7dv54:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-7dv54
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age    From                      Message
  ----     ------            ----   ----                      -------
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Normal   Scheduled         4m52s  default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-6p8m7 to kubernetes-dev-1-node-1.int
  Normal   Pulled            4m52s  kubelet                   Container image "harbor.int/ingress-nginx/controller:v1.0.3" already present on machine
  Normal   Created           4m52s  kubelet                   Created container controller
  Normal   Started           4m52s  kubelet                   Started container controller
  Normal   RELOAD            4m50s  nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:         ingress-nginx-controller-6pjt7
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-4.int/10.72.13.92
Start Time:   Mon, 11 Oct 2021 16:36:06 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-revision-hash=79d5b86f4d
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.72.13.92
IPs:
  IP:           10.72.13.92
Controlled By:  DaemonSet/ingress-nginx-controller
Containers:
  controller:
    Container ID:  docker://665cce86b78a7f03bfa3938489acd8094c92165c5036832a3432427d5689b550
    Image:         harbor.int/ingress-nginx/controller:v1.0.3
    Image ID:      docker-pullable://harbor.int/ingress-nginx/controller@sha256:405b7d6ed237d8d485962e6791ebf70e8f50ef97361dfc6fa7f81ddbcb47d788
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --publish-service=ingress-nginx/test
    State:          Running
      Started:      Mon, 11 Oct 2021 16:36:07 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-6pjt7 (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-7dv54 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-7dv54:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-7dv54
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age    From                      Message
  ----     ------            ----   ----                      -------
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Normal   Scheduled         4m40s  default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-6pjt7 to kubernetes-dev-1-node-4.int
  Normal   Pulled            4m39s  kubelet                   Container image "harbor.int/ingress-nginx/controller:v1.0.3" already present on machine
  Normal   Created           4m39s  kubelet                   Created container controller
  Normal   Started           4m39s  kubelet                   Started container controller
  Normal   RELOAD            4m37s  nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:         ingress-nginx-controller-9wwrs
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-3.int/10.72.13.91
Start Time:   Mon, 11 Oct 2021 16:35:58 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-revision-hash=79d5b86f4d
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.72.13.91
IPs:
  IP:           10.72.13.91
Controlled By:  DaemonSet/ingress-nginx-controller
Containers:
  controller:
    Container ID:  docker://7158e84e4e0f27354585f00d4d49600143264d0298aecbad05bdbb332763d78e
    Image:         harbor.int/ingress-nginx/controller:v1.0.3
    Image ID:      docker-pullable://harbor.int/ingress-nginx/controller@sha256:405b7d6ed237d8d485962e6791ebf70e8f50ef97361dfc6fa7f81ddbcb47d788
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --publish-service=ingress-nginx/test
    State:          Running
      Started:      Mon, 11 Oct 2021 16:35:58 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-9wwrs (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-7dv54 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-7dv54:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-7dv54
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age    From                      Message
  ----     ------            ----   ----                      -------
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Normal   Scheduled         4m48s  default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-9wwrs to kubernetes-dev-1-node-3.int
  Normal   Pulled            4m48s  kubelet                   Container image "harbor.int/ingress-nginx/controller:v1.0.3" already present on machine
  Normal   Created           4m48s  kubelet                   Created container controller
  Normal   Started           4m48s  kubelet                   Started container controller
  Normal   RELOAD            4m46s  nginx-ingress-controller  NGINX reload triggered due to a change in configuration


Name:         ingress-nginx-controller-zsl5n
Namespace:    ingress-nginx
Priority:     0
Node:         kubernetes-dev-1-node-2.int/10.72.13.229
Start Time:   Mon, 11 Oct 2021 16:36:06 +0200
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/name=ingress-nginx
              controller-revision-hash=79d5b86f4d
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.72.13.229
IPs:
  IP:           10.72.13.229
Controlled By:  DaemonSet/ingress-nginx-controller
Containers:
  controller:
    Container ID:  docker://920f2d06dd93ba0ef697ba3a13f53644fb38b342e48ad4e55d020fe9c5a1f25e
    Image:         harbor.int/ingress-nginx/controller:v1.0.3
    Image ID:      docker-pullable://harbor.int/ingress-nginx/controller@sha256:405b7d6ed237d8d485962e6791ebf70e8f50ef97361dfc6fa7f81ddbcb47d788
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    80/TCP, 443/TCP, 8443/TCP
    Args:
      /nginx-ingress-controller
      --election-id=ingress-controller-leader
      --controller-class=k8s.io/ingress-nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --publish-service=ingress-nginx/test
    State:          Running
      Started:      Mon, 11 Oct 2021 16:36:07 +0200
    Ready:          True
    Restart Count:  0
    Requests:
      cpu:      100m
      memory:   90Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       ingress-nginx-controller-zsl5n (v1:metadata.name)
      POD_NAMESPACE:  ingress-nginx (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from ingress-nginx-token-7dv54 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-admission
    Optional:    false
  ingress-nginx-token-7dv54:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  ingress-nginx-token-7dv54
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/network-unavailable:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type     Reason            Age    From                      Message
  ----     ------            ----   ----                      -------
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Warning  FailedScheduling  5m1s   default-scheduler         0/7 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 1 node(s) had taint {node-role.kubernetes.io/etcd: true}, that the pod didn't tolerate, 2 node(s) had taint {node-role.kubernetes.io/controlplane: true}, that the pod didn't tolerate, 3 node(s) didn't match Pod's node affinity.
  Normal   Scheduled         4m40s  default-scheduler         Successfully assigned ingress-nginx/ingress-nginx-controller-zsl5n to kubernetes-dev-1-node-2.int
  Normal   Pulled            4m39s  kubelet                   Container image "harbor.int/ingress-nginx/controller:v1.0.3" already present on machine
  Normal   Created           4m39s  kubelet                   Created container controller
  Normal   Started           4m39s  kubelet                   Started container controller
  Normal   RELOAD            4m37s  nginx-ingress-controller  NGINX reload triggered due to a change in configuration
`kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>`
k describe svc
Name:              ingress-nginx-controller-admission
Namespace:         ingress-nginx
Labels:            app.kubernetes.io/component=controller
                   app.kubernetes.io/instance=ingress-nginx
                   app.kubernetes.io/managed-by=Helm
                   app.kubernetes.io/name=ingress-nginx
                   app.kubernetes.io/version=1.0.3
                   helm.sh/chart=ingress-nginx-4.0.5
Annotations:       <none>
Selector:          app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:              ClusterIP
IP Families:       <none>
IP:                10.43.124.205
IPs:               10.43.124.205
Port:              https-webhook  443/TCP
TargetPort:        webhook/TCP
Endpoints:         10.72.13.272:8443,10.72.13.229:8443,10.72.13.91:8443 + 1 more...
Session Affinity:  None
Events:            <none>


Name:              test
Namespace:         ingress-nginx
Labels:            app=test
Annotations:       <none>
Selector:          app=test
Type:              ClusterIP
IP Families:       <none>
IP:                None
IPs:               None
Session Affinity:  None
Events:            <none>

  • Current state of ingress object, if applicable:
    • kubectl -n <appnamespace> get all,ing -o wide
k get ing -o wide -A
NAMESPACE              NAME               CLASS    HOSTS                                   ADDRESS   PORTS     AGE
argocd                 argocd             <none>   argocd.kubernetes-dev-1.int                       80, 443   103d
kube-system            external-dns       <none>   default.kubernetes-dev-1.int                      80        103d
kubernetes-dashboard   dashboard          <none>   dashboard.kubernetes-dev-1.int                    80, 443   90d
c-1                    keycloak           <none>   sso.kubernetes-dev-1.int                          80, 443   94d
tekton-pipelines       tekton-dashboard   <none>   tekton-dashboard.kubernetes-dev-1.int             80, 443   28d
tekton-pipelines       tekton-triggers    <none>   tekton-triggers.kubernetes-dev-1.int              80, 443   18d

Headless service endpoints:

k get endpoints test -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2021-10-11T14:72:27Z"
  creationTimestamp: "2021-10-11T14:72:27Z"
  labels:
    app: test
    service.kubernetes.io/headless: ""
  name: test
  namespace: ingress-nginx
  resourceVersion: "65044325"
  uid: 772ce1ea-9205-4381-ad94-133432cd5b35
subsets:
- addresses:
  - ip: 10.72.1.272
    nodeName: kubernetes-dev-1-node-1.int
  - ip: 10.72.1.229
    nodeName: kubernetes-dev-1-node-2.int
  - ip: 10.72.1.91
    nodeName: kubernetes-dev-1-node-3.int
  - ip: 10.72.1.92
    nodeName: kubernetes-dev-1-node-4.int

What happened:

Trying to use --publish-service with a headless service whose endpoints are curated manually.

The goal is to manually set the service endpoints to the public addresses of the Kubernetes nodes, because the NGINX ingress controller otherwise publishes the nodes' private IP addresses and there does not appear to be a way to configure the public ones. The controller should then update the Ingress statuses dynamically whenever the service endpoints change.
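For reference, the intended setup described above would look roughly like the following (a sketch, not the exact manifests used in this cluster; the 203.0.113.x addresses are placeholders for the nodes' public IPs):

```yaml
# Headless Service with no selector, so kube-controller-manager does not
# manage its Endpoints and they can be curated by hand.
apiVersion: v1
kind: Service
metadata:
  name: test
  namespace: ingress-nginx
  labels:
    app: test
spec:
  clusterIP: None
  ports:
    - port: 80
---
# Manually curated Endpoints carrying the nodes' public addresses.
apiVersion: v1
kind: Endpoints
metadata:
  name: test
  namespace: ingress-nginx
subsets:
  - addresses:
      - ip: 203.0.113.10
      - ip: 203.0.113.11
    ports:
      - port: 80
```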

However, with the setup above, the ingress controller logs:

[ingress-nginx-controller-9wwrs] I1011 14:56:31.027017       7 status.go:300] "updating Ingress status" namespace="argocd" ingress="argocd" currentValue=[] newValue=[{IP:None Hostname: Ports:[]}]
[ingress-nginx-controller-9wwrs] I1011 14:56:31.029669       7 status.go:300] "updating Ingress status" namespace="c-1" ingress="keycloak" currentValue=[] newValue=[{IP:None Hostname: Ports:[]}]
[ingress-nginx-controller-9wwrs] I1011 14:56:31.033452       7 status.go:300] "updating Ingress status" namespace="tekton-pipelines" ingress="tekton-dashboard" currentValue=[] newValue=[{IP:None Hostname: Ports:[]}]
[ingress-nginx-controller-9wwrs] I1011 14:56:31.033848       7 status.go:300] "updating Ingress status" namespace="kubernetes-dashboard" ingress="dashboard" currentValue=[] newValue=[{IP:None Hostname: Ports:[]}]
[ingress-nginx-controller-9wwrs] W1011 14:56:31.034404       7 status.go:304] error updating ingress rule: Ingress.extensions "argocd" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: "None": must be a valid IP address
[ingress-nginx-controller-9wwrs] I1011 14:56:31.035773       7 status.go:300] "updating Ingress status" namespace="tekton-pipelines" ingress="tekton-triggers" currentValue=[] newValue=[{IP:None Hostname: Ports:[]}]
[ingress-nginx-controller-9wwrs] W1011 14:56:31.035984       7 status.go:304] error updating ingress rule: Ingress.extensions "keycloak" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: "None": must be a valid IP address
[ingress-nginx-controller-9wwrs] W1011 14:56:31.628760       7 status.go:304] error updating ingress rule: Ingress.extensions "tekton-dashboard" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: "None": must be a valid IP address
[ingress-nginx-controller-9wwrs] W1011 14:56:31.829694       7 status.go:304] error updating ingress rule: Ingress.extensions "dashboard" is invalid: status.loadBalancer.ingress[0].ip: Invalid value: "None": must be a valid IP address

What you expected to happen:

The NGINX ingress controller should read the endpoints of the configured service and use their addresses to populate the load-balancer status of the Ingress objects.

--publish-service is documented as follows: "When used together with update-status, the controller mirrors the address of this service's endpoints to the load-balancer status of all Ingress objects it satisfies." here: https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/

Instead, judging from the logs above, the controller publishes the headless service's ClusterIP (None) rather than its endpoint addresses, and the API server rejects that as an invalid IP address.
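For context, this is roughly how the controller is being started (a sketch; the service name `ingress-nginx/test` matches the headless service shown above):

```shell
/nginx-ingress-controller \
  --publish-service=ingress-nginx/test \
  --update-status=true
```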

How to reproduce it:

About this issue

  • Original URL
  • State: open
  • Created 3 years ago
  • Reactions: 1
  • Comments: 19 (6 by maintainers)

Most upvoted comments

Let me understand your scenario.

/assign