ingress-nginx: nginx /healthz endpoint breaks when using an ingress with nginx.ingress.kubernetes.io/rewrite-target without a host

**What happened**:

When creating an Ingress without a host and using `nginx.ingress.kubernetes.io/rewrite-target: /$2`, the /healthz endpoint no longer works; it returns a 404:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver
  namespace: namespace
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_input_headers "x-request-start: t=$msec";
spec:
  rules:
    - http:
        paths:
          - path: /webserver(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: webserver
                port:
                  number: 8000
❯ curl -Is https://clusterdns.name/healthz
HTTP/1.1 404 Not Found

**What you expected to happen**:

I would expect the default /healthz endpoint not to be affected by the rewrite annotation.

**NGINX Ingress controller version** (exec into the pod and run `nginx-ingress-controller --version`):

-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v1.2.1
  Build:         08848d69e0c83992c89da18e70ea708752f21d7a
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.10

-------------------------------------------------------------------------------

**Kubernetes version** (use `kubectl version`):

❯ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"5a97ee6d15525f6e4a1c2646bf1dfd2ebd5220b5", GitTreeState:"clean", BuildDate:"2022-06-15T04:26:33Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}

**Environment**:

  • Cloud provider or hardware configuration: Azure
  • OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="18.04.6 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.6 LTS"
VERSION_ID="18.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=bionic
UBUNTU_CODENAME=bionic
  • Kernel (e.g. uname -a):
Linux aks-pool1az0-16738398-vmss00001A 5.4.0-1083-azure #87~18.04.1-Ubuntu SMP Fri Jun 3 13:19:07 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
    • How/where the cluster was created (kubeadm/kops/minikube/kind etc.):
      • AKS, provisioned with Terraform
  • Basic cluster related info:
    • kubectl version
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"5a97ee6d15525f6e4a1c2646bf1dfd2ebd5220b5", GitTreeState:"clean", BuildDate:"2022-06-15T04:26:33Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
  • kubectl get nodes -o wide
❯ kubectl get nodes -o wide
NAME                               STATUS   ROLES   AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
aks-pool0-34788694-vmss000000      Ready    agent   21d   v1.23.5   10.101.64.4     <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-pool1az0-16738398-vmss00001a   Ready    agent   21d   v1.23.5   10.101.65.246   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-pool1az0-16738398-vmss00001y   Ready    agent   21d   v1.23.5   10.101.64.253   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-pool1az0-16738398-vmss00002b   Ready    agent   21d   v1.23.5   10.101.67.232   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-pool1az0-16738398-vmss00002c   Ready    agent   21d   v1.23.5   10.101.70.211   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
aks-pool1az0-16738398-vmss00002h   Ready    agent   21d   v1.23.5   10.101.66.239   <none>        Ubuntu 18.04.6 LTS   5.4.0-1083-azure   containerd://1.5.11+azure-2
  • How was the ingress-nginx-controller installed:
    • Helm was not used; it was installed using kustomize. The controller ConfigMap and deployment flags are shown below.
apiVersion: v1
data:
  brotli-level: "4"
  disable-access-log: "false"
  disable-ipv6: "true"
  disable-ipv6-dns: "true"
  enable-brotli: "true"
  enable-ocsp: "true"
  enable-opentracing: "true"
  gzip-level: "2"
  hsts: "true"
  hsts-include-subdomains: "true"
  hsts-max-age: "31556952"
  jaeger-endpoint: http://otel-collector.ingress-nginx.svc.cluster.local:14268/api/traces
  jaeger-propagation-format: w3c
  load-balance: ewma
  log-format-escape-json: "true"
  log-format-upstream: '{\"time\":\"$time_iso8601\",\"remote_addr\":\"$proxy_protocol_addr\",\"x_forward_for\":\"$proxy_add_x_forwarded_for\",\"request_id\":\"$req_id\",\"ingress\":\"$namespace/$ingress_name\",\"bytes_sent\":\"$bytes_sent\",\"upstream\":\"$proxy_upstream_name\",\"upstream_server\":\"$upstream_addr\",\"status\":\"$status\",\"vhost\":\"$host\",\"request_proto\":\"$server_protocol\",\"path\":\"$uri\",\"request_query\":\"$args\",\"request_length\":\"$request_length\",\"duration\":\"$request_time\",\"method\":\"$request_method\",\"http_referrer\":\"$http_referer\",\"http_user_agent\":\"$http_user_agent\",\"message\":\"$request_method
    $host$uri $server_protocol $status\"}'
  nginx-status-ipv4-whitelist: 10.101.64.0/18
  proxy-body-size: 20m
  proxy-buffering: "off"
  proxy-connect-timeout: "1"
  proxy-next-upstream: error timeout invalid_header http_502 http_503
  proxy-next-upstream-timeout: "30"
  proxy-next-upstream-tries: "3"
  proxy-read-timeout: "30"
  reuse-port: "true"
  ssl-ciphers: ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384
  ssl-early-data: "true"
  ssl-protocols: TLSv1.3 TLSv1.2
  ssl-redirect: "false"
  ssl-session-cache-size: 100m
  upstream-keepalive-connections: "320"
  upstream-keepalive-requests: "10000"
  upstream-keepalive-timeout: "60"
  use-geoip: "false"
  use-geoip2: "false"
  use-gzip: "true"
  use-http2: "true"
  worker-processes: "3"
  worker-shutdown-timeout: 240s
kind: ConfigMap
metadata:
  labels:
    app: nginx-ingress-controller
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: nginx-configuration
  namespace: ingress-nginx

Deployment flags:

    containers:
      - args:
        - /nginx-ingress-controller
        - --watch-ingress-without-class=true
        - --configmap=ingress-nginx/nginx-configuration
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --profiling=false
        - --publish-service=ingress-nginx/ingress-nginx
        - --report-node-internal-ip-address
        - --update-status
        - --update-status-on-shutdown
        - --default-ssl-certificate=ingress-nginx/tls-secret
        - --disable-catch-all
  • Current State of the controller:
    • kubectl describe ingressclasses
❯ kubectl describe ingressclasses
Name:         ingress-nginx
Labels:       app=nginx-ingress-controller
              app.kubernetes.io/component=controller
              app.kubernetes.io/instance=nginx-ingress-controller
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.2.1
Annotations:  <none>
Controller:   k8s.io/ingress-nginx
Events:       <none>


**How to reproduce this issue**:

1. curl the nginx health check (`/healthz`) and confirm it returns a 200.
2. Deploy an Ingress without a host that sets `nginx.ingress.kubernetes.io/rewrite-target: /$2` (for example, the manifest shown at the top of this issue).
3. Verify you can reach the application through the Ingress.
4. curl the nginx health check again and observe a 404.

**Anything else we need to know**:


About this issue

  • State: open
  • Created 2 years ago
  • Reactions: 8
  • Comments: 36 (10 by maintainers)

Most upvoted comments

@strongjz @longwuyuan I am able to reproduce this issue.
Root cause: when a path is specified in an Ingress without a host and the rewrite annotation is set, all of the paths are placed under `server_name _`, and a `location ~* "^/"` block is added just before the `/healthz` location, which causes the 404 Not Found.
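
Consistent with that explanation, pinning the rule to an explicit host should keep the rewritten locations out of the catch-all `server_name _` block that serves `/healthz`. The sketch below only illustrates that variant and is not an official fix; `webserver.example.com` is a placeholder host.

```yaml
# Sketch only (assumption: giving the rule an explicit host avoids the
# catch-all server block collision described above). Host is a placeholder.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webserver
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
    - host: webserver.example.com   # explicit host; locations are generated under this server block
      http:
        paths:
          - path: /webserver(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: webserver
                port:
                  number: 8000
```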

We just experienced the same issue after upgrading our ingress-nginx controller to 1.8.0 (via Helm chart 4.7.0). One of our ingress controllers had rewrites configured without a host, and upgrading effectively broke all traffic to that ingress controller: the health probe started responding with 404, which made our cloud load balancer think all our nodes were unhealthy.

This must have been working correctly previously, as we ran this setup for a long time on an older version of the ingress-nginx controller.

Wait, what? Don't close it, as we want the initial bug to be solved.

Oh, yeah @ubbeK, sorry, my fault. I did not mean to do that while typing the answer; I only wanted to close the dialogue between me and @longwuyuan, since for now this workaround is fine.

The health probes on our load balancer began working after adding this annotation:

controller.service.annotations.service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path=/healthz

and this setting:

defaultBackend.enabled=true

to our ingress controllers.
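
For reference, a minimal sketch of how these two settings might look in a Helm values file (this assumes the standard ingress-nginx chart value layout; verify against your chart version and adapt to your own release):

```yaml
# Sketch of the --set flags quoted above, expressed as ingress-nginx Helm values.
controller:
  service:
    annotations:
      # Tell the Azure load balancer to probe the controller's /healthz path.
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: /healthz

defaultBackend:
  # Enable the default backend deployment.
  enabled: true
```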