ingress-nginx: HTTP -> HTTPS redirect on TCP ELB terminating SSL results in a 308 redirect loop.

What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):

Issues #2000 and #1957 touch on this, with #1957 suggesting it was fixed. Searched: 308, redirect, TCP, aws, elb, proxy, etc.

NGINX Ingress controller version: v0.16.2

Kubernetes version (use kubectl version): v1.9.6

Environment: AWS

What happened:

With this Service, which creates an ELB handling TLS termination:

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "#snip"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
  labels:
    k8s-addon: ingress-nginx.addons.k8s.io
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: http
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: ingress-nginx
  type: LoadBalancer

And this NGINX ConfigMap asking for force-ssl-redirect:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  force-ssl-redirect: "true"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "true"

Requesting http://example.com results in a 308 redirect loop. With force-ssl-redirect: "false" it works fine, but there is no HTTP -> HTTPS redirect.
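
To see why, note that the loop hinges on this one ConfigMap key interacting with TLS termination at the ELB (an illustrative sketch of the failure mode, not new configuration):

data:
  # With TLS terminated at the ELB, every request reaches NGINX as plain HTTP on
  # port 80. NGINX answers with a 308 to https://..., the client follows it, the
  # ELB terminates TLS and again forwards plain HTTP to port 80, and NGINX issues
  # another 308 - hence the loop.
  force-ssl-redirect: "true"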

What you expected to happen:

I expect http://example.com to be redirected to https://example.com by the ingress controller.

How to reproduce it (as minimally and precisely as possible):

Spin up an example with the settings above, a default backend, an ACM cert, and a dummy Ingress for it to attach to (see the sketch below), then attempt to request the http:// endpoint.
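
For completeness, the dummy Ingress can be as simple as this sketch (host, names and backend are illustrative placeholders):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dummy
  namespace: default
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: default-http-backend
          servicePort: 80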

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 24
  • Comments: 71 (5 by maintainers)

Most upvoted comments

I gave the workaround in the Stack Overflow post a try and got it working as well. I’ll point out the changes you have to make for the workaround, to make it a bit clearer why it works.

I’ll start with the configuration of the service/load balancer/ELB:

---
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # Enable PROXY protocol
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    # Specify SSL certificate to use
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:[...]
    # Use SSL on the HTTPS port
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      # We are using a target port of 8080 here instead of 80; this is to work around
      # https://github.com/kubernetes/ingress-nginx/issues/2724
      # This goes together with the `http-snippet` in the ConfigMap.
      targetPort: 8080
    - name: https
      port: 443
      targetPort: http

Three things to point out here:

  1. We enable the PROXY protocol on the ELB by setting the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol annotation.
  2. The ELB is configured to use the SSL certificate on port 443 (HTTPS).
  3. The non-TLS (plain HTTP) traffic is sent to port 8080 on Nginx instead of the default port 80. This allows us to differentiate in Nginx between traffic that arrived encrypted and traffic that didn’t, as the PROXY protocol doesn’t allow the ELB to pass an X-Forwarded-Proto header with the requests.

In the nginx-configuration ConfigMap I ended up with this:

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  use-proxy-protocol: "true"

  # Workaround for HTTP->HTTPS redirect not working when using the PROXY protocol:
  # https://github.com/kubernetes/ingress-nginx/issues/2724
  # It works by getting Nginx to listen on port 8080 on top of the standard 80 and 443,
  # and making any requests sent to port 8080 be responded to by this code, rather than
  # the normal port 80 handling.
  ssl-redirect: "false"
  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }

    server {
      listen 8080 proxy_protocol;
      return 308 https://$host$request_uri;
    }

This does the following:

  1. Enables the PROXY protocol with the use-proxy-protocol line.
  2. Turns off the HTTP->HTTPS redirect globally. This is because Nginx otherwise thinks all traffic received on port 80 is made over HTTP, and will then try to redirect it to HTTPS. This is what causes the redirect loop.
  3. The http-snippet contains two bits of Nginx configuration. The map statement is used to overrule the value that $pass_access_scheme otherwise gets set to here: https://github.com/kubernetes/ingress-nginx/blob/da32401c665c646954f79b61e9aa60ac562eb7b7/rootfs/etc/nginx/template/nginx.tmpl#L290-L294 This was necessary for me as some applications behind the ingress controller needed to know whether they were served over HTTP or HTTPS - either so they could enforce being served over HTTPS, or to generate correct URLs for links and assets. The map configured in the http-snippet is injected further down in the Nginx configuration and tricks Nginx into thinking all connections were made over HTTPS. The server directive sets up Nginx to listen on port 8080 as well as port 80, and any request made to that port will receive a 308 (Permanent Redirect) response, forwarding it to the HTTPS version of the URL.

An extra thing I changed which wasn’t mentioned on Stack Overflow, was that I changed the ports section of the Deployment from this:

ports:
  - name: http
    containerPort: 80
  - name: https
    containerPort: 443

to this:

ports:
  - name: http
    containerPort: 80
  - name: http-workaround
    containerPort: 8080

This makes the ports the Kubernetes pod accepts connections on match what we need.


I hope this is useful for other people. I don’t know if it would be worth adding to the documentation somewhere, or if it could inspire a slicker workaround.

I have a solution for NLB

ingress-nginx values:

controller:
  config:
    ssl-redirect: "false" # we use `special` port to control ssl redirection 
    server-snippet: |
      listen 8000;
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "your-arn"
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"

And add this annotation to your app

ingress:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

This will create port 8000 on the NGINX pod, and the Service will use this port for HTTPS requests. The server-snippet just checks whether the port is 80 (which is also 80 on the NLB); if so, it responds with status 308, and on other ports it does nothing.

I think the suggestion above misses the point a bit (at least for my use case), as it means you cannot use WebSockets or gRPC when the ELB runs in HTTP mode. It has to run in TCP/SSL mode (ideally with the PROXY protocol) for those features to be supported.
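
In practice that means ELB annotations along these lines (a sketch assembled from the configurations quoted elsewhere in this thread; the cert ARN is a placeholder):

annotations:
  # TCP mode keeps WebSockets and gRPC working end to end
  service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
  # The PROXY protocol recovers client connection details, since TCP listeners add no HTTP headers
  service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<ACM_CERT_ARN>"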

Hi folks, still experiencing this issue in 0.17.1 it seems

Using the latest Helm chart, here is what I did to make it work:

controller:
  config:
    ssl-redirect: "false" # we use `special` port to control ssl redirection
    server-snippet: |
      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
  containerPort:
    http: 80
    https: 443
    special: 8000
  service:
    targetPorts:
      http: http
      https: special

    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE>"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'

I agree with Tenzer, I am also trying to enable force ssl redirect while using websockets and get a 308 redirect loop when enabled. Currently I cannot enable ssl redirect until there is a fix for this. If anyone has a suggestion please let me know.

@walkafwalka, I ran into the same issue as you with apps which depend on X-Forwarded-Port. The solution below sets $proxy_port instead of the default $server_port. In my case, Jenkins with Keycloak redirection was seeing port 8000. This solved it:

location-snippet: |
  set $pass_server_port    $proxy_port;
server-snippet: |
  listen 8000;
  if ( $server_port = 80 ) {
    return 308 https://$host$request_uri;
  }
ssl-redirect: "false"

Same problem. I have to use WebSocket, so I’m not able to use HTTP and HTTPS in ELB ports, only TCP.

Ok, I managed to make it work (using the nginx-ingress helm chart).

If anyone’s curious what Helm chart values you need, here’s what I have, based on @Tenzer’s comment:

controller:
  service:
    targetPorts:
      https: http
      http: 9000
    
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ ADD SSL CERT HERE }}

  config:

    use-proxy-protocol: "true"

    # Workaround for HTTP->HTTPS redirect not working when using the PROXY protocol:
    # https://github.com/kubernetes/ingress-nginx/issues/2724
    # It works by getting Nginx to listen on port 9000 on top of the standard 80 and 443,
    # and making any requests sent to port 9000 be responded to by this code, rather than
    # the normal port 80 handling.
    ssl-redirect: "false"
    http-snippet: |
      map true $pass_access_scheme {
        default "https";
      }
      map true $pass_port {
        default 443;
      }

      server {
        listen 9000 proxy_protocol;
        return 307 https://$host$request_uri;
      }

Same situation: SSL terminating at the ELB using an ACM cert.

After thinking about this over the weekend I got it to work this morning. I had my ELB set up with the wrong protocols: I had it set to TCP and SSL… It needs to be HTTP and HTTPS.

So…

Make sure the ELB is set to load balance the HTTP and HTTPS protocols, not SSL or TCP, etc…

Double check that both HTTP and HTTPS balance to the same internal port, and set your SSL cert on your HTTPS 443 load balancer port.
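
For reference, a minimal Service matching that ELB setup might look like this (a sketch assembled from other comments in this thread; the cert ARN is a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    # HTTP/HTTPS listeners make the ELB add the X-Forwarded-* headers
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<ACM_CERT_ARN>"
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http   # both listeners balance to the same internal port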

In your NGINX ConfigMap:

use-proxy-protocol: "false"
force-ssl-redirect: "true"

Example below:

{
  "kind": "ConfigMap",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx-configuration",
    "namespace": "ingress-nginx",
    "selfLink": "/api/v1/namespaces/ingress-nginx/configmaps/nginx-configuration",
    "uid": "c8eddbd7-a17a-11e8-a3e5-12ca8f067004",
    "resourceVersion": "1265268",
    "creationTimestamp": "2018-08-16T17:35:36Z",
    "labels": {
      "app": "ingress-nginx"
    }
  },
  "data": {
    "client-body-buffer-size": "32M",
    "force-ssl-redirect": "true",
    "hsts": "true",
    "proxy-body-size": "1G",
    "proxy-buffering": "off",
    "proxy-read-timeout": "600",
    "proxy-send-timeout": "600",
    "redirect-to-https": "true",
    "server-tokens": "false",
    "ssl-redirect": "true",
    "upstream-keepalive-connections": "50",
    "use-proxy-protocol": "false"
  }
}

I opened a PR to add my solution to the NGINX ingress controller Helm chart, so it will be generally available for everyone 😃

@trueinviso Thank you! I have a similar setup (terminating TLS at the load balancer) and reverting from 0.22.0 to 0.21.0 fixed the infinite redirect loop for me.

@StepanKuksenko I haven’t used NGINX controllers in years; I hope this description is still valid.

why do we even need port 8000?

Because we need 80 to handle HTTP and respond with the 308, and 8000 is needed to handle HTTPS. Port 443 cannot be used because it is used by NGINX to terminate TLS, and in this case we want to terminate TLS on the NLB.

             ┌─────┐           ┌─────────┐        ┌───────┐
          ┌──┴─┐   │       ┌───┴┐        │    ┌───┴─┐     │
───http──▶│:80 │───┼─http─▶│:80 │────────┼───▶│ :80 │     │
          └──┬─┘   │       └───┬┘        │    └───┬─┘     │
             │     │           │         │    ┌───┴─┐     │
             │ NLB │           │ Service │    │:443 │Pod  │
             │     │           │         │    └───┬─┘     │
          ┌──┴─┐   │       ┌───┴┐        │    ┌───┴─┐     │
───https─▶│:443│───┼─http─▶│:443│────────┼───▶│:8000│     │
          └──┬─┘   │       └───┬┘        │    └───┬─┘     │
             └─────┘           └─────────┘        └───────┘

also, why do we need this “tcp”?

Because that is the spec; it accepts only “ssl” or “tcp”.

Thanks @KongZ, it works fine with NLB. Here are the changes I made, for anyone who does not use the chart. I deployed ingress-nginx from https://github.com/kubernetes/ingress-nginx/blob/controller-v0.34.1/deploy/static/provider/aws/deploy.yaml

  1. Edit the ConfigMap with kubectl edit configmaps -n ingress-nginx ingress-nginx-controller and add the following lines (note: the data section does not exist by default):
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"

Complete configmap as a reference:

apiVersion: v1
data:
  server-snippet: |
    listen 8000;
  ssl-redirect: "false"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":null,"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.34.1","helm.sh/chart":"ingress-nginx-2.11.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
  creationTimestamp: "2020-08-03T17:29:25Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    helm.sh/chart: ingress-nginx-2.11.1
  name: ingress-nginx-controller
  namespace: ingress-nginx

  2. Edit the ingress-nginx deployment:

kubectl edit deployments -n ingress-nginx ingress-nginx-controller

Add the following lines to the ports: section:

- containerPort: 8000
  name: special
  protocol: TCP

More context from the deployment:

livenessProbe:
  failureThreshold: 5
  httpGet:
    path: /healthz
    port: 10254
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
  name: http
  protocol: TCP
- containerPort: 443
  name: https
  protocol: TCP
- containerPort: 8000
  name: special
  protocol: TCP
- containerPort: 8443
  name: webhook
  protocol: TCP

When you save and exit, the Deployment will create a new ingress-nginx pod.

Finally, add the following annotation to your app’s Ingress:

nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

Complete app Ingress YAML:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apple-ingress
  namespace: apple
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }
spec:
  rules:
  - host: apple.mydomain.com
    http:
      paths:
        - path: /
          backend:
            serviceName: apple-service
            servicePort: 5678
  

Then apply it with kubectl apply -f ingress-apple.yml.

And let’s test it

$ curl -I http://apple.mydomain.com
HTTP/1.1 308 Permanent Redirect
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:47:59 GMT
Content-Type: text/html
Content-Length: 171
Connection: keep-alive
Location: https://apple.mydomain.com/



$  curl -I https://apple.mydomain.com
HTTP/1.1 200 OK
Server: nginx/1.19.1
Date: Tue, 04 Aug 2020 07:48:20 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 15
Connection: keep-alive
X-App-Name: http-echo
X-App-Version: 0.2.3

Thank you @KongZ for your suggestion. I will provide some more guidance for people coming across this and more options as I had a chance to take a thorough look at the code.

There are two choices for load balancers, at least when it comes to AWS. I am assuming you want to terminate TLS at the load balancer level and that we’re dealing strictly with HTTP and HTTPS. If you are interested in TCP or UDP, please check this insightful comment on this very issue.

ELB

ELB (although Classic, and due to be completely deprecated at some point) actually forwards the X-Forwarded-* headers, probably for historical reasons.

The NGINX controller supports these and can do redirection based on those headers. Here’s how your configuration would look with Helm:

  • The controller part
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-forwarded-headers: "true" # NGINX will now decide whether it will do redirection based on these headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "elb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
  • A sample ingress based on this controller definition:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>

NLB

There are two choices when it comes to NLBs. Unfortunately, at least from my point of view, the preferred option isn’t available at the time of this writing because of this open issue.

My preferred option (Not possible until this is resolved)

  • The controller part
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: <RELEASE_NAME>
  namespace: <NAMESPACE>
spec:
  chart:
    repository: https://kubernetes.github.io/ingress-nginx/
    name: ingress-nginx
    version: 2.11.1
  values:
    config:
      ssl-redirect: "false" # We don't need this as NGINX isn't using any TLS certificates itself
      use-proxy-protocol: "true" # Client connection details arrive via the PROXY protocol instead of forwarded headers
    controller:
      service:
        targetPorts:
          https: http # NGINX will never get HTTPS traffic, TLS is handled by load balancer
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<CERTIFICATE_ARN>"
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
          service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
  • An example ingress
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  name: <INGRESS_NAME>
  namespace: <NAMESPACE>
spec:
  rules:
    - host: <CUSTOM_HOST>
      http:
        paths:
          - path: /
            backend:
              serviceName: <WHATEVER>
              servicePort: <SOME_PORT>

Workaround by manipulating ports

Please check @KongZ comment on this issue.

@ssh2n Local or Cluster does not matter for SSL redirection. If you want all services to have SSL redirection, just put this in the server-snippet:

      listen 8000;
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }

But if you prefer to select which services require SSL redirection, then you need only:

      listen 8000;

And leave the 308 redirection to the nginx.ingress.kubernetes.io/server-snippet annotation.

controller.config.server-snippet adds the config to every NGINX server block, while the nginx.ingress.kubernetes.io/server-snippet annotation adds it only to the annotated server, as shown in the sketch below.
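
Side by side, the two placements look like this (a sketch; resource metadata is illustrative):

# Global: rendered into every server block the controller generates
controller:
  config:
    server-snippet: |
      listen 8000;

# Per-service: only the annotated Ingress gets the redirect
metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      if ( $server_port = 80 ) {
         return 308 https://$host$request_uri;
      }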

I’ve fixed it using this answer: https://stackoverflow.com/a/51936678/2956620

Looks like a crutch, but it works 😃

This also happens to me on 0.19: when we have an ELB on TCP to use with WebSockets, it results in a redirect loop similar to @Tenzer, @dthomason and @okgolove.

Let me share my settings that finally work. "redirect-to-https": "true" does not seem to be needed. Thank you @boxofnotgoodery.

In ConfigMap:

data:
  client-body-buffer-size: 32M
  hsts: "true"
  proxy-body-size: 1G
  proxy-buffering: "off"
  proxy-read-timeout: "600"
  proxy-send-timeout: "600"
  server-tokens: "false"
  ssl-redirect: "true"
  force-ssl-redirect: "true"
  upstream-keepalive-connections: "50"
  use-proxy-protocol: "false"

Also in Service:

  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http

@abjrcode It is in my answer, and it is the complete solution. Just configure the ingress-nginx values file and your app Ingress according to my comment.

https://github.com/kubernetes/ingress-nginx/issues/2724#issuecomment-593769295

Finally, this worked for me with the 1.27.0 chart. Thanks to everyone here; it was a mix of everything from above…

## nginx configuration
## Ref: https://github.com/helm/charts/tree/master/stable/nginx-ingress

controller:
  ingressClass: "<class>"
  replicaCount: 1
  containerPort:
    http: 80
    https: 443
    special: 8000
  config:
    ssl-redirect: "false"
    force-ssl-redirect: "false"
    use-proxy-protocol: "true"
    server-snippet: |
      listen 8000;
    http-snippet: |
      server {
        listen 8000 proxy_protocol;
        return 307 https://$host$request_uri;
      }
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
    type: RollingUpdate
  service:
    targetPorts:
      http: http
      https: special
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "<arn>"
      service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: <policy_id>
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'  ## So the load balancer sends the source IP address to the ingress
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'
    type: "LoadBalancer"
  extraArgs:
    enable-ssl-chain-completion: "false"
    v: 1
## Enable RBAC as per https://github.com/kubernetes/ingress/tree/master/examples/rbac/nginx and https://github.com/kubernetes/ingress/issues/266
rbac:
  create: true
serviceAccount:
  create: true

That would unfortunately not help in this case as the AWS ELB doesn’t generate any headers when the PROXY protocol is in use.

I’ve tried to upgrade an Nginx ingress controller to versions 0.24.0, 0.24.1 and 0.25.0, and from what I can see the problem is that X-Forwarded-Port and X-Forwarded-Proto are set to “80” and “http” respectively, meaning the backend server may think (if it checks these) that the request was served over HTTP, when it actually reached the AWS ELB over HTTPS. This is what the following code block in the original workaround was fixing:

  http-snippet: |
    map true $pass_access_scheme {
      default "https";
    }
    map true $pass_port {
      default 443;
    }

This workaround doesn’t work any more, as the two maps aren’t used further down in the generated config file. Each server {} block instead has a list of variables which are set based on variables provided by Nginx: https://github.com/kubernetes/ingress-nginx/blob/28cc3bb5e2f147d79f2fa7852838afbe9974a020/rootfs/etc/nginx/template/nginx.tmpl#L816-L820 These are then used inside the location / {} block to set the headers sent to the backend: https://github.com/kubernetes/ingress-nginx/blob/28cc3bb5e2f147d79f2fa7852838afbe9974a020/rootfs/etc/nginx/template/nginx.tmpl#L1205-L1206

I’ve tried various ways to change the value of these headers so that the port number is instead 443 and the protocol is “https”, but to no avail:

  • location-snippet in the ConfigMap: this is the most promising but I’ve only been able to append values onto the headers, not replace them.
  • server-snippet in the ConfigMap.
  • http-snippet in the ConfigMap.
  • nginx.ingress.kubernetes.io/server-snippet in the Ingress definition.

I have tried setting the $pass_port and $pass_access_scheme variables to other values, using proxy_set_header to send other values to the backend, and even more_set_input_headers from OpenResty: https://github.com/openresty/headers-more-nginx-module. None of them seems to have any effect on the passed headers, which seems odd to me.

In a test Nginx instance I tried to create a minimal configuration to create a test case for this, but I haven’t been able to reproduce it:

events {}

http {
    server {
        listen 8081;

        set $pass_access_scheme $scheme;

        # Position 1
        location / {
            # Position 2
            set $pass_access_scheme https;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_pass http://127.0.0.1:8080;
            # Position 3
        }
        # Position 4
    }
}

Regardless of which of the four commented positions I put set $pass_access_scheme https; in, the backend server will get X-Forwarded-Proto: https sent as a request header.

As long as we haven’t got a way to overrule the X-Forwarded-Port and X-Forwarded-Proto headers sent to the backend, I’m not sure the workaround will work in ingress-nginx versions newer than 0.23.0 😦

I’d be very interested to hear if anybody else can come up with a work around for changing the values of those headers.

I’m getting into a 308 loop, and in the browser I’m getting ERR_TOO_MANY_REDIRECTS.

Thank you @boxofnotgoodery, this works for us!