ingress-nginx: NGINX ingress controller reporting SSL errors and increased latency

NGINX Ingress controller version:

NGINX Ingress controller
  Release:       v0.46.0
  Build:         6348dde672588d5495f70ec77257c230dc8da134
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.6

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T21:04:39Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19+", GitVersion:"v1.19.13-eks-8df270", GitCommit:"8df2700a72a2598fa3a67c05126fa158fd839620", GitTreeState:"clean", BuildDate:"2021-07-31T01:36:57Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS EKS 1.19, platform version eks.7
  • OS (e.g. from /etc/os-release): Amazon Linux 2
  • Kernel (e.g. uname -a): 5.4.156-83.273.amzn2.x86_64
  • Install tools: eksctl
  • Basic cluster related info:
    • kubectl version: v1.21.3
    • kubectl get nodes -o wide:
    NAME             STATUS   ROLES    AGE   VERSION               INTERNAL-IP     EXTERNAL-IP   OS-IMAGE         KERNEL-VERSION                CONTAINER-RUNTIME
    <Node Name>      Ready    <none>   12d   v1.19.15-eks-9c63c4   10.0.0.50       <none>        Amazon Linux 2   5.4.156-83.273.amzn2.x86_64   docker://20.10.7
    
  • How was the ingress-nginx-controller installed: Installed with Helm (chart version 3.31.0) using the following values (a representative install command is sketched after them):
values.yaml

ingress-nginx:
  defaultBackend:
    enabled: true
  controller:
    priorityClassName: high
    service:
      externalTrafficPolicy: Cluster
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
        service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
        external-dns.alpha.kubernetes.io/ttl: '300'
    livenessProbe:
      initialDelaySeconds: 30
      timeoutSeconds: 10
    readinessProbe:
      initialDelaySeconds: 30
      timeoutSeconds: 10
    replicaCount: 3
    extraArgs:
      default-ssl-certificate: ingress/mysoluto.com
    resources:
      requests:
        cpu: 1
        memory: 1.5G
      limits:
        cpu: 2
        memory: 2G
    config:
      variables-hash-bucket-size: "256"
      http2-max-field-size: 8k
      large-client-header-buffers: 4 12k
      proxy-buffer-size: 32k
      vts-status-zone-size: 20m
      disable-access-log: 'false'
      ssl-ciphers: ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;
      hide-headers: Server,X-Powered-By,X-AspNet-Version,X-AspNet-Mvc-Version,x-envoy-upstream-service-time,x-envoy-decorator-operation
      server-tokens: 'False'
      ssl-protocols: TLSv1.3 TLSv1.2
      limit-req-status-code: 429
      log-format-escape-json: 'true'
      log-format-upstream: '{ "type": "access_logs", "ssl_protocl": "$ssl_protocol",
        "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr","x-forward-for":
        "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user":"$remote_user",
        "bytes_sent": $bytes_sent, "request_time": $request_time, "status":"$status",
        "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri","request_query":
        "$args", "request_length": $request_length, "duration": $request_time,"method":
        "$request_method", "http_referrer": "$http_referer", "http_user_agent":"$http_user_agent",
        "upstream": "$upstream_addr", "upstream_status": "$upstream_status", "upstream_latency":
        "$upstream_response_time", "ingress": "$ingress_name", "namespace": "$namespace"
        }'
      limit-conn-status-code: '429'
      worker-shutdown-timeout: 240s
      use-proxy-protocol: 'true'
      server-snippet: |
        location = /metrics {
          return 401;
        }
        location = /env {
          return 401;
        }
      http-snippet: |
        map $geoip_country_code $external_dashboard_allowed_country {
          default no;
          IL yes;
        }
    metrics:
      enabled: true
      service:
        labels:
          ingress-nginx.scrape: "true"
    autoscaling:
      enabled: true
      minReplicas: 3
      maxReplicas: 20
      targetCPUUtilizationPercentage: 60
      targetMemoryUtilizationPercentage: 81
    admissionWebhooks:
      enabled: true
      patch:
        enabled: true
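
For reference, the install boils down to something like the command below. The release name and chart repo URL here are assumptions on my side, and since the values above are nested under an ingress-nginx: key they are presumably passed through a wrapper chart, so adjust the nesting if you use the values file directly:

# representative install command (release name / repo URL are assumptions)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress --create-namespace \
  --version 3.31.0 \
  -f values.yaml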

  • Current State of the controller:
Ingress Controller Deployment

Name:                   ingress-nginx-controller
Namespace:              ingress
CreationTimestamp:      Sun, 12 Dec 2021 19:23:12 +0200
Labels:                 app.kubernetes.io/component=controller
                        app.kubernetes.io/instance=ingress-nginx
                        app.kubernetes.io/managed-by=Helm
                        app.kubernetes.io/name=ingress-nginx
                        app.kubernetes.io/version=0.50.0
                        argocd.argoproj.io/instance=ingress-nginx
                        helm.sh/chart=ingress-nginx-3.40.0
Annotations:            deployment.kubernetes.io/revision: 7
Selector:               app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:           app.kubernetes.io/component=controller
                    app.kubernetes.io/instance=ingress-nginx
                    app.kubernetes.io/name=ingress-nginx
  Annotations:      kubectl.kubernetes.io/restartedAt: 2021-12-22T19:35:40+02:00
  Service Account:  ingress-nginx
  Containers:
   controller:
    Image:       k8s.gcr.io/ingress-nginx/controller:v0.50.0@sha256:f46fc2d161c97a9d950635acb86fb3f8d4adcfb03ee241ea89c6cde16aa3fdf8
    Ports:       80/TCP, 443/TCP, 10254/TCP, 8443/TCP
    Host Ports:  0/TCP, 0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
      --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
      --election-id=ingress-controller-leader
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
      --default-ssl-certificate=ingress/mysoluto.com
      --v=5
    Limits:
      cpu:     2
      memory:  2G
    Requests:
      cpu:      1
      memory:   1500M
    Liveness:   http-get http://:10254/healthz delay=30s timeout=10s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=30s timeout=10s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:        (v1:metadata.name)
      POD_NAMESPACE:   (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
  Volumes:
   webhook-cert:
    Type:               Secret (a volume populated by a Secret)
    SecretName:         ingress-nginx-admission
    Optional:           false
  Priority Class Name:  high
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   ingress-nginx-controller-8c7c868d8 (3/3 replicas created)
Events:          <none>

Ingress Controller Service

Name:                     ingress-nginx-controller
Namespace:                ingress
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=ingress-nginx
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/version=0.50.0
                          argocd.argoproj.io/instance=ingress-nginx
                          helm.sh/chart=ingress-nginx-3.40.0
Annotations:              external-dns.alpha.kubernetes.io/hostname: test-kubernetes-upgrade-with-cilium.nonprod.kube.mysoluto.com
                          external-dns.alpha.kubernetes.io/ttl: 300
                          service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags:
                            NAME=k8s-test-kubernetes-upgrade-with-cilium,BUSINESS_UNIT=PSS,PLATFORM=TLV-DEVOPS,CLIENT=MULTI_TENANT,BUSINESS_REGION=GLOBAL
                          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true
                          service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Families:              <none>
IP:                       10.100.90.219
IPs:                      <none>
LoadBalancer Ingress:     <elb-host>.us-east-2.elb.amazonaws.com
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31682/TCP
Endpoints:                <3 endpoints>
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32495/TCP
Endpoints:                <3 endpoints>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Ingress Resource

Name:             secure-demo-echo-service
Namespace:        default
Address:          <elb-host>.us-east-2.elb.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  mysoluto.com terminates secure-demo-echo-service-load.mysoluto.com
Rules:
  Host                                        Path  Backends
  ----                                        ----  --------
  secure-demo-echo-service-load.mysoluto.com  
                                              /   demo-echo-service:80 (<all backends> + 7 more...)
Annotations:                                  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:                                       <none>

What happened:

I recently upgraded my EKS Kubernetes cluster from 1.18 to 1.19 and started to experience elevated HTTP latency. All requests enter the cluster through the nginx-ingress-controller. Once I noticed the high latency, I load-tested the cluster with both Apache Bench and loadtest; both showed high latency and multiple errors (results below).
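
The tests were run roughly like this (the exact flags are approximate; the request count and concurrency match the output below):

ab -n 10000 -c 30 https://secure-demo-echo-service-load.mysoluto.com/
npx loadtest -n 10000 -c 30 https://secure-demo-echo-service-load.mysoluto.com/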

Results:

Apache Bench results

Completed 10000 requests
Finished 10000 requests


Server Software:        
Server Hostname:        <elb-host>.us-east-2.elb.amazonaws.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
TLS Server Name:        secure-demo-echo-service-load.mysoluto.com

Document Path:          /
Document Length:        20 bytes

Concurrency Level:      30
Time taken for tests:   2882.171 seconds
Complete requests:      10000
Failed requests:        1369
   (Connect: 0, Receive: 0, Length: 1369, Exceptions: 0)
Total transferred:      1898380 bytes
HTML transferred:       172580 bytes
Requests per second:    3.47 [#/sec] (mean)
Time per request:       8646.513 [ms] (mean)
Time per request:       288.217 [ms] (mean, across all concurrent requests)
Transfer rate:          0.64 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   37  15.1     43      82
Processing:    12 8179 20494.0     14   60748
Waiting:        0   14   1.2     14      39
Total:         52 8216 20479.1     58   60748

Percentage of the requests served within a certain time (ms)
  50%     58
  66%     59
  75%     60
  80%     61
  90%  59588
  95%  59863
  98%  59948
  99%  60000
 100%  60748 (longest request)

I also received lots of SSL handshake failures during the test:

SSL handshake failed (5)
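
To look at the TLS handshake directly against the ELB (outside of ab), something like this can be used; the hostnames are the placeholders from above, and -tls1_2 is optional:

openssl s_client -connect <elb-host>.us-east-2.elb.amazonaws.com:443 \
  -servername secure-demo-echo-service-load.mysoluto.com -tls1_2 < /dev/null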

For comparison, running the same test against the EKS Kubernetes v1.18 cluster produces normal results, with no added latency and no errors:

EKS Kubernetes v1.18 results

Server Software:        
Server Hostname:        <elb-host>.us-east-1.elb.amazonaws.com
Server Port:            443
SSL/TLS Protocol:       TLSv1.2,ECDHE-RSA-AES256-GCM-SHA384,2048,256
TLS Server Name:        secure-demo-echo-service-load.mysoluto.com

Document Path:          /
Document Length:        20 bytes

Concurrency Level:      30
Time taken for tests:   6.781 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2200000 bytes
HTML transferred:       200000 bytes
Requests per second:    1474.68 [#/sec] (mean)
Time per request:       20.343 [ms] (mean)
Time per request:       0.678 [ms] (mean, across all concurrent requests)
Transfer rate:          316.82 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        8   15   3.3     15      50
Processing:     2    5   2.9      4      46
Waiting:        2    5   2.7      4      45
Total:         10   20   4.8     19      69

Percentage of the requests served within a certain time (ms)
  50%     19
  66%     21
  75%     22
  80%     23
  90%     25
  95%     28
  98%     33
  99%     38
 100%     69 (longest request)

After reviewing the results, I went through the nginx-ingress-controller logs (with the log level raised to debug) and saw SSL errors such as SSL_do_handshake: -1, SSL_get_error: 2 and SSL_get_error: 6.
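
The logs below were pulled roughly like this (deployment and namespace names as in the describe output above):

kubectl -n ingress logs deploy/ingress-nginx-controller --since=10m \
  | grep -E 'SSL_do_handshake|SSL_get_error|ssl handshake'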

Detailed SSL errors

2021/12/26 18:37:41 [debug] 3524#3524: *3412126 http check ssl handshake
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 http check ssl handshake
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 http check ssl handshake
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 https ssl handshake: 0x16
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL server name: "<my-dns>"
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 ssl cert: connection reusable: 0
2021/12/26 18:37:41 [debug] 3524#3524: *3412127 code cache lookup (key='ssl_certificate_by_lua_nhli_eb62659df6af69ef3ea4b2f4d168e185', ref=-1)
2021/12/26 18:37:41 [debug] 3524#3524: *3412127 code cache miss (key='ssl_certificate_by_lua_nhli_eb62659df6af69ef3ea4b2f4d168e185', ref=-1)
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL ALPN supported by client: h2
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL ALPN supported by client: http/1.1
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL ALPN selected: h2
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL_do_handshake: -1
2021/12/26 18:37:41 [debug] 3524#3524: *3412126 SSL_get_error: 2
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL handshake handler: 0
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 ssl new session: 5B33F3D8:32:216
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_do_handshake: 1
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: -1
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_get_error: 2
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 27
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 13
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL to write: 40
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_write: 40
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 24
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 27
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 13
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 56
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: -1
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_get_error: 2
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL to write: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_write: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 95
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 20
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL to write: 133
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_write: 133
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL buf copy: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL to write: 9
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_write: 9
{ "type": "access_logs", "ssl_protocl": "TLSv1.2", "time": "2021-12-26T18:37:42+00:00", "remote_addr": "<remote-addr>", "x-forward-for": "<remote-addr>", "request_id": "3a231882e6e71870cfbc6491c1f2cf24", "remote_user": "", "bytes_sent": 142, "request_time": 0.001, "status": "200", "vhost": "<vhost>", "request_proto": "HTTP/2.0", "path": "/", "request_query": "", "request_length": 47, "duration": 0.001, "method": "GET", "http_referrer": "", "http_user_agent": "curl/7.77.0", "upstream": "<upstream>", "upstream_status": "200", "upstream_latency": "0.000", "ingress": "<ingress-name>", "namespace": "default" }
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_read: 0
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_get_error: 6
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 peer shutdown SSL cleanly
2021/12/26 18:37:42 [debug] 3524#3524: *3412126 SSL_shutdown: 1

I can't pin down why this is happening. I'm using a certificate with an intermediate certificate chain (both my certificate and the intermediate certificate are in the tls.crt file). I also tried a self-signed certificate to rule out the certificate type or chain length as the cause, but the errors keep happening.
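
The chain stored in the secret can be inspected with something like this (secret name and namespace taken from the manifests below):

kubectl -n default get secret mysoluto.com -o jsonpath='{.data.tls\.crt}' | base64 -d > tls-chain.pem
openssl crl2pkcs7 -nocrl -certfile tls-chain.pem | openssl pkcs7 -print_certs -noout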

What you expected to happen:

No SSL errors and no extra latency from the new cluster, since it runs the same version of nginx-ingress-controller.

This might be related to the Kubernetes upgrade or to the kernel upgrade: the healthy cluster runs a Linux distribution with a lower kernel version, 4.14.248-189.473.amzn2.x86_64.
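
Kernel versions per node can be compared between the two clusters with:

kubectl get nodes -o custom-columns='NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion,KUBELET:.status.nodeInfo.kubeletVersion'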

How to reproduce it:

I upgraded from EKS Kubernetes v1.18 to v1.19 and the issue started. I'm using the nginx-ingress-controller from this repo (Helm chart deployment, as described above).

This is my ingress resource for testing:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
  name: secure-demo-echo-service
  namespace: default
spec:
  rules:
  - host: secure-demo-echo-service-load.amit.test
    http:
      paths:
      - backend:
          serviceName: demo-echo-service
          servicePort: 80
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - secure-demo-echo-service-load.amit.test
    secretName: mysoluto.com
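
A single request against this ingress can be sent like this, keeping the SNI/Host header intact while pointing at the ELB (the ELB hostname is a placeholder, and -k is only needed for the self-signed test certificate):

curl -vk \
  --connect-to secure-demo-echo-service-load.amit.test:443:<elb-host>.us-east-2.elb.amazonaws.com:443 \
  https://secure-demo-echo-service-load.amit.test/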

This is my tls secret resource:

apiVersion: v1
data:
  tls.crt: LS0<REDACTED>tLQ==
  tls.key: LS0<REDACTED>Cgo=
kind: Secret
metadata:
  name: mysoluto.com
  namespace: default
type: kubernetes.io/tls
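A secret like this can be created from the PEM files (tls.crt containing the leaf certificate followed by the intermediate), for example:

kubectl -n default create secret tls mysoluto.com --cert=tls.crt --key=tls.key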
The generated nginx.conf on the ingress controller pod


# Configuration checksum: 422160808234481805

# setup custom paths that do not require root access
pid /tmp/nginx.pid;

daemon off;

worker_processes 2;

worker_rlimit_nofile 523264;

worker_shutdown_timeout 240s ;

events {
	multi_accept        on;
	worker_connections  16384;
	use                 epoll;
}

http {
	lua_package_path "/etc/nginx/lua/?.lua;;";
	
	lua_shared_dict balancer_ewma 10M;
	lua_shared_dict balancer_ewma_last_touched_at 10M;
	lua_shared_dict balancer_ewma_locks 1M;
	lua_shared_dict certificate_data 20M;
	lua_shared_dict certificate_servers 5M;
	lua_shared_dict configuration_data 20M;
	lua_shared_dict global_throttle_cache 10M;
	lua_shared_dict ocsp_response_cache 5M;
	
	init_by_lua_block {
		collectgarbage("collect")
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "lua_ingress")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		lua_ingress = res
		lua_ingress.set_config({
			use_forwarded_headers = false,
			use_proxy_protocol = true,
			is_ssl_passthrough_enabled = false,
			http_redirect_code = 308,
		listen_ports = { ssl_proxy = "442", https = "443" },
			
			hsts = true,
			hsts_max_age = 15724800,
			hsts_include_subdomains = true,
			hsts_preload = false,
			
			global_throttle = {
				memcached = {
					host = "", port = 11211, connect_timeout = 50, max_idle_timeout = 10000, pool_size = 50,
				},
				status_code = 429,
			}
		})
		end
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		configuration.prohibited_localhost_port = '10246'
		end
		
		ok, res = pcall(require, "balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		balancer = res
		end
		
		ok, res = pcall(require, "monitor")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		monitor = res
		end
		
		ok, res = pcall(require, "certificate")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		certificate = res
		certificate.is_ocsp_stapling_enabled = false
		end
		
		ok, res = pcall(require, "plugins")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		plugins = res
		end
		-- load all plugins that'll be used here
	plugins.init({  })
	}
	
	init_worker_by_lua_block {
		lua_ingress.init_worker()
		balancer.init_worker()
		
		monitor.init_worker(10000)
		
		plugins.run()
	}
	
	real_ip_header      proxy_protocol;
	
	real_ip_recursive   on;
	
	set_real_ip_from    0.0.0.0/0;
	
	geoip_country       /etc/nginx/geoip/GeoIP.dat;
	geoip_city          /etc/nginx/geoip/GeoLiteCity.dat;
	geoip_org           /etc/nginx/geoip/GeoIPASNum.dat;
	geoip_proxy_recursive on;
	
	aio                 threads;
	aio_write           on;
	
	tcp_nopush          on;
	tcp_nodelay         on;
	
	log_subrequest      on;
	
	reset_timedout_connection on;
	
	keepalive_timeout  75s;
	keepalive_requests 100;
	
	client_body_temp_path           /tmp/client-body;
	fastcgi_temp_path               /tmp/fastcgi-temp;
	proxy_temp_path                 /tmp/proxy-temp;
	ajp_temp_path                   /tmp/ajp-temp;
	
	client_header_buffer_size       1k;
	client_header_timeout           60s;
	large_client_header_buffers     4 12k;
	client_body_buffer_size         8k;
	client_body_timeout             60s;
	
	http2_max_field_size            8k;
	http2_max_header_size           16k;
	http2_max_requests              1000;
	http2_max_concurrent_streams    128;
	
	types_hash_max_size             2048;
	server_names_hash_max_size      1024;
	server_names_hash_bucket_size   128;
	map_hash_bucket_size            64;
	
	proxy_headers_hash_max_size     512;
	proxy_headers_hash_bucket_size  64;
	
	variables_hash_bucket_size      256;
	variables_hash_max_size         2048;
	
	underscores_in_headers          off;
	ignore_invalid_headers          on;
	
	limit_req_status                429;
	limit_conn_status               429;
	
	include /etc/nginx/mime.types;
	default_type text/html;
	
	# Custom headers for response
	
	server_tokens off;
	
	more_clear_headers Server;
	
	# disable warnings
	uninitialized_variable_warn off;
	
	# Additional available variables:
	# $namespace
	# $ingress_name
	# $service_name
	# $service_port
log_format upstreaminfo escape=json '{ "type": "access_logs", "ssl_protocl": "$ssl_protocol", "time": "$time_iso8601", "remote_addr": "$proxy_protocol_addr","x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user":"$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status":"$status", "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri","request_query": "$args", "request_length": $request_length, "duration": $request_time,"method": "$request_method", "http_referrer": "$http_referer", "http_user_agent":"$http_user_agent", "upstream": "$upstream_addr", "upstream_status": "$upstream_status", "upstream_latency": "$upstream_response_time", "ingress": "$ingress_name", "namespace": "$namespace" }';
	
	map $request_uri $loggable {
		
		default 1;
	}
	
	access_log /var/log/nginx/access.log upstreaminfo  if=$loggable;
	
	error_log  /var/log/nginx/error.log debug;
	
	resolver 10.100.0.10 valid=30s ipv6=off;
	
	# See https://www.nginx.com/blog/websocket-nginx
	map $http_upgrade $connection_upgrade {
		default          upgrade;
		
		# See http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive
		''               '';
		
	}
	
	# Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
	# If no such header is provided, it can provide a random value.
	map $http_x_request_id $req_id {
		default   $http_x_request_id;
		
		""        $request_id;
		
	}
	
	# Create a variable that contains the literal $ character.
	# This works because the geo module will not resolve variables.
	geo $literal_dollar {
		default "$";
	}
	
	server_name_in_redirect off;
	port_in_redirect        off;
	
	ssl_protocols TLSv1.3 TLSv1.2;
	
	ssl_early_data off;
	
	# turn on session caching to drastically improve performance
	
	ssl_session_cache builtin:1000 shared:SSL:10m;
	ssl_session_timeout 10m;
	
	# allow configuring ssl session tickets
	ssl_session_tickets off;
	
	# slightly reduce the time-to-first-byte
	ssl_buffer_size 4k;
	
	# allow configuring custom ssl ciphers
	ssl_ciphers 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS;';
	ssl_prefer_server_ciphers on;
	
	ssl_ecdh_curve auto;
	
	# PEM sha: 98156ccfa5427fceaf47e306ec0740369e0cb9bd
	ssl_certificate     /etc/ingress-controller/ssl/ingress-mysoluto.com.pem;
	ssl_certificate_key /etc/ingress-controller/ssl/ingress-mysoluto.com.pem;
	
	proxy_ssl_session_reuse on;
	
	proxy_hide_header Server;
	proxy_hide_header X-Powered-By;
	proxy_hide_header X-AspNet-Version;
	proxy_hide_header X-AspNet-Mvc-Version;
	proxy_hide_header x-envoy-upstream-service-time;
	proxy_hide_header x-envoy-decorator-operation;
	
	# Custom code snippet configured in the configuration configmap
	map $geoip_country_code $external_dashboard_allowed_country {
		default no;
		IL yes;
	}
	
	upstream upstream_balancer {
		### Attention!!!
		#
		# We no longer create "upstream" section for every backend.
		# Backends are handled dynamically using Lua. If you would like to debug
		# and see what backends ingress-nginx has in its memory you can
		# install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
		# Once you have the plugin you can use "kubectl ingress-nginx backends" command to
		# inspect current backends.
		#
		###
		
		server 0.0.0.1; # placeholder
		
		balancer_by_lua_block {
			balancer.balance()
		}
		
		keepalive 320;
		
		keepalive_timeout  60s;
		keepalive_requests 10000;
		
	}
	
	# Cache for internal auth checks
	proxy_cache_path /tmp/nginx-cache-auth levels=1:2 keys_zone=auth_cache:10m max_size=128m inactive=30m use_temp_path=off;
	
	# Global filters
	
	## start server _
	server {
		server_name _ ;
		
		listen 80 proxy_protocol default_server reuseport backlog=4096 ;
		listen 443 proxy_protocol default_server reuseport backlog=4096 ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "";
			set $ingress_name   "";
			set $service_name   "";
			set $service_port   "";
			set $location_path  "";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = false,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			access_log off;
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# health checks in cloud providers require the use of port 80
		location /healthz {
			
			access_log off;
			return 200;
		}
		
		# this is required to avoid error if nginx is being monitored
		# with an external software (like sysdig)
		location /nginx_status {
			
			allow 127.0.0.1;
			
			deny all;
			
			access_log off;
			stub_status on;
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server _
	
	## start server argocd-badge-test-kubernetes-upgrade-with-cilium.mysoluto.com
	server {
		server_name argocd-badge-test-kubernetes-upgrade-with-cilium.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location /api/webhook/ {
			
			set $namespace      "argocd";
			set $ingress_name   "argocd-server-badge-ingress";
			set $service_name   "argocd-server";
			set $service_port   "80";
			set $location_path  "/api/webhook";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "argocd-argocd-server-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location = /api/webhook {
			
			set $namespace      "argocd";
			set $ingress_name   "argocd-server-badge-ingress";
			set $service_name   "argocd-server";
			set $service_port   "80";
			set $location_path  "/api/webhook";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "argocd-argocd-server-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location /api/badge/ {
			
			set $namespace      "argocd";
			set $ingress_name   "argocd-server-badge-ingress";
			set $service_name   "argocd-server";
			set $service_port   "80";
			set $location_path  "/api/badge";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "argocd-argocd-server-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location = /api/badge {
			
			set $namespace      "argocd";
			set $ingress_name   "argocd-server-badge-ingress";
			set $service_name   "argocd-server";
			set $service_port   "80";
			set $location_path  "/api/badge";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "argocd-argocd-server-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location / {
			
			set $namespace      "argocd";
			set $ingress_name   "argocd-server-badge-ingress";
			set $service_name   "";
			set $service_port   "";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "upstream-default-backend";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server argocd-badge-test-kubernetes-upgrade-with-cilium.mysoluto.com
	
	## start server k-t.mysoluto.com
	server {
		server_name k-t.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location /oauth2/ {
			
			set $namespace      "external-dashboard";
			set $ingress_name   "kubernetes-dashboard-external-oauth";
			set $service_name   "oauth-proxy-external-name";
			set $service_port   "80";
			set $location_path  "/oauth2";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "external-dashboard-oauth-proxy-external-name-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location = /oauth2 {
			
			set $namespace      "external-dashboard";
			set $ingress_name   "kubernetes-dashboard-external-oauth";
			set $service_name   "oauth-proxy-external-name";
			set $service_port   "80";
			set $location_path  "/oauth2";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "external-dashboard-oauth-proxy-external-name-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		location = /_external-auth-Lw-Prefix {
			internal;
			
			# ngx_auth_request module overrides variables in the parent request,
			# therefore we have to explicitly set this variable again so that when the parent request
			# resumes it has the correct value set for this variable so that Lua can pick backend correctly
			set $proxy_upstream_name "external-dashboard-kubernetes-dashboard-external-80";
			
			proxy_pass_request_body     off;
			proxy_set_header            Content-Length          "";
			proxy_set_header            X-Forwarded-Proto       "";
			proxy_set_header            X-Request-ID            $req_id;
			
			proxy_set_header            Host                    $host;
			proxy_set_header            X-Original-URL          $scheme://$http_host$request_uri;
			proxy_set_header            X-Original-Method       $request_method;
			proxy_set_header            X-Sent-From             "nginx-ingress-controller";
			proxy_set_header            X-Real-IP               $remote_addr;
			
			proxy_set_header            X-Forwarded-For        $remote_addr;
			
			proxy_set_header            X-Auth-Request-Redirect $request_uri;
			
			proxy_buffering                         off;
			
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_ssl_server_name       on;
			proxy_pass_request_headers  on;
			
			client_max_body_size        1m;
			
			# Pass the extracted client certificate to the auth provider
			
			set $target https://$host/oauth2/auth;
			proxy_pass $target;
		}
		
		location @027d07cc98db980f931043f6d5b9d77628a77e87 {
			internal;
			
			add_header Set-Cookie $auth_cookie;
			
			return 302 https://$host/oauth2/start?rd=$request_uri;
		}
		
		location / {
			
			set $namespace      "external-dashboard";
			set $ingress_name   "kubernetes-dashboard-external";
			set $service_name   "kubernetes-dashboard-external";
			set $service_port   "80";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "external-dashboard-kubernetes-dashboard-external-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			# this location requires authentication
			auth_request        /_external-auth-Lw-Prefix;
			auth_request_set    $auth_cookie $upstream_http_set_cookie;
			add_header          Set-Cookie $auth_cookie;
			auth_request_set $authHeader0 $upstream_http_authorization;
			proxy_set_header 'Authorization' $authHeader0;
			
			set_escape_uri $escaped_request_uri $request_uri;
			error_page 401 = @027d07cc98db980f931043f6d5b9d77628a77e87;
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			auth_request_set $name_upstream_1 $upstream_cookie_name_1;
			access_by_lua_block {
				if ngx.var.name_upstream_1 ~= "" then
				ngx.header["Set-Cookie"] = "name_1=" .. ngx.var.name_upstream_1 .. ngx.var.auth_cookie:match("(; .*)")
				end
			}
			set $geoip_blocked 0;
			if ($external_dashboard_allowed_country = no) {
				set $geoip_blocked 1;
			}
			if ($remote_addr = 13.94.214.154) {
				set $geoip_blocked 0;
			}
			if ($geoip_blocked = 1) {
				return 403 "403 The Dashboard is Forbidden in your country.";
			}
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server k-t.mysoluto.com
	
	## start server kafka-load-proxy.mysoluto.com
	server {
		server_name kafka-load-proxy.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "team-ae-platform";
			set $ingress_name   "kafka-proxy";
			set $service_name   "kafka-load-proxy";
			set $service_port   "20275";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "team-ae-platform-kafka-load-proxy-20275";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass https://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server kafka-load-proxy.mysoluto.com
	
	## start server kafka-primary-proxy.mysoluto.com
	server {
		server_name kafka-primary-proxy.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "team-ae-platform";
			set $ingress_name   "kafka-proxy";
			set $service_name   "kafka-primary-proxy";
			set $service_port   "20275";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "team-ae-platform-kafka-primary-proxy-20275";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass https://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server kafka-primary-proxy.mysoluto.com
	
	## start server kafka-secondary-proxy.mysoluto.com
	server {
		server_name kafka-secondary-proxy.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "team-ae-platform";
			set $ingress_name   "kafka-proxy";
			set $service_name   "kafka-secondary-proxy";
			set $service_port   "20275";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "team-ae-platform-kafka-secondary-proxy-20275";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass https://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server kafka-secondary-proxy.mysoluto.com
	
	## start server prometheus-test-kubernetes-upgrade-with-cilium.mysoluto.com
	server {
		server_name prometheus-test-kubernetes-upgrade-with-cilium.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		# PEM sha: 
		ssl_client_certificate                  /etc/ingress-controller/ssl/ca-monitoring-promcaingress.pem;
		ssl_verify_client                       on;
		ssl_verify_depth                        3;
		
		location / {
			
			set $namespace      "monitoring";
			set $ingress_name   "prometheus-ca";
			set $service_name   "prometheus-stack-kube-prom-prometheus";
			set $service_port   "9090";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = true,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "monitoring-prometheus-stack-kube-prom-prometheus-9090";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			proxy_set_header ssl-client-verify      $ssl_client_verify;
			proxy_set_header ssl-client-subject-dn  $ssl_client_s_dn;
			proxy_set_header ssl-client-issuer-dn   $ssl_client_i_dn;
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server prometheus-test-kubernetes-upgrade-with-cilium.mysoluto.com
	
	## start server secure-demo-echo-service-load.amit.test
	server {
		server_name secure-demo-echo-service-load.amit.test ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		# Custom code snippet configured for host secure-demo-echo-service-load.amit.test
		proxy_ssl_server_name   on;
		
		location / {
			
			set $namespace      "default";
			set $ingress_name   "secure-demo-echo-service-amit-test";
			set $service_name   "demo-echo-service";
			set $service_port   "80";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = false,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "default-demo-echo-service-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server secure-demo-echo-service-load.amit.test
	
	## start server secure-demo-echo-service-load.mysoluto.com
	server {
		server_name secure-demo-echo-service-load.mysoluto.com ;
		
		listen 80 proxy_protocol ;
		listen 443 proxy_protocol ssl http2 ;
		
		set $proxy_upstream_name "-";
		
		ssl_certificate_by_lua_block {
			certificate.call()
		}
		
		location / {
			
			set $namespace      "default";
			set $ingress_name   "secure-demo-echo-service";
			set $service_name   "demo-echo-service";
			set $service_port   "80";
			set $location_path  "/";
			set $global_rate_limit_exceeding n;
			
			rewrite_by_lua_block {
				lua_ingress.rewrite({
					force_ssl_redirect = false,
					ssl_redirect = false,
					force_no_ssl_redirect = false,
					use_port_in_redirects = false,
				global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
				})
				balancer.rewrite()
				plugins.run()
			}
			
			# be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
			# will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
			# other authentication method such as basic auth or external auth useless - all requests will be allowed.
			#access_by_lua_block {
			#}
			
			header_filter_by_lua_block {
				lua_ingress.header()
				plugins.run()
			}
			
			body_filter_by_lua_block {
				plugins.run()
			}
			
			log_by_lua_block {
				balancer.log()
				
				monitor.call()
				
				plugins.run()
			}
			
			port_in_redirect off;
			
			set $balancer_ewma_score -1;
			set $proxy_upstream_name "default-demo-echo-service-80";
			set $proxy_host          $proxy_upstream_name;
			set $pass_access_scheme  $scheme;
			
			set $pass_server_port    $proxy_protocol_server_port;
			
			set $best_http_host      $http_host;
			set $pass_port           $pass_server_port;
			
			set $proxy_alternative_upstream_name "";
			
			client_max_body_size                    1m;
			
			proxy_set_header Host                   $best_http_host;
			
			# Pass the extracted client certificate to the backend
			
			# Allow websocket connections
			proxy_set_header                        Upgrade           $http_upgrade;
			
			proxy_set_header                        Connection        $connection_upgrade;
			
			proxy_set_header X-Request-ID           $req_id;
			proxy_set_header X-Real-IP              $remote_addr;
			
			proxy_set_header X-Forwarded-For        $remote_addr;
			
			proxy_set_header X-Forwarded-Host       $best_http_host;
			proxy_set_header X-Forwarded-Port       $pass_port;
			proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
			
			proxy_set_header X-Scheme               $pass_access_scheme;
			
			# Pass the original X-Forwarded-For
			proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
			
			# mitigate HTTPoxy Vulnerability
			# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
			proxy_set_header Proxy                  "";
			
			# Custom headers to proxied server
			
			proxy_connect_timeout                   5s;
			proxy_send_timeout                      60s;
			proxy_read_timeout                      60s;
			
			proxy_buffering                         off;
			proxy_buffer_size                       32k;
			proxy_buffers                           4 32k;
			
			proxy_max_temp_file_size                1024m;
			
			proxy_request_buffering                 on;
			proxy_http_version                      1.1;
			
			proxy_cookie_domain                     off;
			proxy_cookie_path                       off;
			
			# In case of errors try the next upstream server before returning an error
			proxy_next_upstream                     error timeout;
			proxy_next_upstream_timeout             0;
			proxy_next_upstream_tries               3;
			
			proxy_pass http://upstream_balancer;
			
			proxy_redirect                          off;
			
		}
		
		# Custom code snippet configured in the configuration configmap
		location = /metrics {
			return 401;
		}
		location = /env {
			return 401;
		}
		
	}
	## end server secure-demo-echo-service-load.mysoluto.com
	
	# backend for when default-backend-service is not configured or it does not have endpoints
	server {
		listen 8181 default_server reuseport backlog=4096;
		
		set $proxy_upstream_name "internal";
		
		access_log off;
		
		location / {
			return 404;
		}
	}
	
	# default server, used for NGINX healthcheck and access to nginx stats
	server {
		listen 127.0.0.1:10246;
		set $proxy_upstream_name "internal";
		
		keepalive_timeout 0;
		gzip off;
		
		access_log off;
		
		location /healthz {
			return 200;
		}
		
		location /is-dynamic-lb-initialized {
			content_by_lua_block {
				local configuration = require("configuration")
				local backend_data = configuration.get_backends_data()
				if not backend_data then
				ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
				return
				end
				
				ngx.say("OK")
				ngx.exit(ngx.HTTP_OK)
			}
		}
		
		location /nginx_status {
			stub_status on;
		}
		
		location /configuration {
			client_max_body_size                    21m;
			client_body_buffer_size                 21m;
			proxy_buffering                         off;
			
			content_by_lua_block {
				configuration.call()
			}
		}
		
		location / {
			content_by_lua_block {
				ngx.exit(ngx.HTTP_NOT_FOUND)
			}
		}
	}
}

stream {
	lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;;";
	
	lua_shared_dict tcp_udp_configuration_data 5M;
	
	init_by_lua_block {
		collectgarbage("collect")
		
		-- init modules
		local ok, res
		
		ok, res = pcall(require, "configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		configuration = res
		end
		
		ok, res = pcall(require, "tcp_udp_configuration")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_configuration = res
		tcp_udp_configuration.prohibited_localhost_port = '10246'
		
		end
		
		ok, res = pcall(require, "tcp_udp_balancer")
		if not ok then
		error("require failed: " .. tostring(res))
		else
		tcp_udp_balancer = res
		end
	}
	
	init_worker_by_lua_block {
		tcp_udp_balancer.init_worker()
	}
	
	lua_add_variable $proxy_upstream_name;
	
	log_format log_stream '[$remote_addr] [$time_local] $protocol $status $bytes_sent $bytes_received $session_time';
	
	access_log /var/log/nginx/access.log log_stream ;
	
	error_log  /var/log/nginx/error.log debug;
	
	upstream upstream_balancer {
		server 0.0.0.1:1234; # placeholder
		
		balancer_by_lua_block {
			tcp_udp_balancer.balance()
		}
	}
	
	server {
		listen 127.0.0.1:10247;
		
		access_log off;
		
		content_by_lua_block {
			tcp_udp_configuration.call()
		}
	}
	
	# TCP services
	
	# UDP services
	
}
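
The rendered configuration above was dumped from the running controller. As a point of reference, a minimal sketch of how it could be re-extracted and syntax-checked from a controller pod (the pod name and namespace are placeholders):

kubectl -n <controller-namespace> exec <controller-pod> -- cat /etc/nginx/nginx.conf
kubectl -n <controller-namespace> exec <controller-pod> -- nginx -t    # reports "syntax is ok" when the rendered config is valid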

Full --v=5 nginx logs around the SSL error (filtered with grep -i 'ssl')

2021/12/27 10:38:05 [debug] 3524#3524: *3971047 http check ssl handshake
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 http check ssl handshake
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 http check ssl handshake
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 https ssl handshake: 0x16
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL server name: "elb-host.us-east-2.elb.amazonaws.com"
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 ssl cert: connection reusable: 0
2021/12/27 10:38:05 [debug] 3524#3524: *3971048 code cache lookup (key='ssl_certificate_by_lua_nhli_eb62659df6af69ef3ea4b2f4d168e185', ref=1)
2021/12/27 10:38:05 [debug] 3524#3524: *3971048 code cache hit (key='ssl_certificate_by_lua_nhli_eb62659df6af69ef3ea4b2f4d168e185', ref=1)
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL ALPN supported by client: h2
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL ALPN supported by client: http/1.1
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL ALPN selected: h2
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_do_handshake: -1
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_get_error: 2
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL handshake handler: 0
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 ssl new session: 684F43EC:32:216
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_do_handshake: 1
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL: TLSv1.2, cipher: "ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD"
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_read: -1
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_get_error: 2
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL buf copy: 27
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL buf copy: 13
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL to write: 40
2021/12/27 10:38:05 [debug] 3524#3524: *3971047 SSL_write: 40
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 24
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 27
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 13
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 56
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: -1
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_get_error: 2
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL to write: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_write: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 95
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 20
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL to write: 133
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_write: 133
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL buf copy: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL to write: 9
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_write: 9
{ "type": "access_logs", "ssl_protocl": "TLSv1.2", "time": "2021-12-27T10:38:06+00:00", "remote_addr": "remote-addr","x-forward-for": "remote-addr", "request_id": "987872d970bdce15a667c172596cb05e", "remote_user":"", "bytes_sent": 142, "request_time": 0.003, "status":"200", "vhost": "secure-demo-echo-service-load.amit.test", "request_proto": "HTTP/2.0", "path": "/","request_query": "", "request_length": 47, "duration": 0.003,"method": "GET", "http_referrer": "", "http_user_agent":"curl/7.77.0", "upstream": "upstream-addr:8080", "upstream_status": "200", "upstream_latency": "0.004", "ingress": "secure-demo-echo-service-amit-test", "namespace": "default" }
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_read: 0
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_get_error: 6
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 peer shutdown SSL cleanly
2021/12/27 10:38:06 [debug] 3524#3524: *3971047 SSL_shutdown: 1
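
The log slice above was filtered from the controller's debug output. A command along these lines reproduces it; the pod name and namespace are placeholders, and --v=5 must already be set on the controller for the debug lines to appear:

kubectl logs -n <controller-namespace> <controller-pod> | grep -i 'ssl'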

curl command and result

curl -H 'Host: secure-demo-echo-service-load.amit.test' https://elb-host.us-east-2.elb.amazonaws.com/ -k -v
*   Trying elb-ip:443...
* Connected to elb-host.us-east-2.elb.amazonaws.com (elb-ip) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*  CAfile: /etc/ssl/cert.pem
*  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: CN=*.mysoluto.com
*  start date: Dec  7 00:00:00 2021 GMT
*  expire date: Dec  7 23:59:59 2022 GMT
*  issuer: C=US; O=DigiCert Inc; CN=RapidSSL TLS DV RSA Mixed SHA256 2020 CA-1
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe82380e800)
> GET / HTTP/2
> Host: secure-demo-echo-service-load.amit.test
> user-agent: curl/7.77.0
> accept: */*
> 
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
< HTTP/2 200 
< date: Mon, 27 Dec 2021 10:38:06 GMT
< content-type: text/plain; charset=utf-8
< content-length: 20
< strict-transport-security: max-age=15724800; includeSubDomains
< 
* Connection #0 to host elb-host.us-east-2.elb.amazonaws.com left intact
UserID: , UserRole: %

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 17 (10 by maintainers)

Most upvoted comments

Also, check what happens if you change the target in the curl command (for the slow cluster) to the ELB hostname instead of the ....mysoluto.com name. Don’t add -k; use -L and set the protocol to http.

Something like curl -H 'Host: secure-demo-echo-service-load.mysoluto.com' http://<elb-hostname>.us-east-2.elb.amazonaws.com -L -v --trace-time
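
For the latency side of the comparison, one way to see where the time goes per cluster is curl's --write-out timing variables. This is only a sketch: the ELB hostnames are placeholders, and the Host header is the one used elsewhere in this issue:

for elb in <good-cluster-elb>.us-east-2.elb.amazonaws.com <slow-cluster-elb>.us-east-2.elb.amazonaws.com; do
  curl -s -o /dev/null -L -H 'Host: secure-demo-echo-service-load.mysoluto.com' \
    -w "$elb dns=%{time_namelookup}s tcp=%{time_connect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n" \
    "http://$elb/"
done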

Let’s focus on the problem to be solved. The info here is too cluttered. Please post one single message that contains:

  • kubectl describe output for app-pod, app-svc, app-ing, controller-pod, controller-svc
  • your exact and complete HTTP request
  • all controller pod logs related to the HTTP requests
  • the same for both the cluster with the good controller and the cluster with the slow controller
  • the Grafana dashboard that shows where the time is being spent in the slow cluster

This way the comparison can be seen by anyone who opens the details of this issue. A sketch of the commands for collecting this information follows below.
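
A minimal sketch of the commands that would gather that information, assuming the resource names quoted in this thread (the app/controller pod names, the controller service name, and the controller namespace are placeholders):

kubectl -n default describe svc demo-echo-service
kubectl -n default describe ing secure-demo-echo-service
kubectl -n default describe pod <app-pod>
kubectl -n <controller-namespace> describe pod <controller-pod>
kubectl -n <controller-namespace> describe svc <controller-svc>
kubectl -n <controller-namespace> logs <controller-pod> --since=10m > controller.log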