ingress-nginx: TLSv1.3 ciphers/ciphersuites cannot be changed
NGINX Ingress controller version: Release v1.1.2, build bab0fbab0c1a7c3641bd379f27857113d574d904, NGINX version nginx/1.19.9
Kubernetes version: v1.21.3
Environment:
- OS: Ubuntu 20.04.3 LTS
- Kernel: 5.4.0-99-generic
- Basic cluster related info:
kubectl version:
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.3", GitCommit:"ca643a4d1f7bfe34773c74f79527be4afd95bf39", GitTreeState:"clean", BuildDate:"2021-07-15T20:59:07Z", GoVersion:"go1.16.6", Compiler:"gc", Platform:"linux/amd64"}
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
loving-carson-65bdbd7d4b-2grqg Ready <none> 55d v1.21.3 192.168.1.158 195.192.159.33 Ubuntu 20.04.3 LTS 5.4.0-99-generic docker://19.3.15
loving-carson-65bdbd7d4b-5rgl4 Ready <none> 30d v1.21.3 192.168.1.185 195.192.158.14 Ubuntu 20.04.3 LTS 5.4.0-99-generic docker://19.3.15
loving-carson-65bdbd7d4b-9d4j8 Ready <none> 55d v1.21.3 192.168.1.131 195.192.156.157 Ubuntu 20.04.3 LTS 5.4.0-99-generic docker://19.3.15
loving-carson-65bdbd7d4b-dkltn Ready <none> 55d v1.21.3 192.168.1.44 195.192.158.213 Ubuntu 20.04.3 LTS 5.4.0-99-generic docker://19.3.15
- How was the ingress-nginx-controller installed: Helm Chart
- If helm was used then please show output of
helm ls -A | grep -i ingress:
$ helm ls -aA | grep ingress
ingress-nginx syseleven-ingress-nginx 3 2022-04-04 09:37:54.845571954 +0000 UTC deployed ingress-nginx-4.0.18 1.1.2
- If helm was used then please show output of
helm -n <ingresscontrollernamespace> get values <helmreleasename>
USER-SUPPLIED VALUES:
controller:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
topologyKey: kubernetes.io/hostname
allowSnippetAnnotations: true
config:
compute-full-forwarded-for: "true"
custom-http-errors: 502,503,504
log-format-upstream: $remote_addr - $remote_user [$time_local] $ingress_name "$request"
$status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length
$request_time [$proxy_upstream_name] [$proxy_alternative_upstream_name] $upstream_addr
$upstream_response_length $upstream_response_time $upstream_status $req_id
use-forwarded-headers: "true"
use-proxy-protocol: "true"
extraArgs:
default-backend-service: syseleven-ingress-nginx/ingress-nginx-extension
ingressClass: nginx
metrics:
enabled: true
prometheusRule:
enabled: true
rules:
- alert: NGINXConfigFailed
annotations:
description: bad ingress config - nginx config test failed
summary: uninstall the latest ingress changes to allow config reloads to
resume
expr: count(nginx_ingress_controller_config_last_reload_successful == 0) >
0
for: 1s
labels:
severity: critical
- alert: NGINXCertificateExpiry
annotations:
description: ssl certificate(s) will expire in less then a week
summary: renew expiring certificates to avoid downtime
expr: (avg(nginx_ingress_controller_ssl_expire_time_seconds) by (host) - time())
< 604800
for: 1s
labels:
severity: critical
serviceMonitor:
enabled: true
publishService:
enabled: true
replicaCount: 2
resources:
limits:
cpu: 1
memory: 256Mi
requests:
cpu: 1
memory: 256Mi
service:
annotations:
loadbalancer.openstack.org/proxy-protocol: "true"
stats:
enabled: true
updateStrategy:
type: RollingUpdate
defaultBackend:
enabled: false
rbac:
create: true
- Current State of the controller:
kubectl describe ingressclasses
Name: nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.2
helm.sh/chart=ingress-nginx-4.0.18
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: syseleven-ingress-nginx
Controller: k8s.io/ingress-nginx
Events: <none>
kubectl -n <ingresscontrollernamespace> get all -A -o wide
kubectl get all
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ingress-nginx-controller-59bd6dd5ff-4r7gm 1/1 Running 0 4d23h 172.25.1.33 loving-carson-65bdbd7d4b-2grqg <none> <none>
pod/ingress-nginx-controller-59bd6dd5ff-tqkxg 1/1 Running 0 20d 172.25.0.61 loving-carson-65bdbd7d4b-dkltn <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 10.240.20.72 195.192.153.120 80:31006/TCP,443:30351/TCP 103d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.240.30.100 <none> 443/TCP 103d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics ClusterIP 10.240.19.212 <none> 10254/TCP 103d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/ingress-nginx-controller 2/2 2 2 103d controller k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/ingress-nginx-controller-546f5958c4 0 0 0 30d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=546f5958c4
replicaset.apps/ingress-nginx-controller-59bd6dd5ff 2 2 2 20d controller k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=59bd6dd5ff
replicaset.apps/ingress-nginx-controller-6f64dddc7c 0 0 0 103d controller k8s.gcr.io/ingress-nginx/controller:v1.1.0@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6f64dddc7c
kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
kubectl describe pod
Name: ingress-nginx-controller-59bd6dd5ff-tqkxg
Namespace: syseleven-ingress-nginx
Priority: 0
Node: loving-carson-65bdbd7d4b-dkltn/192.168.1.44
Start Time: Mon, 04 Apr 2022 11:38:01 +0200
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=59bd6dd5ff
Annotations: cni.projectcalico.org/podIP: 172.25.0.61/32
kubectl.kubernetes.io/restartedAt: 2022-03-25T13:40:04+01:00
Status: Running
IP: 172.25.0.61
IPs:
IP: 172.25.0.61
Controlled By: ReplicaSet/ingress-nginx-controller-59bd6dd5ff
Containers:
controller:
Container ID: docker://9ecceaf892ffa0b48a9e088bd0ee5fd4eaf5b02dddc9fbad19a80078c9942438
Image: k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Image ID: docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Ports: 80/TCP, 443/TCP, 10254/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--default-backend-service=syseleven-ingress-nginx/ingress-nginx-extension
State: Running
Started: Mon, 04 Apr 2022 11:38:07 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 256Mi
Requests:
cpu: 1
memory: 256Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-59bd6dd5ff-tqkxg (v1:metadata.name)
POD_NAMESPACE: syseleven-ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw5m2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-zw5m2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: ingress-nginx-controller-59bd6dd5ff-4r7gm
Namespace: syseleven-ingress-nginx
Priority: 0
Node: loving-carson-65bdbd7d4b-2grqg/192.168.1.158
Start Time: Wed, 20 Apr 2022 09:41:14 +0200
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=59bd6dd5ff
Annotations: cni.projectcalico.org/podIP: 172.25.1.33/32
kubectl.kubernetes.io/restartedAt: 2022-03-25T13:40:04+01:00
Status: Running
IP: 172.25.1.33
IPs:
IP: 172.25.1.33
Controlled By: ReplicaSet/ingress-nginx-controller-59bd6dd5ff
Containers:
controller:
Container ID: docker://53154e06133fd91d6093147e684df4748beddf50d8fe6e0802ba0d9d1792f006
Image: k8s.gcr.io/ingress-nginx/controller:v1.1.2@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Image ID: docker-pullable://k8s.gcr.io/ingress-nginx/controller@sha256:28b11ce69e57843de44e3db6413e98d09de0f6688e33d4bd384002a44f78405c
Ports: 80/TCP, 443/TCP, 10254/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
--default-backend-service=syseleven-ingress-nginx/ingress-nginx-extension
State: Running
Started: Wed, 20 Apr 2022 09:41:20 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 256Mi
Requests:
cpu: 1
memory: 256Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-59bd6dd5ff-4r7gm (v1:metadata.name)
POD_NAMESPACE: syseleven-ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ts9sh (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-ts9sh:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
kubectl describe svc
Name: ingress-nginx-controller
Namespace: syseleven-ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.1.2
helm.sh/chart=ingress-nginx-4.0.18
Annotations: loadbalancer.openstack.org/proxy-protocol: true
meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: syseleven-ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.240.20.72
IPs: 10.240.20.72
LoadBalancer Ingress: 195.192.153.120
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31006/TCP
Endpoints: 172.25.0.61:80,172.25.1.33:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30351/TCP
Endpoints: 172.25.0.61:443,172.25.1.33:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
- Current state of ingress object, if applicable:
kubectl -n <appnamespace> get all,ing -o wide
kubectl get all,ing -n APP-NS
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 0 30d 172.25.0.50 loving-carson-65bdbd7d4b-dkltn <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/nginx ClusterIP 10.240.31.28 <none> 80/TCP 30d run=nginx
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/test-ingress <none> www2.ldawert.metakube.io 195.192.153.120 80, 443 30d
kubectl -n <appnamespace> describe ing <ingressname>
kubectl describe ing test-ingress
Name: test-ingress
Namespace: ldawert
Address: 195.192.153.120
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
www2.ldawert.metakube.io-tls terminates www2.ldawert.metakube.io
Rules:
Host Path Backends
---- ---- --------
www2.ldawert.metakube.io
/ nginx:80 (172.25.0.50:80)
Annotations: cert-manager.io/cluster-issuer: letsencrypt-production
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/server-snippet:
ssl_protocols TLSv1.2 TLSv1.3;
ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
nginx.ingress.kubernetes.io/ssl-ciphers:
'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
Events: <none>
What happened:
Trying to configure the TLSv1.3 cipher suites with the following annotations:
metadata:
annotations:
nginx.ingress.kubernetes.io/server-snippet: |
ssl_protocols TLSv1.2 TLSv1.3;
ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
nginx.ingress.kubernetes.io/ssl-ciphers: |
'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384' # TLSv1.2 works
The configuration from the server-snippet is rendered correctly into the server block of the ingress controller pod's nginx configuration:
< ... >
server {
server_name www2.ldawert.metakube.io ;
listen 80 proxy_protocol ;
listen 443 proxy_protocol ssl http2 ;
set $proxy_upstream_name "-";
ssl_certificate_by_lua_block {
certificate.call()
}
ssl_ciphers 'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
;
# Custom code snippet configured for host www2.ldawert.metakube.io
ssl_protocols TLSv1.2 TLSv1.3;
ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
< ... >
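This excerpt can be reproduced by dumping the rendered configuration from one of the controller pods (pod name taken from the output above):
$ kubectl -n syseleven-ingress-nginx exec ingress-nginx-controller-59bd6dd5ff-tqkxg -- cat /etc/nginx/nginx.conf | grep -A 3 'Custom code snippet'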
However, when testing the TLS ciphers, for example with nmap, the default TLSv1.3 ciphers are still offered:
$ nmap -sV --script ssl-enum-ciphers -p 443 www2.ldawert.metakube.io
Starting Nmap 7.92 ( https://nmap.org ) at 2022-04-25 09:55 CEST
Nmap scan report for www2.ldawert.metakube.io (195.192.153.120)
Host is up (0.020s latency).
PORT STATE SERVICE VERSION
443/tcp open ssl/http nginx (reverse proxy)
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: A
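The result can be cross-checked with an OpenSSL 1.1.1+ client by offering only a TLSv1.3 suite that the annotation excludes; if the handshake still succeeds, the ssl_conf_command restriction is not being applied (example against the host above):
$ openssl s_client -connect www2.ldawert.metakube.io:443 -servername www2.ldawert.metakube.io -tls1_3 -ciphersuites TLS_CHACHA20_POLY1305_SHA256 </dev/null 2>/dev/null | grep Cipher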
What you expected to happen:
Setting ssl_conf_command Ciphersuites via the nginx.ingress.kubernetes.io/server-snippet annotation should restrict the TLSv1.3 cipher suites for the server block it is configured in.
How to reproduce it:
- install k8s cluster (e.g. minikube)
- install ingress nginx via helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
- create app
$ cat app.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
containers:
- image: nginx
name: nginx
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
run: nginx
name: nginx
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
run: nginx
status:
loadBalancer: {}
$ kubectl apply -f app.yaml
- create ingress
$ cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/ssl-ciphers: |
'ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384'
nginx.ingress.kubernetes.io/server-snippet: |
ssl_protocols TLSv1.2 TLSv1.3;
ssl_conf_command Ciphersuites "TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384";
kubernetes.io/ingress.class: nginx
name: test-ingress
spec:
rules:
- host: testdomain.local
http:
paths:
- backend:
service:
name: nginx
port:
number: 80
path: /
pathType: ImplementationSpecific
$ kubectl apply -f ingress.yaml
- create debugging pod (with nmap binary):
$ cat netshoot.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: tmp-shell
name: tmp-shell
spec:
containers:
- image: nicolaka/netshoot
name: tmp-shell
resources: {}
command:
- sleep
- "100000"
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
$ kubectl apply -f netshoot.yaml
- Get IP of ingress pod
$ kubectl get pod <ingress-controller> -o jsonpath='{.status.podIP}'
- Check ciphers in debugging pod
$ kubectl exec -ti tmp-shell -- bash
bash-5.1$ echo "<ingress-controller-ip> testdomain.local" >> /etc/hosts
bash-5.1$ nmap -sV --script ssl-enum-ciphers -p 443 testdomain.local
Starting Nmap 7.92 ( https://nmap.org ) at 2022-04-25 08:18 UTC
Nmap scan report for testdomain.local (172.17.0.4)
Host is up (0.000044s latency).
PORT STATE SERVICE VERSION
443/tcp open ssl/http nginx (reverse proxy)
| ssl-enum-ciphers:
| TLSv1.2:
| ciphers:
| TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| compressors:
| NULL
| cipher preference: server
| TLSv1.3:
| ciphers:
| TLS_AKE_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A
| TLS_AKE_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A
| TLS_AKE_WITH_AES_128_GCM_SHA256 (ecdh_x25519) - A
| cipher preference: server
|_ least strength: A
MAC Address: 02:42:AC:11:00:04 (Unknown)
About this issue
- State: open
- Created 2 years ago
- Reactions: 5
- Comments: 15 (6 by maintainers)
It's very frustrating that this is not in the docs, because they make it look like TLSv1.2 and TLSv1.3 ciphers can be configured together: https://kubernetes.github.io/ingress-nginx/user-guide/tls/#default-tls-version-and-ciphers and https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#ssl-ciphers. The Helm chart docs (which I can't seem to locate at the moment) also made it look configurable for both 1.2 and 1.3. Unfortunately, the OpenSSL project drastically changed how cipher suites are configured at runtime between TLS 1.2 and 1.3, and the nginx developers have not been shy with their disapproval.
I would love to see this at least mentioned in the docs somewhere: the ssl-ciphers config key only applies to TLSv1.2 (and earlier). For TLSv1.3 you have to use the generic http-snippet key, which lets you drop raw nginx config into the http block (and hope it's formatted correctly). This is what we have tested to work (in the ingress-nginx ConfigMap):
ssl-ciphers: ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256
http-snippet: ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256;
Note the ';' at the end of the http-snippet line. It's been well over a year since we added this workaround, and the latest ingress-nginx still does not have a fix.
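In full form, a minimal sketch of that ConfigMap-level workaround (the namespace below is an assumption; the ConfigMap name matches what the Helm chart creates by default):
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller      # default name created by the Helm chart
  namespace: ingress-nginx            # assumed namespace
data:
  # applies to TLSv1.2 and earlier only
  ssl-ciphers: "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"
  # applies to TLSv1.3; note the trailing ';'
  http-snippet: |
    ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256;
When the controller is managed via the Helm chart (as in this report), the same two keys can be set under controller.config in the values so they end up in the generated ConfigMap.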
@razholio There are not enough resources, so sometimes things take too long. If you submit a PR fixing the docs, I am sure it will get appropriate attention.
Yep, that was it. So gRPC TLS backends that only support TLS 1.3 fail because the default grpc_ssl_protocols does not have TLS 1.3 enabled.
The following worked for gRPC with TLS termination at ingress, with TLS enabled on the backend:
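As a hedged illustration only (the Ingress name, host, TLS secret, and backend service below are placeholders, not the commenter's actual values), such a setup typically combines the GRPCS backend-protocol annotation with a server-snippet that adds TLSv1.3 to grpc_ssl_protocols:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress                     # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
    # proxy to the backend over TLS-encrypted gRPC
    nginx.ingress.kubernetes.io/backend-protocol: "GRPCS"
    # grpc_ssl_protocols does not include TLSv1.3 by default, so add it explicitly
    nginx.ingress.kubernetes.io/server-snippet: |
      grpc_ssl_protocols TLSv1.2 TLSv1.3;
spec:
  tls:
  - hosts:
    - grpc.example.com                   # placeholder host
    secretName: grpc-example-tls         # placeholder TLS secret
  rules:
  - host: grpc.example.com
    http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: grpc-backend           # placeholder service
            port:
              number: 443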