ingress-nginx: ingress-nginx returns 503 during nginx process reload
What happened:
Ingress-nginx returns 503 while an NGINX reload is triggered by a configuration change. We observe this only when the ingress points to a Service of type ExternalName; we don’t see the issue with other service types.
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:45 [debug] 100#100: *8570 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 101#101: *8585 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 100#100: *8588 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 101#101: *8598 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 100#100: *8608 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 101#101: *8619 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 100#100: *8621 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:46 [debug] 101#101: *8623 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:47 [debug] 101#101: *8625 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:49:47 [debug] 100#100: *8649 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:05 [debug] 171#171: *9045 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:05 [debug] 170#170: *9047 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:05 [debug] 171#171: *9049 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:05 [debug] 170#170: *9054 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:05 [debug] 170#170: *9059 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:06 [debug] 171#171: *9080 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:06 [debug] 170#170: *9083 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:06 [debug] 171#171: *9093 lua thread aborting request with status 503
ingress-nginx-controller-6ffd7586d8-xl7lm controller 2023/02/28 11:50:06 [debug] 171#171: *9106 lua thread aborting request with status 503
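To correlate these aborts with reload events, one can filter a controller pod's logs for both messages. A minimal sketch, assuming the controller's default reload log line ("Backend successfully reloaded") in this version:
# Sketch: correlate 503 aborts with nginx reload events in one pod's logs.
# The reload message text is an assumption; adjust it to your controller version.
kubectl -n ingress-nginx logs ingress-nginx-controller-6ffd7586d8-xl7lm --timestamps | grep -E 'Backend successfully reloaded|aborting request with status 503'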
What you expected to happen: Ingress-nginx should return 200.
NGINX Ingress controller version:
Release: v1.6.4
Build: 69e8833858fb6bda12a44990f1d5eaa7b13f4b75
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6
Kubernetes version:
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.16-eks-48e63af", GitCommit:"e6332a8a3feb9e0fe3db851878f88cb73d49dd7a", GitTreeState:"clean", BuildDate:"2023-01-24T19:18:15Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration:
AWS EKS
instance-type=t3a.large
arch=amd64
- OS (e.g. from /etc/os-release):
Amazon Linux 2
AMI v20230203
- Kernel (e.g. uname -a):
Linux ip-10-0-107-202.eu-west-1.compute.internal 5.10.167-147.601.amzn2.x86_64 #1 SMP Tue Feb 14 21:50:23 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: Terraform with module terraform-aws-eks v18.31.2
- Basic cluster related info: Nodes:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-10-0-104-148.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.104.148 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-107-202.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.107.202 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-110-109.eu-west-1.compute.internal Ready <none> 7h8m v1.23.15-eks-49d8fe8 10.0.110.109 <none> Amazon Linux 2 5.4.228-131.415.amzn2.x86_64 containerd://1.6.6
ip-10-0-113-236.eu-west-1.compute.internal NotReady <none> 7h7m v1.23.15-eks-49d8fe8 10.0.113.236 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-114-15.eu-west-1.compute.internal Ready <none> 7h8m v1.23.15-eks-49d8fe8 10.0.114.15 <none> Amazon Linux 2 5.4.228-131.415.amzn2.x86_64 containerd://1.6.6
ip-10-0-124-42.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.124.42 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-127-174.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.127.174 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-86-183.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.86.183 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-94-36.eu-west-1.compute.internal Ready <none> 3h41m v1.23.15-eks-49d8fe8 10.0.94.36 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-95-227.eu-west-1.compute.internal Ready <none> 7h7m v1.23.15-eks-49d8fe8 10.0.95.227 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
ip-10-0-95-78.eu-west-1.compute.internal Ready <none> 7h8m v1.23.15-eks-49d8fe8 10.0.95.78 <none> Amazon Linux 2 5.4.228-131.415.amzn2.x86_64 containerd://1.6.6
ip-10-0-98-142.eu-west-1.compute.internal Ready <none> 163m v1.23.15-eks-49d8fe8 10.0.98.142 <none> Amazon Linux 2 5.10.167-147.601.amzn2.x86_64 containerd://1.6.6
- How was the ingress-nginx-controller installed:
$ helm ls -A | grep -i ingress
ingress-nginx ingress-nginx 1 2023-02-28 08:34:44.9579958 +0100 CET deployed ingress-nginx-4.5.2 1.6.4
ingress-nginx-validating-webhook ingress-nginx 1 2023-02-28 08:27:31.544651662 +0100 CET deployed ingress-nginx-4.5.2 1.6.4
prometheus-blackbox-exporter ingress-nginx 1 2023-02-28 08:33:03.718464989 +0100 CET deployed prometheus-blackbox-exporter-4.15.0 0.19.0
ingress-nginx-controller
$ helm -n ingress-nginx get values ingress-nginx
USER-SUPPLIED VALUES:
controller:
admissionWebhooks:
enabled: false
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/instance
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/component
operator: In
values:
- controller
topologyKey: kubernetes.io/hostname
chroot: false
config:
annotation-value-word-blocklist: load_module,lua,root,serviceaccount,alias
force-ssl-redirect: "true"
hsts-max-age: "31536000"
http-snippet: |
# Fix status
# If status is 000, set to 499
map $status $status_real {
000 499;
default $status;
}
# Fix upstream_status
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_status $upstream_status_real {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_status;
}
# If status is unset or 499, set to null
map $status_real $upstream_status_json {
"" null;
499 null;
default $upstream_status_real;
}
# Fix upstream_addr
# If there are multiple values, select the last one
map $upstream_addr $upstream_addr_json {
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_addr;
}
# Fix upstream_response_length
# If there are multiple values, select the last one
# If unset, set to 0
map $upstream_response_length $upstream_response_length_json {
"" 0;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_response_length;
}
# Fix upstream_response_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_response_time $upstream_response_time_json {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_response_time;
}
# Fix upstream_connect_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_connect_time $upstream_connect_time_1 {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_connect_time;
}
# If upstream_response_length is (0, unset), set to null
map $upstream_response_length_json $upstream_connect_time_2 {
0 null;
"" null;
default $upstream_connect_time_1;
}
# If status if 499, set to null
map $status_real $upstream_connect_time_json {
499 null;
default $upstream_connect_time_2;
}
# Fix upstream_header_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_header_time $upstream_header_time_1 {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_header_time;
}
# If upstream_response_length is (0, unset), set to null
map $upstream_response_length_json $upstream_header_time_2 {
0 null;
"" null;
default $upstream_header_time_1;
}
# If status if 499, set to null
map $status_real $upstream_header_time_json {
499 null;
default $upstream_header_time_2;
}
keep-alive-requests: "10000"
log-format-escape-json: "true"
log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$remote_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "remote_user": "$remote_user",
"bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status_real,
"vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query":
"$args", "request_length": $request_length, "duration": $request_time, "method":
"$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent",
"proxy_upstream_name": "$proxy_upstream_name", "upstream_addr": "$upstream_addr_json",
"upstream_response_length": $upstream_response_length_json, "upstream_response_time":
$upstream_response_time_json, "upstream_connect_time": $upstream_connect_time_json,
"upstream_header_time": $upstream_header_time_json, "upstream_status": $upstream_status_json,
"namespace": "$namespace", "ingress_name": "$ingress_name", "service_name":
"$service_name", "service_port": "$service_port" }'
max-worker-connections: "65536"
proxy-real-ip-cidr: 10.0.64.0/18,2a05:c018:41a:4000::/56,200.92.0.0/16
server-tokens: "false"
ssl-protocols: TLSv1.2 TLSv1.3
upstream-keepalive-connections: "1000"
use-forwarded-headers: "true"
ingressClass: nginx
ingressClassResource:
enabled: false
keda:
behavior:
scaleDown:
policies:
- periodSeconds: 600
type: Pods
value: 1
stabilizationWindowSeconds: 1800
enabled: true
maxReplicas: 3
minReplicas: 3
triggers:
- metadata:
metricName: ingress_node_cpu_util
query: |
some_query
serverAddress: http://kube-prometheus-stack-thanos-query-frontend.kube-system:9090/
threshold: "60"
type: prometheus
kind: Deployment
livenessProbe: null
metrics:
enabled: true
prometheusRule:
additionalLabels:
custom-system: "true"
enabled: true
rules:
- alert: NGINXConfigFailed
annotations:
description: bad ingress config - nginx config test failed
expr: count(nginx_ingress_controller_config_last_reload_successful == 0) >
0
for: 1s
labels:
severity: critical
- expr: sum(rate(node_cpu_seconds_total{mode!="idle",mode!="iowait",mode!="steal"}[1m]))
WITHOUT (cpu, mode) / ON(instance) GROUP_LEFT() count(sum(node_cpu_seconds_total)
BY (instance, cpu)) BY (instance)
record: instance:node_cpu:ratio_1m
serviceMonitor:
additionalLabels:
custom-system: "true"
enabled: true
minAvailable: 2
nodeSelector:
company.org/role: ingress
podAnnotations:
co.elastic.logs/enabled: "true"
publishService:
enabled: false
replicaCount: 3
service:
annotations:
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: ci-eks-test-nlb-access-logs
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: ingress-nginx-controller
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: "2"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: "2"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:ACC_ID:certificate/1c2bb43e-be5b-42e1-2dbe-b3a765fc41a9
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: company.org/role=ingress
service.beta.kubernetes.io/aws-load-balancer-type: nlb
enableHttp: false
enabled: true
externalTrafficPolicy: Local
loadBalancerSourceRanges:
- 127.0.0.1/32
sysctls:
net.core.somaxconn: "65535"
net.ipv4.ip_local_port_range: 1024 65535
net.ipv4.tcp_fin_timeout: "15"
net.ipv4.tcp_max_syn_backlog: "3240000"
net.ipv4.tcp_max_tw_buckets: "5880000"
net.ipv4.tcp_no_metrics_save: "1"
net.ipv4.tcp_syn_retries: "2"
net.ipv4.tcp_synack_retries: "2"
net.ipv4.tcp_tw_reuse: "1"
net.netfilter.nf_conntrack_tcp_timeout_established: "120"
terminationGracePeriodSeconds: 120
tolerations:
- key: node.kubernetes.io/ingress
operator: Exists
updateStrategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 1
type: RollingUpdate
watchIngressWithoutClass: true
defaultBackend:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/instance
operator: In
values:
- ingress-nginx
- key: app.kubernetes.io/component
operator: In
values:
- default-backend
topologyKey: topology.kubernetes.io/zone
weight: 100
enabled: true
image:
allowPrivilegeEscalation: false
pullPolicy: IfNotPresent
readOnlyRootFilesystem: false
repository: artifacts.company.org/cloudinfra/docker-images/default-backend
runAsNonRoot: true
runAsUser: 1001
tag: 1.0.7
nodeSelector:
company.org/role: ingress
replicaCount: 3
serviceAccount:
automountServiceAccountToken: false
tolerations:
- key: node.kubernetes.io/ingress
operator: Exists
podSecurityPolicy:
enabled: true
ingress-nginx-validating-webhook
$ helm -n ingress-nginx get values ingress-nginx-validating-webhook
USER-SUPPLIED VALUES:
controller:
admissionWebhooks:
enabled: true
chroot: false
config:
annotation-value-word-blocklist: load_module,lua,root,serviceaccount,alias
force-ssl-redirect: "true"
hsts-max-age: "31536000"
http-snippet: |
# Fix status
# If status is 000, set to 499
map $status $status_real {
000 499;
default $status;
}
# Fix upstream_status
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_status $upstream_status_real {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_status;
}
# If status is unset or 499, set to null
map $status_real $upstream_status_json {
"" null;
499 null;
default $upstream_status_real;
}
# Fix upstream_addr
# If there are multiple values, select the last one
map $upstream_addr $upstream_addr_json {
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_addr;
}
# Fix upstream_response_length
# If there are multiple values, select the last one
# If unset, set to 0
map $upstream_response_length $upstream_response_length_json {
"" 0;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_response_length;
}
# Fix upstream_response_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_response_time $upstream_response_time_json {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_response_time;
}
# Fix upstream_connect_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_connect_time $upstream_connect_time_1 {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_connect_time;
}
# If upstream_response_length is (0, unset), set to null
map $upstream_response_length_json $upstream_connect_time_2 {
0 null;
"" null;
default $upstream_connect_time_1;
}
# If status if 499, set to null
map $status_real $upstream_connect_time_json {
499 null;
default $upstream_connect_time_2;
}
# Fix upstream_header_time
# If there are multiple values, select the last one
# If unset, set to null
map $upstream_header_time $upstream_header_time_1 {
"" null;
"~,\s*([^,]+)$" "$1";
"~:\s+(\S+)$" "$1";
default $upstream_header_time;
}
# If upstream_response_length is (0, unset), set to null
map $upstream_response_length_json $upstream_header_time_2 {
0 null;
"" null;
default $upstream_header_time_1;
}
# If status if 499, set to null
map $status_real $upstream_header_time_json {
499 null;
default $upstream_header_time_2;
}
keep-alive-requests: "10000"
log-format-escape-json: "true"
log-format-upstream: '{ "time": "$time_iso8601", "remote_addr": "$remote_addr",
"x-forward-for": "$proxy_add_x_forwarded_for", "remote_user": "$remote_user",
"bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status_real,
"vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query":
"$args", "request_length": $request_length, "duration": $request_time, "method":
"$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent",
"proxy_upstream_name": "$proxy_upstream_name", "upstream_addr": "$upstream_addr_json",
"upstream_response_length": $upstream_response_length_json, "upstream_response_time":
$upstream_response_time_json, "upstream_connect_time": $upstream_connect_time_json,
"upstream_header_time": $upstream_header_time_json, "upstream_status": $upstream_status_json,
"namespace": "$namespace", "ingress_name": "$ingress_name", "service_name":
"$service_name", "service_port": "$service_port" }'
max-worker-connections: "65536"
proxy-real-ip-cidr: 10.0.64.0/18,2a05:c018:41a:4000::/56,200.92.0.0/16
server-tokens: "false"
ssl-protocols: TLSv1.2 TLSv1.3
upstream-keepalive-connections: "1000"
use-forwarded-headers: "true"
ingressClass: nginx
ingressClassResource:
enabled: false
kind: Deployment
metrics:
enabled: true
serviceMonitor:
additionalLabels:
custom-system: "true"
enabled: true
minAvailable: 2
publishService:
enabled: false
replicaCount: 3
resources:
requests:
cpu: 100m
memory: 256Mi
service:
enabled: false
watchIngressWithoutClass: true
podSecurityPolicy:
enabled: true
- Current State of the controller:
$ kubectl describe ingressclasses
Name: nginx
Labels: <none>
Annotations: ingressclass.kubernetes.io/is-default-class: true
Controller: k8s.io/ingress-nginx
Events: <none>
$ kubectl -n ingress-nginx get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/ingress-nginx-controller-6ffd7586d8-hn5rs 1/1 Running 0 3h33m 200.91.104.196 ip-10-0-107-202.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-6ffd7586d8-mnlxd 1/1 Running 0 3h33m 200.90.36.173 ip-10-0-95-227.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-controller-6ffd7586d8-xl7lm 1/1 Running 0 3h33m 200.92.221.142 ip-10-0-127-174.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-defaultbackend-584b6ff6d7-b6snq 1/1 Running 0 7h38m 200.92.124.36 ip-10-0-127-174.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-defaultbackend-584b6ff6d7-bcjjn 1/1 Running 0 7h38m 200.91.5.216 ip-10-0-107-202.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-defaultbackend-584b6ff6d7-g2s2d 1/1 Running 0 7h38m 200.90.246.135 ip-10-0-95-227.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-validating-webhook-controller-646dfb75c5-cs589 1/1 Running 0 3h35m 200.91.85.108 ip-10-0-98-142.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-validating-webhook-controller-646dfb75c5-rwncg 1/1 Running 0 3h35m 200.92.67.252 ip-10-0-124-42.eu-west-1.compute.internal <none> <none>
pod/ingress-nginx-validating-webhook-controller-646dfb75c5-wh5j2 1/1 Running 0 3h41m 200.91.242.246 ip-10-0-104-148.eu-west-1.compute.internal <none> <none>
pod/prometheus-blackbox-exporter-64bf486678-w2w6n 1/1 Running 0 4h18m 200.90.25.168 ip-10-0-94-36.eu-west-1.compute.internal <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ingress-nginx-controller LoadBalancer 172.30.200.138 aa04b648d245a43999dd70d0843d610e-c9ab22689b4e7ddd.elb.eu-west-1.amazonaws.com 443:31277/TCP 7h38m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-metrics ClusterIP 172.30.156.20 <none> 10254/TCP 7h38m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-private LoadBalancer 172.30.96.42 a1900ec71896c409aaf2d3fb0017c7e1-f06dd4ff557b9b45.elb.eu-west-1.amazonaws.com 80:30726/TCP,443:31780/TCP 7h38m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-private-custom LoadBalancer 172.30.156.41 ab1601273fb6a4da78b0c4fad3a90321-2ba77692ef6f04a3.elb.eu-west-1.amazonaws.com 80:31355/TCP,443:31161/TCP 7h38m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-public-custom LoadBalancer 172.30.204.105 al868f96723bb44a98d19c36a5ce249d-ba73a9c01ac8570b.elb.eu-west-1.amazonaws.com 443:32325/TCP 7h38m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-defaultbackend ClusterIP 172.30.205.51 <none> 80/TCP 7h38m app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-validating-webhook-controller-admission ClusterIP 172.30.228.1 <none> 443/TCP 7h45m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-validating-webhook,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-validating-webhook-controller-metrics ClusterIP 172.30.101.161 <none> 10254/TCP 7h45m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-validating-webhook,app.kubernetes.io/name=ingress-nginx
service/prometheus-blackbox-exporter ClusterIP 172.30.213.198 <none> 9115/TCP 7h39m app.kubernetes.io/instance=prometheus-blackbox-exporter,app.kubernetes.io/name=prometheus-blackbox-exporter
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/ingress-nginx-controller 3/3 3 3 7h38m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-defaultbackend 3/3 3 3 7h38m ingress-nginx-default-backend artifacts.company.org/cloudinfra/docker-images/default-backend:1.0.7 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
deployment.apps/ingress-nginx-validating-webhook-controller 3/3 3 3 7h45m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-validating-webhook,app.kubernetes.io/name=ingress-nginx
deployment.apps/prometheus-blackbox-exporter 1/1 1 1 7h39m blackbox-exporter prom/blackbox-exporter:v0.19.0 app.kubernetes.io/instance=prometheus-blackbox-exporter,app.kubernetes.io/name=prometheus-blackbox-exporter
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/ingress-nginx-controller-67db47cff8 0 0 0 7h38m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=67db47cff8
replicaset.apps/ingress-nginx-controller-6ffd7586d8 3 3 3 3h33m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=6ffd7586d8
replicaset.apps/ingress-nginx-defaultbackend-584b6ff6d7 3 3 3 7h38m ingress-nginx-default-backend artifacts.company.org/cloudinfra/docker-images/default-backend:1.0.7 app.kubernetes.io/component=default-backend,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=584b6ff6d7
replicaset.apps/ingress-nginx-validating-webhook-controller-59697c9545 0 0 0 4h56m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-validating-webhook,app.kubernetes.io/name=ingress-nginx,pod-template-hash=59697c9545
replicaset.apps/ingress-nginx-validating-webhook-controller-646dfb75c5 3 3 3 7h45m controller registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-validating-webhook,app.kubernetes.io/name=ingress-nginx,pod-template-hash=646dfb75c5
replicaset.apps/prometheus-blackbox-exporter-64bf486678 1 1 1 7h39m blackbox-exporter prom/blackbox-exporter:v0.19.0 app.kubernetes.io/instance=prometheus-blackbox-exporter,app.kubernetes.io/name=prometheus-blackbox-exporter,pod-template-hash=64bf486678
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
horizontalpodautoscaler.autoscaling/keda-hpa-ingress-nginx-controller Deployment/ingress-nginx-controller 40089m/60 (avg) 3 3 3 7h38m
$ kubectl -n ingress-nginx describe po ingress-nginx-controller-6ffd7586d8-xl7lm
Name: ingress-nginx-controller-6ffd7586d8-xl7lm
Namespace: ingress-nginx
Priority: 0
Node: ip-10-0-127-174.eu-west-1.compute.internal/10.0.127.174
Start Time: Tue, 28 Feb 2023 12:40:15 +0100
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=6ffd7586d8
Annotations: co.elastic.logs/enabled: true
kubernetes.io/psp: ingress-nginx
Status: Running
IP: 200.92.221.142
IPs:
IP: 200.92.221.142
Controlled By: ReplicaSet/ingress-nginx-controller-6ffd7586d8
Containers:
controller:
Container ID: containerd://1f39b7582bcc4a16185a28c572381f085016ce1415e1805e9a15d8773538060b
Image: registry.k8s.io/ingress-nginx/controller:v1.6.4@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:15be4666c53052484dd2992efacf2f50ea77a78ae8aa21ccd91af6baaa7ea22f
Ports: 80/TCP, 443/TCP, 10254/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--default-backend-service=$(POD_NAMESPACE)/ingress-nginx-defaultbackend
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--watch-ingress-without-class=true
--v=5
State: Running
Started: Tue, 28 Feb 2023 12:40:16 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-6ffd7586d8-xl7lm (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zhgh2 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-zhgh2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: company.org/role=ingress
kubernetes.io/os=linux
Tolerations: node.kubernetes.io/ingress op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
$ kubectl -n ingress-nginx describe svc ingress-nginx-controller-private
Name: ingress-nginx-controller-private
Namespace: ingress-nginx
Labels: <none>
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["a1900ec71896c409aaf2d3fb0017c7e1-f06dd4ff557b9b45.elb.eu-west-1.amazonaws.com"],"port":80,"protocol":"TCP","serviceName":"...
service.beta.kubernetes.io/aws-load-balancer-access-log-enabled: true
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-name: ci-eks-test-nlb-access-logs
service.beta.kubernetes.io/aws-load-balancer-access-log-s3-bucket-prefix: ingress-nginx-controller-private
service.beta.kubernetes.io/aws-load-balancer-additional-resource-tags: cluster=ci-eks-test
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: ssl
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 3600
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: true
service.beta.kubernetes.io/aws-load-balancer-healthcheck-healthy-threshold: 2
service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: 10
service.beta.kubernetes.io/aws-load-balancer-healthcheck-unhealthy-threshold: 2
service.beta.kubernetes.io/aws-load-balancer-internal: true
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-west-1:ACC_ID:certificate/ad2320ef-d008-443d-bb4e-abc408efdd70
service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-target-node-labels: company.org/role=ingress
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 172.30.96.42
IPs: 172.30.96.42
LoadBalancer Ingress: a1900ec71896c409aaf2d3fb0017c7e1-f06dd4ff557b9b45.elb.eu-west-1.amazonaws.com
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30726/TCP
Endpoints: 200.91.36.173:80,200.91.104.196:80,200.92.221.142:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31780/TCP
Endpoints: 200.91.36.173:443,200.91.104.196:443,200.92.221.142:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 31564
Events: <none>
- Current state of ingress object, if applicable:
$ kubectl -n tester get all,ing -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/ext-service ExternalName <none> tester.custom-pre-acc1.np.aws.company.org <none> 4h38m <none>
service/ext-service1 ExternalName <none> tester1.custom-pre-acc1.np.aws.company.org <none> 3h51m <none>
service/ext-service10 ExternalName <none> tester10.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service11 ExternalName <none> tester11.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service12 ExternalName <none> tester12.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service125 ExternalName <none> tester125.custom-pre-acc1.np.aws.company.org <none> 5h55m <none>
service/ext-service13 ExternalName <none> tester13.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service14 ExternalName <none> tester14.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service15 ExternalName <none> tester15.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service16 ExternalName <none> tester16.custom-pre-acc1.np.aws.company.org <none> 3h49m <none>
service/ext-service17 ExternalName <none> tester17.custom-pre-acc1.np.aws.company.org <none> 3h49m <none>
service/ext-service18 ExternalName <none> tester18.custom-pre-acc1.np.aws.company.org <none> 3h39m <none>
service/ext-service19 ExternalName <none> tester19.custom-pre-acc1.np.aws.company.org <none> 3h39m <none>
service/ext-service2 ExternalName <none> tester2.custom-pre-acc1.np.aws.company.org <none> 3h51m <none>
service/ext-service20 ExternalName <none> tester20.custom-pre-acc1.np.aws.company.org <none> 3h38m <none>
service/ext-service3 ExternalName <none> tester3.custom-pre-acc1.np.aws.company.org <none> 3h51m <none>
service/ext-service4 ExternalName <none> tester4.custom-pre-acc1.np.aws.company.org <none> 3h51m <none>
service/ext-service5 ExternalName <none> tester5.custom-pre-acc1.np.aws.company.org <none> 3h51m <none>
service/ext-service6 ExternalName <none> tester6.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service7 ExternalName <none> tester7.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service8 ExternalName <none> tester8.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
service/ext-service9 ExternalName <none> tester9.custom-pre-acc1.np.aws.company.org <none> 3h50m <none>
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/ext-ingress nginx income.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h55m
ingress.networking.k8s.io/ext-ingress1 nginx income1.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h51m
ingress.networking.k8s.io/ext-ingress10 nginx income10.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress11 nginx income11.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress12 nginx income12.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress13 nginx income13.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress14 nginx income14.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress15 nginx income15.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress16 nginx income16.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h49m
ingress.networking.k8s.io/ext-ingress17 nginx income17.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h49m
ingress.networking.k8s.io/ext-ingress18 nginx income18.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h39m
ingress.networking.k8s.io/ext-ingress19 nginx income19.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h39m
ingress.networking.k8s.io/ext-ingress2 nginx income2.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h51m
ingress.networking.k8s.io/ext-ingress20 nginx income20.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h38m
ingress.networking.k8s.io/ext-ingress3 nginx income3.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h51m
ingress.networking.k8s.io/ext-ingress4 nginx income4.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h51m
ingress.networking.k8s.io/ext-ingress5 nginx income5.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h51m
ingress.networking.k8s.io/ext-ingress6 nginx income6.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress7 nginx income7.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress8 nginx income8.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
ingress.networking.k8s.io/ext-ingress9 nginx income9.m11hrk.infrastructure-testing.np.aws.company.org 10.0.107.202,10.0.127.174,10.0.95.227 80, 443 3h50m
$ kubectl -n tester describe ing ext-ingress
Name: ext-ingress
Labels: <none>
Namespace: tester
Address: 10.28.107.202,10.28.127.174,10.28.95.227
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
SNI routes income.m11hrk.infrastructure-testing.np.aws.company.org
Rules:
Host Path Backends
---- ---- --------
income.m11hrk.infrastructure-testing.np.aws.company.org
/ ext-service:443 (<error: endpoints "ext-service" not found>)
Annotations: field.cattle.io/publicEndpoints:
[{"addresses":["10.28.107.202","10.28.127.174","10.28.95.227"],"port":443,"protocol":"HTTPS","serviceName":"tester:ext-service","ingressNa...
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Sync 4m39s (x936 over 3h59m) nginx-ingress-controller Scheduled for sync
Normal Sync 4m39s (x920 over 3h54m) nginx-ingress-controller Scheduled for sync
Normal Sync 4m14s (x882 over 3h44m) nginx-ingress-controller Scheduled for sync
Normal Sync 2m39s (x919 over 3h52m) nginx-ingress-controller Scheduled for sync
Normal Sync 92s (x920 over 3h52m) nginx-ingress-controller Scheduled for sync
Normal Sync 92s (x920 over 3h51m) nginx-ingress-controller Scheduled for sync
- If applicable, the complete and exact curl/grpcurl command (redacted if required) and the response to the command with the -v flag:
$ curl -v -I -s -k -X GET https://income.m11hrk.infrastructure-testing.np.aws.company.org/oauth2/health/
* Trying 10.0.115.167:443...
* Connected to income.m11hrk.infrastructure-testing.np.aws.company.org (10.0.115.167) port 443 (#0)
* ALPN: offers h2
* ALPN: offers http/1.1
} [5 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.3 (OUT), TLS handshake, Client hello (1):
} [512 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.3 (IN), TLS handshake, Server hello (2):
{ [104 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (IN), TLS handshake, Certificate (11):
{ [949 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (IN), TLS handshake, Server key exchange (12):
{ [333 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (IN), TLS handshake, Server finished (14):
{ [4 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
} [70 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
} [1 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (OUT), TLS handshake, Finished (20):
} [16 bytes data]
* [CONN-0-0][CF-SSL] TLSv1.2 (IN), TLS handshake, Finished (20):
{ [16 bytes data]
* SSL connection using TLSv1.2 / ECDHE-RSA-AES128-GCM-SHA256
* ALPN: server accepted h2
* Server certificate:
* subject: O=ACME Examples, Inc; CN=*.m11hrk.infrastructure-testing.np.aws.company.org
* start date: Feb 28 07:08:29 2023 GMT
* expire date: Apr 29 07:08:29 2023 GMT
* issuer: O=ACME Examples, Inc; CN=*.m11hrk.infrastructure-testing.np.aws.company.org
* SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multiplexing
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
} [5 bytes data]
* h2h3 [:method: GET]
* h2h3 [:path: /oauth2/health/]
* h2h3 [:scheme: https]
* h2h3 [:authority: income.m11hrk.infrastructure-testing.np.aws.company.org]
* h2h3 [user-agent: curl/7.87.0]
* h2h3 [accept: */*]
* Using Stream ID: 1 (easy handle 0x7fa10e00d200)
} [5 bytes data]
> GET /oauth2/health/ HTTP/2
> Host: income.m11hrk.infrastructure-testing.np.aws.company.org
> user-agent: curl/7.87.0
> accept: */*
>
{ [5 bytes data]
* Connection state changed (MAX_CONCURRENT_STREAMS == 128)!
} [5 bytes data]
< HTTP/2 503
< date: Tue, 28 Feb 2023 11:50:06 GMT
< content-type: text/html
< content-length: 190
< strict-transport-security: max-age=31536000; includeSubDomains
<
} [5 bytes data]
* Connection #0 to host income.m11hrk.infrastructure-testing.np.aws.company.org left intact
HTTP/2 503
date: Tue, 28 Feb 2023 11:50:06 GMT
content-type: text/html
content-length: 190
strict-transport-security: max-age=31536000; includeSubDomains
How to reproduce this issue: I created 5 threads that bombarded the endpoint with requests. At the same time, I created services and ingresses with non-existent DNS names just to force the ingress-nginx process to reload.
Create a service and an ingress:
echo "
apiVersion: v1
kind: Service
metadata:
name: ext-service
namespace: tester
spec:
type: ExternalName
externalName: tester.custom-pre-acc1.np.aws.company.org
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
name: ext-ingress
namespace: tester
spec:
ingressClassName: nginx
rules:
- host: income.m11hrk.infrastructure-testing.np.aws.company.org
http:
paths:
- backend:
service:
name: ext-service
port:
number: 443
path: /
pathType: Prefix
tls:
- hosts:
- income.m11hrk.infrastructure-testing.np.aws.company.org
" | kubectl apply -f -
Make requests (5 background threads):
#!/bin/bash
URL="https://income.m11hrk.infrastructure-testing.np.aws.company.org/oauth2/health/"
i=0
while true
do
((i++))
echo "Attempt ${i}"
curl -v -I -s -k -X GET ${URL}
done
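The loop above is a single client; to get the five parallel clients mentioned, run several copies in the background. A sketch, where request.sh is a hypothetical file name for the loop above:
# request.sh is a hypothetical name for the request loop script above
for t in 1 2 3 4 5; do ./request.sh > "client-${t}.log" 2>&1 & done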
Put some pressure on ingress-nginx so it reloads:
echo "
apiVersion: v1
kind: Service
metadata:
name: ext-service${i}
namespace: tester
spec:
type: ExternalName
externalName: tester${i}.custom-pre-acc1.np.aws.company.org
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: tester
annotations:
nginx.ingress.kubernetes.io/backend-protocol: HTTPS
name: ext-ingress${i}
spec:
ingressClassName: nginx
rules:
- host: income${i}.m11hrk.infrastructure-testing.np.aws.company.org
http:
paths:
- backend:
service:
name: ext-service${i}
port:
number: 443
path: /
pathType: Prefix
tls:
- hosts:
- income${i}.m11hrk.infrastructure-testing.np.aws.company.org
" > stressIng.yaml
echo "
#!/bin/bash
set -x
i=0
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
while true;
do
((i++))
date
sed "s/\${i}/${i}/g" ${SCRIPT_DIR}/stressIng.yaml | kubectl apply -f -
sleep 3
echo ""
done
" > stress.sh
./stress.sh
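To quantify how many requests fail during the reload storm, a minimal sketch that tallies status codes against the same test endpoint (500 requests is an arbitrary sample size):
#!/bin/bash
# Tally HTTP status codes observed while stress.sh is forcing reloads.
URL="https://income.m11hrk.infrastructure-testing.np.aws.company.org/oauth2/health/"
declare -A counts
for n in $(seq 1 500); do
  code=$(curl -s -k -o /dev/null -w '%{http_code}' "${URL}")
  counts[$code]=$(( ${counts[$code]:-0} + 1 ))
done
for code in "${!counts[@]}"; do
  echo "HTTP ${code}: ${counts[$code]}"
done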
About this issue
- State: open
- Created a year ago
- Reactions: 8
- Comments: 18 (9 by maintainers)
I don’t understand or agree with the validity of the tests to begin with, and the same goes for the DNS setup.
Also, there is curl-related info visible in the response: https://curl.se/libcurl/c/CURLMOPT_MAX_CONCURRENT_STREAMS.html
The CI does test services of type ExternalName, but this could be a corner case where multiple consecutive reloads result in the nginx daemon being unavailable for very short windows of time. I am sure we don’t have this kind of test in CI.
Please wait for a developer to look and comment.
@ikarlashov, everyone concerned can see this issue.
If you follow https://kubernetes.github.io/ingress-nginx/developer-guide/getting-started/#testing, you will see docs on running the e2e tests.
I checked and tried to run the e2e test that covers the ExternalName service type. It fails for me; you can try the same.
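For reference, a focused e2e run per the linked developer guide looks roughly like this; the FOCUS string targeting the ExternalName suite is an assumption and may need to match the exact test description in test/e2e:
# Sketch following the linked developer guide; FOCUS narrows the suite to matching tests.
# "ExternalName" as the focus value is an assumption, not verified here.
git clone https://github.com/kubernetes/ingress-nginx
cd ingress-nginx
FOCUS="ExternalName" make kind-e2e-test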
So people are looking into it, but the developers are working on even higher-priority, genuinely serious problems, so there is no fixed ETA on when this performance problem with ExternalName-type services will be looked at.
Also, this use case is very unusual, in addition to looking like a corner case. If the backend service were of type ClusterIP and in the same namespace, it would be of higher importance than it currently is.