ingress-nginx: Ingress hangs the request if the upstream application does not write to the response for a long time (185 sec) after a successful connection
Hello!
Can you please tell me whether there is any limit on how long nginx waits for data from the upstream? If the upstream application sends data to the stream every second, the request completes successfully. But if the app sends a single character to the response right after the connection is established, then sleeps for more than 3 minutes, and then tries to send something else, ingress-nginx does not send anything to the client and, moreover, does not close the client connection.
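As a workaround on the client side, the hang can at least be bounded by always passing an explicit timeout. The sketch below is my own illustration (the function name and URL are hypothetical, not from the original report): a Python client with no timeout blocks forever on a stalled proxy, while one with a timeout fails fast and can retry.

```python
# Defensive client sketch: treat a read that stalls past `timeout` as a
# failure instead of blocking forever (which is the hang described above).
import socket
import urllib.request


def fetch_with_timeout(url, timeout=5.0):
    """Return the response body, or None if the connection stalls past `timeout`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read()
    except (socket.timeout, TimeoutError):
        return None
```

With the default timeout of None, which many Python/Java/PHP clients use, the read blocks indefinitely; curl behaves differently, which may explain why it appears to work.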
After many tests with every annotation containing the word "timeout", I also tried adding a separate location block with many variants:
location ~ ^/slow/(.*)$ {
    # I checked these parameters together and separately
    keepalive_timeout 375s;
    keepalive_requests 0;
    lua_socket_connect_timeout 300s;
    lua_socket_send_timeout 300s;
    lua_socket_read_timeout 300s;
    lua_socket_keepalive_timeout 300s;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_connect_timeout 1800;
    proxy_read_timeout 1800;
    proxy_send_timeout 1800;
    send_timeout 1800;
    proxy_pass http://s2.meta.vmc.loc:9990/$1?$query_string;
}
NGINX Ingress controller version: v0.35.0
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T23:41:24Z", GoVersion:"go1.14", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.10", GitCommit:"98d5dc5d36d34a7ee13368a7893dcb400ec4e566", GitTreeState:"clean", BuildDate:"2021-04-15T03:20:25Z", GoVersion:"go1.15.10", Compiler:"gc", Platform:"linux/amd64"}
Environment:
- Cloud provider or hardware configuration: Yandex Cloud
- OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
- Kernel (e.g. uname -a):
Linux cl1bmio2esoonhfpaeuu-azud 5.4.0-72-generic #80-Ubuntu SMP Mon Apr 12 17:35:00 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: kubectl
- Others: no
What happened:
Ingress hangs the request if the upstream application does not write to the response for a long time (3 min 5 sec or longer) after a successful connection. If no timeout is set in the client, the client script hangs forever; the nginx ingress controller never closes the connection. Requests sent with curl complete fine, but Python/Java/PHP clients have this problem.
What you expected to happen:
The request should complete successfully and return all data.
How to reproduce it:
Run a simple Python Flask app in a pod, exposed through a simple k8s Service.
Dockerfile
# https://hub.docker.com/_/python/
FROM python:3.5-onbuild
ENTRYPOINT python app.py
requirements.txt
requests
flask
app.py
import time

from flask import Flask

app = Flask(__name__)


@app.route("/<sec>")
def hello(sec):
    time.sleep(int(sec))
    return "Hello World!"


if __name__ == "__main__":
    app.run(host="0.0.0.0")
Install the ingress controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.35.0/deploy/static/provider/cloud/deploy.yaml
kubectl get -o yaml output for the service and ingress
App service
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"metaappcontent-demo1","heritage":"metaappcontent-demo1","monitoring":"prometheus"},"name":"metaappcontent-demo1","namespace":"default"},"spec":{"ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":8080},{"name":"metrics","port":8080,"protocol":"TCP","targetPort":8080}],"selector":{"app":"metaappcontent-demo1"},"type":"ClusterIP"}}
  creationTimestamp: "2021-04-14T11:09:36Z"
  labels:
    app: metaappcontent-demo1
    heritage: metaappcontent-demo1
    monitoring: prometheus
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
        f:labels:
          .: {}
          f:app: {}
          f:heritage: {}
          f:monitoring: {}
      f:spec:
        f:ports:
          .: {}
          k:{"port":80,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
          k:{"port":8080,"protocol":"TCP"}:
            .: {}
            f:name: {}
            f:port: {}
            f:protocol: {}
            f:targetPort: {}
        f:selector:
          .: {}
          f:app: {}
        f:sessionAffinity: {}
        f:type: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-04-14T11:09:36Z"
  name: metaappcontent-demo1
  namespace: default
  resourceVersion: "57590018"
  selfLink: /api/v1/namespaces/default/services/metaappcontent-demo1
  uid: 3500f27b-ab5c-4846-86d3-9d5dba98ead7
spec:
  clusterIP: 10.96.218.27
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: metrics
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: metaappcontent-demo1
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod-metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
    ingress.kubernetes.io/retry-non-idempotent: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"Ingress","metadata":{"annotations":{"cert-manager.io/cluster-issuer":"letsencrypt-prod-metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx","ingress.kubernetes.io/retry-non-idempotent":"true","ingress.kubernetes.io/ssl-redirect":"true","kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/client-body-buffer-size":"10M","nginx.ingress.kubernetes.io/configuration-snippet":"\n more_clear_headers \"Server\";\n \n","nginx.ingress.kubernetes.io/proxy-body-size":"3072M","nginx.ingress.kubernetes.io/proxy-buffer-size":"10M","nginx.ingress.kubernetes.io/proxy-connect-timeout":"18800","nginx.ingress.kubernetes.io/proxy-max-temp-file-size":"4096m","nginx.ingress.kubernetes.io/proxy-read-timeout":"18800","nginx.ingress.kubernetes.io/proxy-send-timeout":"18800","nginx.ingress.kubernetes.io/send-timeout":"18800","nginx.ingress.kubernetes.io/server-snippet":"client_body_timeout 3600s;"},"labels":{"app":"metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx","heritage":"metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx"},"name":"metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx","namespace":"default"},"spec":{"rules":[{"host":"metaappcontent-demo1.b.xxxxxxxxxxxxxxxxxx","http":{"paths":[{"backend":{"service":{"name":"metaappcontent-demo1","port":{"number":80}}},"path":"/","pathType":"ImplementationSpecific"}]}}],"tls":[{"hosts":["metaappcontent-demo1.b.xxxxxxxxxxxxxxxxxx"],"secretName":"devision-letsencrypt-tls-metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx"}]}}
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/client-body-buffer-size: 10M
    nginx.ingress.kubernetes.io/configuration-snippet: "\n more_clear_headers
      \"Server\";\n \n"
    nginx.ingress.kubernetes.io/proxy-body-size: 3072M
    nginx.ingress.kubernetes.io/proxy-buffer-size: 10M
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "18800"
    nginx.ingress.kubernetes.io/proxy-max-temp-file-size: 4096m
    nginx.ingress.kubernetes.io/proxy-read-timeout: "18800"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "18800"
    nginx.ingress.kubernetes.io/send-timeout: "18800"
    nginx.ingress.kubernetes.io/server-snippet: client_body_timeout 3600s;
  creationTimestamp: "2021-06-29T18:30:02Z"
  generation: 4
  labels:
    app: metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
    heritage: metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
  managedFields:
  - apiVersion: networking.k8s.io/v1beta1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:loadBalancer:
          f:ingress: {}
    manager: nginx-ingress-controller
    operation: Update
    time: "2021-06-29T18:30:16Z"
  - apiVersion: networking.k8s.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:cert-manager.io/cluster-issuer: {}
          f:ingress.kubernetes.io/retry-non-idempotent: {}
          f:ingress.kubernetes.io/ssl-redirect: {}
          f:kubectl.kubernetes.io/last-applied-configuration: {}
          f:kubernetes.io/ingress.class: {}
          f:nginx.ingress.kubernetes.io/client-body-buffer-size: {}
          f:nginx.ingress.kubernetes.io/configuration-snippet: {}
          f:nginx.ingress.kubernetes.io/proxy-body-size: {}
          f:nginx.ingress.kubernetes.io/proxy-buffer-size: {}
          f:nginx.ingress.kubernetes.io/proxy-connect-timeout: {}
          f:nginx.ingress.kubernetes.io/proxy-max-temp-file-size: {}
          f:nginx.ingress.kubernetes.io/proxy-read-timeout: {}
          f:nginx.ingress.kubernetes.io/proxy-send-timeout: {}
          f:nginx.ingress.kubernetes.io/send-timeout: {}
          f:nginx.ingress.kubernetes.io/server-snippet: {}
        f:labels:
          .: {}
          f:app: {}
          f:heritage: {}
      f:spec:
        f:rules: {}
        f:tls: {}
    manager: kubectl-client-side-apply
    operation: Update
    time: "2021-07-06T08:29:48Z"
  name: metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
  namespace: default
  resourceVersion: "105225874"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
  uid: c102c711-af78-4572-847b-211efbb68579
spec:
  rules:
  - host: metaappcontent-demo1.b.xxxxxxxxxxxxxxxxxx
    http:
      paths:
      - backend:
          serviceName: metaappcontent-demo1
          servicePort: 80
        path: /
        pathType: ImplementationSpecific
  tls:
  - hosts:
    - metaappcontent-demo1.b.xxxxxxxxxxxxxxxxxx
    secretName: devision-letsencrypt-tls-metaappcontent-demo1-metaappcontent-demo1-b-xxxxxxxxxxxxxxxxxx
status:
  loadBalancer:
    ingress:
    - ip: xxxxxxxxxxxxxxxxxx
/help
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 50 (26 by maintainers)
Hi there, one of the Kubernetes Slack admins here. Unfortunately the built-in signup system in Slack doesn't offer all the functionality we need, so you have to go to https://slack.k8s.io to register.
I have informed the slack-admins that you cannot sign up.