sso: SSO Proxy closes connections even though 200 success is logged
I have deployed sso-proxy in Kubernetes in front of a service and behind nginx-ingress, with matching timeout settings on both. Some requests to my service can take as long as 5 minutes. The sso-proxy logs say the request succeeded (ended in 200), but nginx-ingress reports that the upstream (i.e., sso-proxy) prematurely closed the connection.
I don’t believe a timeout is breaking the connection: sso-proxy reports that everything is OK, and all the timeouts are configured at 10 minutes.
Edit: see comment below for detailed traces of http requests
From the logs below, it looks like the problem is coming from SSO, but I’m not clear how to debug this further.
Logs from my service:
11:03:07.669 [1c6c89ef6fb5b77d](0383ba8aa92c27d85abee89c0f59355c) INFO c.b.a.s.Backend - Received: POST https://xxxxx.com/api/yyyyy
11:03:07.709 [1c6c89ef6fb5b77d](0383ba8aa92c27d85abee89c0f59355c) INFO c.b.a.s.p.m.Stuff - Starting computing
11:07:30.982 [1c6c89ef6fb5b77d](0383ba8aa92c27d85abee89c0f59355c) INFO c.b.a.s.p.m.Stuff - Finished computing
Logs from sso-proxy:
{"action":"proxy","http_status":200,"level":"info","msg":"","remote_address":"10.132.0.7","request_duration":236941.048871,"request_method":"POST","request_uri":"xxxxx.com/api/yyyyy","service":"sso-proxy","time":"2019-01-31 11:11:28.13111","user":"a@xxxxx.com","user_agent":"Mozilla/5.0 (X11; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0"}
Logs from nginx-ingress:
2019/01/31 11:03:07 [warn] 319#319: *3111 a client request body is buffered to a temporary file /tmp/client-body/0000000005, client: 10.4.1.1, server: xxxxx.com, request: "POST /api/yyyyy HTTP/2.0", host: "xxxxx.com", referrer: "https://xxxxx.com/page"
2019/01/31 11:07:31 [error] 319#319: *3111 upstream prematurely closed connection while reading response header from upstream, client: 10.4.1.1, server: xxxxx.com, request: "POST /api/yyyyy HTTP/2.0", upstream: "http://10.4.1.65:4180/api/yyyyy", host: "xxxxx.com", referrer: "https://xxxxx.com/page"
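To make the timeline concrete, here is a small script that computes the elapsed times from the timestamps quoted above. Note the assumption that sso-proxy's `request_duration` field is in milliseconds; the log line does not state its units.

```python
from datetime import datetime

# Timestamps copied from the logs quoted above (all on 2019-01-31).
svc_start = datetime.fromisoformat("2019-01-31 11:03:07.669")  # backend receives POST
svc_end   = datetime.fromisoformat("2019-01-31 11:07:30.982")  # backend finishes computing
nginx_err = datetime.fromisoformat("2019-01-31 11:07:31")      # nginx logs "upstream prematurely closed"
sso_log   = datetime.fromisoformat("2019-01-31 11:11:28.131")  # sso-proxy logs its 200

backend_elapsed = (svc_end - svc_start).total_seconds()
nginx_elapsed = (nginx_err - svc_start).total_seconds()

# sso-proxy's request_duration field, assuming milliseconds
sso_duration_s = 236941.048871 / 1000.0

print(f"backend computation:        {backend_elapsed:.1f}s")  # well under the 600s timeouts
print(f"nginx start -> close error: {nginx_elapsed:.1f}s")
print(f"sso-proxy request_duration: {sso_duration_s:.1f}s")
```

Two things stand out from these numbers: the backend finishes at 11:07:30.982 and nginx sees the connection close at 11:07:31, i.e. the drop happens at the moment the backend responds, not at any configured timeout boundary; and sso-proxy's 200 is logged almost four minutes later, with a duration that does not line up with the backend's own timing.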
The ingress has the following annotation:
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
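For context, a minimal Ingress manifest carrying these annotations might look like the following. This is a sketch: the metadata names are placeholders, and the backend service/port are inferred from the nginx upstream log line (`10.4.1.65:4180`), not taken verbatim from my setup.

```yaml
apiVersion: extensions/v1beta1   # networking.k8s.io/v1beta1 on newer clusters
kind: Ingress
metadata:
  name: demo-ingress             # placeholder
  namespace: demo-ns
  annotations:
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
spec:
  rules:
    - host: xxxxx.com
      http:
        paths:
          - path: /
            backend:
              serviceName: sso-proxy   # nginx forwards to sso-proxy
              servicePort: 4180        # port seen in the nginx upstream log line
```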
And the SSO upstream configuration has the following:
- service: demo-svc
  default:
    from: xxxxx.com
    to: http://demo-svc.demo-ns.svc.cluster.local:80
    options:
      # same as nginx proxy-read-timeout above
      timeout: 10m
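One way to narrow down which hop closes the connection is to replay a slow request against each hop directly. This is a sketch assuming a standard kubectl setup; the service/namespace names come from the upstream config above, while `deploy/sso-proxy`, the request path, and `payload.json` are placeholders. Note that sso-proxy will likely reject a request without a valid auth cookie, so step 2 may need a session cookie copied from a browser.

```shell
# 1. Hit the backend service directly, bypassing both sso-proxy and nginx.
kubectl -n demo-ns port-forward svc/demo-svc 8080:80 &
# -m 600 matches the configured 10-minute timeouts; use a request known to be slow.
curl -v -m 600 -X POST http://127.0.0.1:8080/api/yyyyy -d @payload.json

# 2. Repeat against sso-proxy (port 4180, per the nginx upstream log line).
kubectl -n demo-ns port-forward deploy/sso-proxy 4180:4180 &
curl -v -m 600 -X POST -H "Host: xxxxx.com" \
  http://127.0.0.1:4180/api/yyyyy -d @payload.json
```

If step 1 completes but step 2 gets cut off after ~4 minutes, the close is happening inside sso-proxy rather than in nginx or the backend.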
About this issue
- State: open
- Created 5 years ago
- Comments: 17 (5 by maintainers)
👋 @victornoel will look into this today