uvicorn: Sporadic errors with nginx connection pipelining
There seems to be some weird issue with pipelining and/or connection reuse and nginx. I have nginx sitting in front of my uvicorn/Starlette app, and on PATCH/POST requests I occasionally get 502s from nginx (about 30-40% of the time). The nginx logs say:
```
upstream prematurely closed connection while reading response header from upstream
```
all over the place
and my app logs say:
```
WARNING: Invalid HTTP request received.
WARNING: Invalid HTTP request received.
```
with not much more information.
I worked around it by disabling connection reuse: adding a default `Connection: close` header forces nginx to close the connection after each request. Performance drops significantly, but at least I don’t get 502s.
```python
uvicorn.run(app, host='0.0.0.0', port=8445, headers=[('Server', 'custom'), ('Connection', 'close')], proxy_headers=True)
```
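A stdlib sketch (not uvicorn internals) of what the `Connection: close` workaround buys you: the server closes the socket after responding, so the peer observes EOF and knows the connection cannot be reused.

```python
# Minimal demo: a server that honors `Connection: close` ends the TCP
# connection after the response, which the client sees as recv() == b"".
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as sock:
    # the client asks for the connection to be closed after the response
    sock.sendall(b"GET / HTTP/1.1\r\nHost: x\r\nConnection: close\r\n\r\n")
    data = b""
    while chunk := sock.recv(4096):  # recv() returning b"" == peer sent FIN
        data += chunk

server.shutdown()
print(data.split(b"\r\n")[0])  # the status line
```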
potentially relevant: the 502 doesn’t seem to ever happen on a GET
About this issue
- State: closed
- Created 5 years ago
- Reactions: 14
- Comments: 67 (23 by maintainers)
I did some tcpdump’ing and found that uvicorn does not send a TCP FIN packet to nginx, so nginx assumes the connection is still usable. nginx then sends the next HTTP request on the open socket, but uvicorn replies with a TCP FIN packet and logs the invalid-request error message. So uvicorn did not have an open connection??? Even though it never sent the FIN…
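A stdlib sketch of the race the capture shows (assumptions: a toy HTTP/1.0 server stands in for uvicorn, a raw socket client stands in for nginx): the client reuses a connection the server has already torn down. The second request is accepted into the send buffer, but no response ever comes back; the reader just sees EOF or a reset.

```python
# Demo of a proxy reusing a dead upstream connection. The server closes
# after each exchange (HTTP/1.0 default), but the client assumes
# keep-alive and sends a second request on the same socket.
import socket
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class OneShot(BaseHTTPRequestHandler):
    # default protocol_version is HTTP/1.0, so the server closes the
    # connection after every response
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, *args):
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), OneShot)
threading.Thread(target=server.serve_forever, daemon=True).start()

sock = socket.create_connection(server.server_address)
request = b"GET / HTTP/1.1\r\nHost: x\r\n\r\n"

sock.sendall(request)
first = b""
while b"\r\n\r\n" not in first:  # read the complete first response
    first += sock.recv(4096)

try:
    sock.sendall(request)        # "reuse" the connection, like nginx does
    second = sock.recv(4096)     # nothing comes back: EOF or reset
except OSError:
    second = b""
sock.close()
server.shutdown()
```

In the real bug the reuse fails the other way around (nginx never saw a FIN at all), but the observable symptom at the proxy is the same: a request written into a socket that will never produce a response.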
Workarounds:
```shell
gunicorn -k uvicorn.workers.UvicornWorker -c /gunicorn_conf.py main:app --keep-alive=0
gunicorn -k uvicorn.workers.UvicornH11Worker -c /gunicorn_conf.py main:app
```

But I prefer to have connection reuse available and working. Still investigating how to get that working.
If people impacted could look at their logs (if running 0.13.2 🍏) and tell us whether they see something about a potentially malformed request, that would help.
Note to self: reproduction steps, hopefully showing what we see here.
Run nginx:

```shell
docker run --rm --name nginx_uvicorn -p 80:80 -v /tmp/uvicorn.sock:/tmp/uvicorn.sock -v $PWD/nginx.conf:/etc/nginx/nginx.conf:ro nginx
```

with quite a minimal nginx.conf. Run uvicorn:

```shell
uvicorn app:app --uds /tmp/uvicorn.sock
```

Send an incorrect request with:

```shell
curl -i 'http://localhost/?x=y z'
```

client result:
nginx result:
uvicorn result:
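For context on why that curl request is rejected: RFC 7230 defines the request line as exactly `method SP request-target SP HTTP-version`, so an unencoded space in the query string makes the line ambiguous. A toy parser (not uvicorn's actual code; uvicorn delegates parsing to httptools or h11) illustrates the check:

```python
# Toy request-line parser: splits on spaces and requires exactly three
# parts, the same constraint a real HTTP/1.1 parser enforces.
def parse_request_line(line: bytes):
    parts = line.split(b" ")
    if len(parts) != 3:
        raise ValueError("invalid request line: %r" % line)
    method, target, version = parts
    return method, target, version

print(parse_request_line(b"GET /?x=y%20z HTTP/1.1"))  # percent-encoded: fine
try:
    parse_request_line(b"GET /?x=y z HTTP/1.1")       # raw space: rejected
except ValueError as exc:
    print(exc)
```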
Encountering the same issue with a similar configuration. However, I also noticed the deployment instructions mention a `--proxy-headers` flag that should be set, but it appears I can’t do that while also using Gunicorn? Are there other steps I can take to address this issue?
Additional details:
GET, DELETE and PUT work correctly. POST and PATCH fail (exactly) half the time. Headers:
Here is a link to the docker image we are using for the deployment as well.
Removed Gunicorn and tried running just uvicorn for testing, and I’m still facing the same issue:
OK, I’m closing this long-standing issue. We think https://github.com/encode/uvicorn/pull/1263 fixed it, and since then we have received no complaints, so 😃
I simply put the URL parameter in the body of the request, nothing fancy.
I was looking at old PRs tonight @florimondmanca… it seems like https://github.com/encode/uvicorn/pull/205 is relevant here: we don’t answer with a 400 in case of an invalid request like gunicorn or hypercorn do, and apparently at the reverse-proxy level that turns into a 502. Might be worth resurrecting that PR?
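A toy sketch of the idea behind that PR (not the actual uvicorn implementation; the validity check is a stand-in for the real parser): when the request line is unparseable, write a minimal 400 before closing, so a reverse proxy records a 400 instead of a premature close it would report as a 502.

```python
# asyncio server that answers 400 to a malformed request line instead of
# silently dropping the connection.
import asyncio

BAD_REQUEST = (
    b"HTTP/1.1 400 Bad Request\r\n"
    b"Content-Length: 0\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)
OK = b"HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"

def looks_valid(request_line: bytes) -> bool:
    # RFC 7230: request-line is exactly `method SP target SP version`
    return len(request_line.rstrip(b"\r\n").split(b" ")) == 3

async def handle(reader, writer):
    line = await reader.readline()
    # tell the client *something* before sending FIN
    writer.write(OK if looks_valid(line) else BAD_REQUEST)
    await writer.drain()
    writer.close()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"GET /?x=y z HTTP/1.1\r\nHost: x\r\n\r\n")  # invalid target
    await writer.drain()
    response = await reader.read()  # read until the server closes
    writer.close()
    server.close()
    await server.wait_closed()
    return response

response = asyncio.run(main())
print(response.split(b"\r\n")[0])
```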
Hey everyone,
I think what might help move this forward would be having Uvicorn log some more information other than the rather unhelpful `Invalid HTTP request received` message. I filed #886, which would turn those messages into much more helpful full traceback dumps:
Upon approval we can merge and release in a patch release, then you’d be able to try that version out in production and see what the actual error / cause is. Maybe that’ll be that “fragmented URL TCP packets” error from #778, maybe that’ll be something else, or a combination… We’ll see. 😃
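The mechanism is simply Python logging's `exc_info` parameter. A self-contained sketch (the `ValueError` here is a stand-in for whatever the parser actually raises):

```python
# Attaching the caught exception to a warning via exc_info makes logging
# render a full traceback after the message, instead of a bare one-liner.
import io
import logging

logger = logging.getLogger("uvicorn.error.demo")
stream = io.StringIO()
logger.addHandler(logging.StreamHandler(stream))
logger.setLevel(logging.WARNING)

try:
    raise ValueError("bad request line: b'GET /?x=y z HTTP/1.1'")
except ValueError as exc:
    logger.warning("Invalid HTTP request received.", exc_info=exc)

print(stream.getvalue())
```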
We have been having some of the errors described in this thread: an intermittent `WARNING: Invalid HTTP request received.` from uvicorn immediately followed by an `H13 Connection closed without response` from Heroku. Disabling keep-alive or switching to h11 both seem to mitigate the issue.
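For reference, those two mitigations map onto uvicorn CLI options (verify against `uvicorn --help` for your version; `app:app` is a placeholder):

```shell
uvicorn app:app --http h11              # switch from httptools to the h11 protocol implementation
uvicorn app:app --timeout-keep-alive 1  # shrink the keep-alive window (seconds)
```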
The solution we came up with is in PR https://github.com/encode/uvicorn/pull/778.
You can try this out with:

```shell
pip install -e 'git+git@github.com:eduflow/uvicorn.git@on_url-called-multiple-times#egg=uvicorn'
```

I updated our setup to use HTTP/1.1, and it had no effect.
I figured out the root cause was actually genuine timeouts in our container: subsequent requests weren’t making it into the uvicorn logs because the container had actually halted execution. So my situation was not related to this issue.
I think you quite literally just need an NGINX instance set up as a reverse proxy to reproduce. Send a POST/PUT/PATCH to get the error.
If you’re not able to reproduce it like that let me know and I’ll set you up a Dockerfile with a minimal example.
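In case it helps anyone reproduce, here is a hypothetical minimal nginx.conf matching the docker command earlier in the thread (the socket path and upstream name are assumptions, not the reporter's actual config):

```nginx
events {}

http {
    upstream app {
        server unix:/tmp/uvicorn.sock;
        keepalive 4;                       # reuse upstream connections
    }
    server {
        listen 80;
        location / {
            proxy_pass http://app;
            proxy_http_version 1.1;        # keep-alive needs HTTP/1.1 upstream
            proxy_set_header Connection ""; # clear the default "close" header
            proxy_set_header Host $host;
        }
    }
}
```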