caddy: context canceled and 502
1. What version of Caddy are you using (caddy -version)?
0.10.10
2. What are you trying to do?
Proxy an Express app.
Caddy responds with a 502 and complains when I write the request and then close the socket for writing (a half-close). The read side of the connection is still open, so Caddy should still be able to send me the proper response.
Note that the request itself gets proxied just fine (I can see it in the web app's logs). If I type the request into nc manually without closing stdin, I get the right response.
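To make the half-close explicit, the client does the equivalent of the following (a minimal sketch with a simplified GET request and a placeholder address, not the actual application code):

package main

import (
	"fmt"
	"io"
	"log"
	"net"
)

func main() {
	// Placeholder address: wherever Caddy is listening (see the nc reproduction in section 5).
	conn, err := net.Dial("tcp", "localhost:3200")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Write the complete request...
	req := "GET /api/graphql HTTP/1.1\r\nHost: localhost:3200\r\nConnection: close\r\n\r\n"
	if _, err := io.WriteString(conn, req); err != nil {
		log.Fatal(err)
	}

	// ...then half-close: shut down only the write side. The read side stays
	// open, so the response could still be delivered.
	if err := conn.(*net.TCPConn).CloseWrite(); err != nil {
		log.Fatal(err)
	}

	// With the behaviour reported here, this prints Caddy's 502 instead of
	// the backend's response.
	resp, err := io.ReadAll(conn)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", resp)
}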
3. What is your entire Caddyfile?
localhost:80
tls off
log stdout
errors stderr
proxy /media imageserver
proxy /api server:3200 {
    keepalive 0
}
4. How did you run Caddy (give the full command and describe the execution environment)?
Docker, abiosoft/caddy image.
5. Please paste any relevant HTTP request(s) here.
➜ ~ cat request.txt
POST /api/graphql HTTP/1.1
Host: localhost:3200
Content-Length: 132
Content-type: application/json
Authorization: Bearer R+VNcynE/pbt5lVgLVGlFVhxQpj5DOjs4dNPWqeGHFGPz41XdD7RFQjbrYDoPesA

{"operationName":"currentUser","variables":{},"query":"query currentUser {\n viewer {\n id\n name\n __typename\n }\n}\n"}
➜ ~ nc localhost 3200 < request.txt
HTTP/1.1 502 Bad Gateway
Content-Type: text/plain; charset=utf-8
Server: Caddy
X-Content-Type-Options: nosniff
Date: Tue, 17 Oct 2017 21:06:29 GMT
Content-Length: 16

502 Bad Gateway
6. What did you expect to see?
A proper JSON response.
7. What did you see instead (give full error messages and/or log)?
gateway_1 | 172.18.0.1 - - [17/Oct/2017:21:04:30 +0000] "POST /api/graphql HTTP/1.1" 502 16
gateway_1 | 17/Oct/2017:21:04:30 +0000 [ERROR 502 /api/graphql] context canceled
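As far as I can tell, the "context canceled" comes from Go's net/http machinery, which Caddy's proxy is built on: the server cancels a request's context when it thinks the client connection has gone away, and a half-closed connection apparently looks like exactly that. A minimal sketch of that standard-library mechanism (not Caddy's actual code; the addresses and log format are just illustrative):

package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Placeholder backend address.
	backend, err := url.Parse("http://localhost:3200")
	if err != nil {
		log.Fatal(err)
	}

	proxy := httputil.NewSingleHostReverseProxy(backend)
	proxy.ErrorHandler = func(w http.ResponseWriter, r *http.Request, err error) {
		// When the client half-closes its connection, the server cancels the
		// request context, the upstream round trip fails with "context
		// canceled", and this handler answers 502 -- the same pattern as the
		// Caddy log lines above.
		log.Printf("[ERROR 502 %s] %v", r.URL.Path, err)
		http.Error(w, "502 Bad Gateway", http.StatusBadGateway)
	}

	log.Fatal(http.ListenAndServe(":8080", proxy))
}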
About this issue
- State: closed
- Created 7 years ago
- Reactions: 7
- Comments: 16 (4 by maintainers)
This can also be forced upon Caddy via browser requests. Imagine a backend serving this simple PHP at localhost:8081:
And a simple Caddyfile like this:
Cancelling the request in the browser before the backend responds produces the [ERROR 502 /] context canceled message in the Caddy log. Subsequent requests then get 502 bad gateway and [ERROR 502 /] no hosts available upstream in the Caddy log, and this persists until fail_timeout has passed since the first failure. In the log of the backend (NGINX), I only get an HTTP 499:
2018-03-03T14:24:05+00:00 127.0.0.1 - - localhost/localhost "GET / HTTP/1.1" 499 4.048 374 0 29 1 "-" "-" "Mozilla/5.0 (X11; Linux x86_64; rv:58.0) Gecko/20100101 Firefox/58.0"
So it seems Caddy is misinterpreting the client "error" as a backend error. In my case I'm currently using fail_timeout 0 as a workaround, or else all my sites go down for the fail_timeout period. @mholt This sounds pretty much exploitable for DDoS attacks, doesn't it?
Caddy version: 0.10.11 (and a build from git, https://github.com/mholt/caddy/commit/5552dcbbc7f630ada7c7d030b37c2efdce750ace).
@mholt: It is enough for one evil client to repeatedly make a request and immediately close the connection, and Caddy will stop proxying for the configured fail_timeout. Even if you set max_fails higher than 1, a single client can easily cancel that many requests in a row, so it is not even a DDoS but a more severe DoS attack.
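For reference, a sketch of that workaround applied to the proxy block from the Caddyfile in section 3 (Caddy 0.10.x proxy subdirective syntax; fail_timeout 0 is what the commenter above reports using so that a canceled request does not take the upstream out of rotation):

proxy /api server:3200 {
    keepalive 0
    fail_timeout 0
}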