envoy: 413 Payload Too Large when using nginx and Envoy together

Title: 413 Payload Too Large when using nginx and Envoy together to POST a file larger than 1 MB. If I post the file to nginx or Envoy by itself, the problem doesn’t exist.

Description: I deployed the simple app from https://github.com/rayh0001/gs-spring-boot-docker to my k8s cluster in IBM Cloud. I also have Istio running in the same k8s cluster.

When I send a curl POST of a file over 1 MB to the service, I get a 413 response.

$ curl -svo  /dev/null -F 'file=@405217.pdf' mycluster.us-east.containers.mybluemix.net/scan -H 'transactionId: 11111'  -H 'Cache-Control: no-cache'
*   Trying 169.60.83.14...
* TCP_NODELAY set
* Connected to mycluster.us-east.containers.mybluemix.net (169.60.83.14) port 80 (#0)
> POST /scan HTTP/1.1
> Host: mycluster.us-east.containers.mybluemix.net
> User-Agent: curl/7.54.0
> Accept: */*
> transactionId: 11111
> Cache-Control: no-cache
> Content-Length: 1449466
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=------------------------c65bbc05407833b1
>
< HTTP/1.1 100 Continue
} [154 bytes data]
< HTTP/1.1 413 Payload Too Large
< Date: Sat, 24 Mar 2018 00:32:08 GMT
< Content-Type: text/plain
< Content-Length: 17
< Connection: keep-alive
* HTTP error before end of send, stop sending
<
{ [17 bytes data]
* Closing connection 0

Chatted with @PiotrSikora, who suggested tweaking per_connection_buffer_limit_bytes to work around this issue. I developed a Pilot webhook (https://github.com/linsun/istioluawebhook), and the issue is resolved when the webhook is running.
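For reference, the suggested workaround corresponds to raising per_connection_buffer_limit_bytes on the listener. A minimal sketch of what that might look like in an Envoy static config (the addresses, names, and the 10 MiB value are illustrative assumptions, not recommendations; the field also exists on clusters for the upstream direction):

```yaml
static_resources:
  listeners:
  - name: ingress
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    # Default is 1 MiB; bodies buffered past this limit trigger the 413.
    # Raised here to 10 MiB as an example workaround.
    per_connection_buffer_limit_bytes: 10485760
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        # ... usual HTTP connection manager config ...
```

Note this only moves the threshold; a sufficiently large upload will still hit the new limit, which is why the issue asks for a dynamic solution.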

I wanted to open this issue to see if this is something Envoy can fix, e.g. Envoy could slow down reading from the wire once the buffer exceeds the soft watermark. As a user, I’d prefer that Envoy handle this dynamically.

  • Logs:
[2018-03-13 01:39:55.026][350][debug][http] external/envoy/source/common/http/conn_manager_impl.cc:1310] [C36][S17322680420845703538] request data too large watermark exceeded

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 18 (14 by maintainers)

Most upvoted comments

You can also write the filter such that it only buffers until the decision is made, and then starts streaming. That is how our internal Lyft auth filters work, as well as the ratelimit filter in the public repo. But you will need to increase the buffer limit to some amount that is mostly safe, to account for service latency.
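The buffer-until-decision-then-stream pattern described above can be sketched generically. This is a conceptual illustration only, not Envoy’s actual C++ filter API; the class name, the `on_data`/`on_decision` hooks, and the limit value are all invented for the example:

```python
class BufferThenStreamFilter:
    """Buffer request body data until an external decision (e.g. auth)
    arrives; after an allow decision, flush the buffer and stream chunks
    through without further buffering."""

    def __init__(self, limit):
        self.limit = limit        # only enforced during the buffering phase
        self.buffered = b""
        self.decided = False
        self.out = []             # stands in for "forward upstream"

    def on_data(self, chunk):
        if not self.decided:
            self.buffered += chunk
            if len(self.buffered) > self.limit:
                # In Envoy this is where the 413 comes from.
                raise RuntimeError("413: body exceeded buffer limit before decision")
            return
        # Decision already made: stream through, no limit applies.
        self.out.append(chunk)

    def on_decision(self, allowed):
        self.decided = True
        if allowed:
            self.out.append(self.buffered)  # flush what was buffered
        self.buffered = b""


f = BufferThenStreamFilter(limit=10)
f.on_data(b"abc")                 # buffered (3 bytes <= 10)
f.on_decision(True)               # flushes buffer, switches to streaming
f.on_data(b"defghijklmnop")       # streamed; limit no longer enforced
```

The point of the comment above is that the limit only has to cover data arriving while the decision is pending, so it can stay small as long as it comfortably exceeds what accumulates during the auth service’s latency window.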

FWIW, I spent some time looking into this last week, but I couldn’t replicate it myself, even with something as ridiculous as per_connection_buffer_limit_bytes: 1024.

cc @alyssawilk