go: net/http: connection reuse does not work happily with normal use of json package
Please answer these questions before submitting your issue. Thanks!
What version of Go are you using (go version)?
go 1.8.1
What operating system and processor architecture are you using (go env)?
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
What did you do?
Used json.NewDecoder(resp.Body) to read the body of an HTTP response. The HTTP request was using TLS. This is a playground app demonstrating the issue.
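For reference, the pattern in question looks roughly like this (a minimal sketch, not the original playground app; the URL and payload type are placeholders):

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

func fetch(client *http.Client) error {
	// Placeholder URL; the original playground app used its own server.
	resp, err := client.Get("https://example.com/data")
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	var m map[string]string
	// Decode stops as soon as the JSON value is complete, which can
	// leave the trailing chunked-encoding EOF marker unread in the body.
	return json.NewDecoder(resp.Body).Decode(&m)
}

func main() {
	client := &http.Client{}
	for i := 0; i < 3; i++ { // each iteration should reuse the TLS conn
		if err := fetch(client); err != nil {
			log.Fatal(err)
		}
	}
}
```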
What did you expect to see?
I expected the TLS connection to be reused on subsequent requests.
What did you see instead?
The TLS connection is discarded and a new connection is opened for every request.
I also started a thread on the mailing list.
Okay, here’s what’s happening:
The client side is working as intended after the change for #17355. If it had seen an EOF from a read, or the 0-byte HTTP/1.1 EOF chunking marker in its buffer, it would've reused the connection. But it never sees one:
The HTTP server buffers the output from Handlers up to a certain size, both to minimize the number of writes on the wire and to add a Content-Length header for small-ish writes.
The writes are pretty big here, though. At 412 elements (which works), it's doing an 8042-byte write (pre-chunking+TLS overhead), and at 413 elements (which fails), it's doing an 8062-byte write.
Those are bigger than all of the server's buffers (the 2KB http.Handler buffer and the conn's buffer), so the data gets written immediately, before your handler exits. The server doesn't yet know that it's the final write, so it can't tack on the EOF chunk.
Right after the final big Write to the ResponseWriter from the json.NewEncoder(http.ResponseWriter).Encode(bigMap), the handler exits. This causes the Server to then do another write, to close the chunking layer, and send the final “0\r\n”. But it’s too late, because your client program has already decoded the object (it had everything it needed, pre-EOF), and closed the Response.Body. Because the Response.Body was closed before seeing an EOF, and because it was sent chunked instead of with a content-length, the client has no idea how much might remain, so it closes the connection just as the EOF chunk is arriving.
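The server side of that sequence is essentially this shape (a sketch; bigMap stands in for the ~413-element map from the report):

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

// bigMap stands in for the reported ~413-element map, whose encoding
// is larger than the Server's buffers.
var bigMap = func() map[string]string {
	m := make(map[string]string, 413)
	for i := 0; i < 413; i++ {
		m[fmt.Sprintf("key-%03d", i)] = "value"
	}
	return m
}()

func main() {
	http.HandleFunc("/data", func(w http.ResponseWriter, r *http.Request) {
		// This single large write exceeds the buffers and hits the
		// wire before the handler returns.
		if err := json.NewEncoder(w).Encode(bigMap); err != nil {
			log.Print(err)
		}
		// Only after the handler returns does the Server close the
		// chunking layer and write the final "0\r\n" in a separate write.
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```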
Note that all of this only happens with certain response sizes, and not even necessarily with all large responses. It just depends on how things line up and get flushed out.
There are two possible fixes, possibly both:
1. In the absence of an explicit http.Flusher.Flush from the Handler, the Server could always hold on to a few buffered bytes, to prevent the client from getting everything in one packet. That would cause the final EOF packet on the wire to contain, say, the final "}\n" of the JSON object, as well as the 0\r\n EOF chunking marker.
2. The Transport could go back to its old behavior (with a better implementation) of waiting for a bit (50 ms?) for the EOF before throwing the connection away, reading at most a few bytes. (A rough sketch of this idea follows below.)
But neither of these are critical for Go 1.9, so moving to Go 1.10.
@ianlancetaylor & @dcheney-atlassian this isn't about calling
Close on the response body. In fact the sample app on the playground closes the body in a defer. This issue is trying to address connection reuse when a chunked reader is being used. Even when Close is called, TLS connections aren't being reused for subsequent requests, since the response body isn't fully drained (as explained in the issue and on the golang mailing list). This has previously been an interesting use case for the golang team, e.g. see https://github.com/google/go-github/pull/317 and the fix that was pushed by @bradfitz. As a result, I was expecting the golang team to be interested in fixing this issue as well, but maybe that's not the case.
Furthermore, a quick search on GitHub shows that a lot of other Go developers expect this pattern to work without having to drain the response body. After careful examination of the search results, it seems some developers have run into this issue and are manually draining the response body.
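The manual drain pattern those developers use looks roughly like this (a sketch; in Go 1.8-era code the discard sink is ioutil.Discard):

```go
package sketch

import (
	"encoding/json"
	"io"
	"io/ioutil"
	"net/http"
)

// decodeAndDrain decodes a JSON body, then drains and closes it so the
// Transport sees EOF (including the trailing chunk marker) and can
// return the connection to the idle pool for reuse.
func decodeAndDrain(resp *http.Response, v interface{}) error {
	defer func() {
		io.Copy(ioutil.Discard, resp.Body) // consume any leftover bytes
		resp.Body.Close()
	}()
	return json.NewDecoder(resp.Body).Decode(v)
}
```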
Chiming in from https://golang.org/cl/55613: I would vote against (1) alone because it works only if the server is Go. It’s entirely possible that a non-Go server has the same behavior, which will only lead to us fixing this again on the client. So I think the fix should go in the client, e.g., (2). I’m okay with doing (1) as well if someone has an argument that such a server change would be generally useful for non-Go clients.
Waiting a hard-coded N ms for the final empty chunk makes me a bit uneasy. If we choose N too small, we miss the final chunk; if we choose it too large, it would have been faster to open a new connection. Ideally we'd use a "happy eyeballs" approach where we race the final read with a new connection, perhaps starting the new connection dial after a short delay (10ms?). But maybe that is too complex and a hard-coded N ms read is good enough.
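A conceptual sketch of that race (all names here are hypothetical stand-ins for Transport internals, not a proposal for the actual code):

```go
package sketch

import (
	"net"
	"time"
)

// reuseOrRedial gives the final EOF read a short head start, then
// starts a new dial; whichever strategy succeeds first wins.
func reuseOrRedial(awaitEOF func() error, dial func() (net.Conn, error)) (reused bool, conn net.Conn, err error) {
	eofc := make(chan error, 1)
	go func() { eofc <- awaitEOF() }()

	select {
	case err := <-eofc:
		if err == nil {
			return true, nil, nil // saw the final chunk; reuse the old conn
		}
		// The old connection is unusable; fall through to dialing.
	case <-time.After(10 * time.Millisecond):
		// Head start expired; dial a new connection instead.
	}
	conn, err = dial()
	return false, conn, err
}
```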