requests: "Transfer-Encoding: chunked" header is set even if Content-Length is provided, which causes the body to not actually get chunked
Test script
```python
import time

import requests


def f():
    # Generator body: requests cannot determine its length up front.
    yield b"lol"
    time.sleep(2)
    yield b"man"


# Header values must be strings; the Content-Length (6 bytes) is supplied explicitly.
requests.post('http://127.0.0.1:8801/', data=f(), headers={"Content-Length": "6"})
```
Actual result
Received on the server:
```
$ nc -p 8801 -l
POST / HTTP/1.1
Host: 127.0.0.1:8801
User-Agent: python-requests/2.0.0 CPython/3.3.1 Linux/3.11.0-031100rc4-generic
Accept: */*
Transfer-Encoding: chunked
Content-Length: 6
Accept-Encoding: gzip, deflate, compress

lolman
```
Expected result
Did not expect “Transfer-Encoding: chunked” since I provided the Content-Length. If requests insists on doing chunked transfer encoding, it should disregard the Content-Length and actually chunk the content (as it does when no Content-Length header is given).
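For reference, a correctly chunked body frames each chunk with its size in hexadecimal and ends with a zero-length chunk (RFC 7230, section 4.1). Had requests actually chunked the generator output, the capture would have looked roughly like this (headers abbreviated, every line terminated by CRLF):

```
POST / HTTP/1.1
Host: 127.0.0.1:8801
Transfer-Encoding: chunked

3
lol
3
man
0

```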
About this issue
- Original URL
- State: closed
- Created 11 years ago
- Comments: 52 (37 by maintainers)
Commits related to this issue
- Add StreamingIterator wrapper See https://github.com/kennethreitz/requests/issues/1648 for reasoning — committed to requests/toolbelt by sigmavirus24 10 years ago
- Merge branch 'streaming-iterator' into 'master' Streaming Iterator See https://github.com/kennethreitz/requests/issues/1648 for some more detailed reasoning. In short, if you have a iterator that y... — committed to requests/toolbelt by sigmavirus24 10 years ago
- Workaround for #1648: Don't chunk post data when content-length is given. — committed to Bluehorn/requests by tlandschoff-scale 9 years ago
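For anyone hitting this today, the toolbelt commits above add a wrapper that pairs an iterator with a known size, so requests sees a sized body and sends a plain Content-Length request instead of an inconsistent chunked one. A minimal sketch, reusing the generator and URL from the test script above:

```python
import requests
from requests_toolbelt import StreamingIterator


def f():
    yield b"lol"
    yield b"man"


# Wrapping the generator together with its total size (6 bytes) lets requests
# send a Content-Length header and skip Transfer-Encoding: chunked entirely.
body = StreamingIterator(6, f())
requests.post('http://127.0.0.1:8801/', data=body)
```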
@timuralp I remain opposed to adding a flag for this. It’s really just unacceptable to have an HTTP/1.1 implementation that cannot handle chunked transfer encoding in 2016. It’s been a specification requirement for so long that the first specification that required it is nearly old enough to vote in the United States of America: I don’t think we can keep cutting entities slack for not doing it.
From my perspective the bug here remains that we may incorrectly emit both Content-Length and Transfer-Encoding. Of course, my perspective is non-binding. 😉
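The header collision described here can be inspected without a live server by preparing, but not sending, the request from the test script. On affected versions both headers end up on the prepared request; the exact behavior may vary between requests releases:

```python
import requests

# Prepare the same request as the test script without sending it,
# so the headers requests would emit can be inspected directly.
req = requests.Request(
    "POST",
    "http://127.0.0.1:8801/",
    data=(chunk for chunk in (b"lol", b"man")),
    headers={"Content-Length": "6"},
).prepare()

print(req.headers.get("Transfer-Encoding"))  # 'chunked' (body length is unknown)
print(req.headers.get("Content-Length"))     # '6' (the user-supplied header)
```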
Currently, providing a generator and specifying a Content-Length is an error (it generates an invalid HTTP request), so this use case should not be used by anybody. That’s why I was thinking it would not break your users’ programs. Why generators? Data is not always provided by a file-like object. For example, I’d like to watch upload progress by yielding data chunk by chunk, etc. (otherwise I’d have to override read() methods).
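As an illustration of that use case (the file name, chunk size, and helper name below are made up for the example), a generator can report progress as it yields each chunk; requests sends such a body with Transfer-Encoding: chunked:

```python
import os

import requests


def file_chunks_with_progress(path, chunk_size=8192):
    """Yield a file's bytes chunk by chunk, reporting upload progress."""
    total = os.path.getsize(path)
    sent = 0
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            sent += len(chunk)
            print(f"uploaded {sent}/{total} bytes")
            yield chunk


# Generator bodies have no known length, so requests uses chunked transfer encoding.
requests.post("http://127.0.0.1:8801/", data=file_chunks_with_progress("payload.bin"))
```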