runtime: HttpWebRequest.ReadWriteTimeout is ignored - reading never times out
.NET Core 2.1.7 on Windows 7 x64
using System.Net;

var httpWebRequest = (HttpWebRequest)WebRequest.Create("http://mirror.zetup.net/ubuntu-cd/18.04.1/ubuntu-18.04.1-desktop-amd64.iso");
httpWebRequest.Timeout = 2000;       // applies to GetResponse()
httpWebRequest.ReadWriteTimeout = 1; // supposed to apply to reads/writes on the response stream
var webResponse = httpWebRequest.GetResponse();
var responseStream = webResponse.GetResponseStream();
int totalBytes = 0, readBytes;
do
{
    var buffer = new byte[16384];
    readBytes = responseStream.Read(buffer, 0, buffer.Length);
    totalBytes += readBytes;
} while (readBytes != 0);
Based on the docs I would expect the responseStream.Read() call to time out, but it never does. responseStream.CanTimeout returns false - so at least it’s honest!
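For comparison, a small sketch (not part of the original report) of how a per-read timeout is normally applied on streams that do support one; because CanTimeout is false here, setting ReadTimeout on this stream would simply throw InvalidOperationException:

using System;

// Hypothetical illustration: Stream.ReadTimeout only works when CanTimeout is true.
if (responseStream.CanTimeout)
{
    responseStream.ReadTimeout = 1000; // milliseconds per Read() call
}
else
{
    // This is the branch the repro above hits: no read timeout is available at all.
    Console.WriteLine("Stream does not support timeouts");
}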
In this case the entire file (1953349632 bytes) is eventually downloaded, but in a production case the server has not closed the connection for nearly a month now and the client (.NET Core 2.0) is still trying to read from it!
About this issue
- Original URL
- State: closed
- Created 5 years ago
- Reactions: 5
- Comments: 16 (7 by maintainers)
Finally, this issue is being tackled. I’ve been saying this for years: even .NET Core 1.1 had inconsistent timeout behavior between the HTTP handler implementations (CurlHandler / WinHttpHandler), and I thought the new API along with SocketsHttpHandler was going to be the end-all solution.
Anyway, what @loop-evgeny is describing is the same issue we face. Reading large chunks of data is a common operation, and if there is a sudden disconnect (no RST/FIN packets at the TCP layer) the read operation will hang.
The immediate solution would seem to be calling SendAsync/GetAsync with a cancellation token, but unfortunately that moves the handling of those timeouts to the consumer. To make matters worse, the individual read/write operations are often not issued by the calling code at all but handled entirely by an underlying client library. The Amazon AWS SDK S3 client is a good example, where a connection will hang forever on a network disruption that does not produce a RST/FIN packet in the underlying network layer.
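For illustration, a minimal sketch (not code from the thread; the method name, URL handling and timeout value are placeholders) of that cancellation-token approach. It assumes the handler honors the token for the whole buffered request, and it only helps when the calling code owns the HTTP call in the first place:

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static async Task<byte[]> DownloadWithOverallTimeoutAsync(string url, TimeSpan overallTimeout)
{
    using (var client = new HttpClient())
    using (var cts = new CancellationTokenSource(overallTimeout))
    // The token is intended to bound the whole GetAsync call (headers plus the
    // buffered body). An SDK that issues its HTTP calls internally gives the
    // caller no place to pass such a token, which is the problem described above.
    using (var response = await client.GetAsync(url, HttpCompletionOption.ResponseContentRead, cts.Token))
    {
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsByteArrayAsync();
    }
}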
I’m not sure what you mean by “more fine-grained” control, since we currently seem to have no control at all.
The scenario is that we try to download a file from a server (which we do not control): the connection is established quickly enough and the server starts sending data, but later it stops sending data while keeping the connection open. The Kestrel server code I posted demonstrates this nicely (just replace the Thread.Sleep with an infinite wait). We have such a case in production where the connection has been open since 13-Dec-2018. I would expect it to throw an exception after the specified timeout, so our application can catch that and retry later.

What are the “other” timeouts? There are none.
It seems that particular timeout is not available on HttpClient. Can you use the other timeouts instead? What is your scenario where you need to set the ReadWriteTimeout - is it just about more fine-grained control over detection of “dead” connections?

Just tested it using a Kestrel server, configured like this:
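Roughly, a sketch of such a server (not the exact code from the thread; the port and payload size are illustrative) that sends some bytes and then stalls without closing the connection:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public static class StallingServer
{
    public static void Main()
    {
        new WebHostBuilder()
            .UseKestrel()
            .UseUrls("http://localhost:5000") // illustrative port
            .Configure(app => app.Run(async context =>
            {
                context.Response.ContentType = "application/octet-stream";
                // Send some data so the client sees headers and starts reading...
                await context.Response.Body.WriteAsync(new byte[16384], 0, 16384);
                await context.Response.Body.FlushAsync();
                // ...then stall forever without closing the connection
                // (the "replace Thread.Sleep with an infinite wait" variant).
                await Task.Delay(Timeout.Infinite);
            }))
            .Build()
            .Run();
    }
}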
Reading from HttpClient’s response stream does not time out. So this does not solve the original problem.
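Roughly what that test looks like on the client side (a sketch with a placeholder URL, not the exact code from the thread):

using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task ReadFromStallingServerAsync()
{
    using (var client = new HttpClient { Timeout = TimeSpan.FromSeconds(2) })
    using (var response = await client.GetAsync("http://localhost:5000",
                                                HttpCompletionOption.ResponseHeadersRead))
    using (var stream = await response.Content.ReadAsStreamAsync())
    {
        var buffer = new byte[16384];
        int read;
        do
        {
            // Per the observation above, this read never times out once the server stalls.
            read = await stream.ReadAsync(buffer, 0, buffer.Length);
        } while (read != 0);
    }
}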
OK, I read the discussion in the linked issue:
Err… yeah! It’s hard for me to see how you reached the conclusion that a library silently ignoring what the user code explicitly tells it to do is better than refusing to do it (by throwing).
Anyway, regardless of how it’s handled, the lack of any read timeout is a severe limitation, to say the least. I can’t see how a bug like this, which allows code to just get “stuck” forever, made it out of beta. I really hope to see this fixed properly, and I fear that if ReadWriteTimeout is documented to be a no-op, that will reduce the chances of it ever happening.
Is there a reasonable workaround for this? Is there an HTTP client class in .NET Core that properly supports timeouts?
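One possible workaround (a sketch, not something the thread confirms; it assumes the underlying stream actually honors cancellation of an in-flight ReadAsync) is to give each read its own cancellation deadline:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static async Task<int> ReadWithTimeoutAsync(Stream stream, byte[] buffer, TimeSpan timeout)
{
    // A fresh CancellationTokenSource per read acts as a per-read deadline.
    using (var cts = new CancellationTokenSource(timeout))
    {
        try
        {
            return await stream.ReadAsync(buffer, 0, buffer.Length, cts.Token);
        }
        catch (OperationCanceledException)
        {
            // The exact exception surfaced on cancellation can vary by stream implementation.
            throw new TimeoutException($"Read did not complete within {timeout}.");
        }
    }
}

Whether the pending read is actually abandoned (and the connection torn down) when the token fires depends on the handler, so this is at best a mitigation, not a substitute for a real ReadWriteTimeout.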