hyper: How do you write a resilient HTTP Hyper server that does not crash with "too many open files"?
The Hello world example at https://hyper.rs/ is vulnerable to denial-of-service attacks if the maximum number of allowed open file descriptors is not high enough. I was already in contact with @seanmonstar about this, and he does not consider it a security issue, so I’m posting this publicly.
Steps to reproduce:
- Implement the Hello world example from https://hyper.rs/ (a rough sketch is included after these steps)
- Set a very low file descriptor limit to 50 to provoke the crash early:
ulimit -n 50
- Start the Hello world server:
cargo run
- Open another shell and attack the server with Apache bench (100 concurrent requests):
ab -c 100 -n 10000 http://localhost:3000/
That will crash the server with an IO error: Io(Error { repr: Os { code: 24, message: "Too many open files" } }).
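For reference, the Hello world service from step 1 looked roughly like the following at the time. This is a sketch of the hyper 0.11-era example reconstructed from memory, not a verbatim copy; the HelloWorld name is just a placeholder.

extern crate futures;
extern crate hyper;

use futures::future::{ok, FutureResult};
use hyper::header::ContentLength;
use hyper::server::{Http, Request, Response, Service};

const PHRASE: &'static str = "Hello, World!";

struct HelloWorld;

impl Service for HelloWorld {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = FutureResult<Response, hyper::Error>;

    fn call(&self, _req: Request) -> Self::Future {
        // Answer every request with a static body.
        ok(Response::new()
            .with_header(ContentLength(PHRASE.len() as u64))
            .with_body(PHRASE))
    }
}

fn main() {
    let addr = "127.0.0.1:3000".parse().unwrap();
    let server = Http::new().bind(&addr, || Ok(HelloWorld)).unwrap();
    server.run().unwrap();
}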
A naive solution is to just restart the server all the time with a loop:
extern crate hyper;

use hyper::server::Http;

fn main() {
    loop {
        let addr = "127.0.0.1:3000".parse().unwrap();
        // `Proxy` is this server's Service implementation (not shown here).
        let server = Http::new().bind(&addr, || Ok(Proxy)).unwrap();
        match server.run() {
            Err(e) => println!("Error: {:?}", e),
            Ok(_) => {}
        }
    }
}
This is not at all ideal, because every restart means a short period of downtime and all client connections are reset.
I checked the behavior of other server software, Varnish in this case. With a low file descriptor limit it just waits until it has descriptors available before accepting connections.
Can Hyper do the same? How do you run your Hyper servers in production to prevent a server crash when file descriptors run out?
About this issue
- State: closed
- Created 7 years ago
- Comments: 25 (17 by maintainers)
Using tk-listen I can now mitigate the problem: the server does not crash anymore when it has only a few file descriptors with ulimit -n 50. Yay! Here is the full source for a resilient Hyper echo server:
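(The original full source is not reproduced in this excerpt; the following is a rough sketch of the tk-listen pattern it relies on, assuming the tokio-core-era ListenExt API with sleep_on_error and listen, signatures from memory, and a plain TCP echo standing in for the hyper service.)

extern crate futures;
extern crate tk_listen;
extern crate tokio_core;
extern crate tokio_io;

use std::time::Duration;

use futures::{Future, Stream};
use tk_listen::ListenExt;
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;
use tokio_io::io::copy;
use tokio_io::AsyncRead;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let addr = "127.0.0.1:3000".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle).unwrap();

    let server = listener
        .incoming()
        // On an accept error (e.g. EMFILE) pause briefly instead of failing
        // the stream, so the server survives running out of descriptors.
        .sleep_on_error(Duration::from_millis(100), &handle)
        .map(|(socket, _peer_addr)| {
            // Echo everything back; a hyper service would be plugged in here.
            let (reader, writer) = socket.split();
            copy(reader, writer)
                .map(|_| ())
                .map_err(|e| println!("connection error: {}", e))
        })
        // Drive at most this many connections at the same time.
        .listen(1000);

    core.run(server).unwrap();
}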
Now calling ab -c 1000 -n 100000 http://localhost:3000/ in a new shell works, but it does not finish: the last ~200 of the 100k requests never complete, and at some point ab exits with an error. While ab is in progress I can successfully reach the server manually in my browser, so this might not be a big problem. And it is certainly an improvement that the server no longer crashes 😃
@seanmonstar: what do you think of using tk-listen as a dependency in Hyper and patching server/mod.rs to do something similar?
An alternative solution to this is using std::net::TcpListener to accept connections, as in the Tokio multi threaded server example: https://github.com/tokio-rs/tokio-core/blob/master/examples/echo-threads.rs
The advantage is that the accept loop is under your control, so accept errors can be handled explicitly (see the sketch below). The downside is that you have more code in your server that you need to reason about and maintain.
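As an illustration of that approach, here is a rough sketch (not the poster’s code, and simplified to a thread-per-connection echo rather than the multi-reactor example linked above): the listener is a plain std::net::TcpListener, so the accept loop is ordinary synchronous code whose errors you can handle however you like.

extern crate tokio_core;
extern crate tokio_io;

use std::net;
use std::thread;

use tokio_core::net::TcpStream;
use tokio_core::reactor::Core;
use tokio_io::io::copy;
use tokio_io::AsyncRead;

fn main() {
    let listener = net::TcpListener::bind("127.0.0.1:3000").unwrap();

    // Because we own this loop, an accept error (such as EMFILE) is just a
    // value to log or back off on -- it cannot end the whole server the way
    // an error on the `incoming` stream does.
    for conn in listener.incoming() {
        match conn {
            Ok(socket) => {
                thread::spawn(move || {
                    let mut core = Core::new().unwrap();
                    let handle = core.handle();
                    let socket = TcpStream::from_stream(socket, &handle).unwrap();
                    let (reader, writer) = socket.split();
                    // Echo the connection; a hyper service would go here instead.
                    let _ = core.run(copy(reader, writer));
                });
            }
            Err(e) => println!("accept error: {}", e),
        }
    }
}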
The tinyhttp example in tokio-core has the same vulnerability; I filed an issue there as well.
Maybe this is not quite to the point, but the thread reminded me of https://crates.io/crates/tk-listen
Just to clarify tailhook’s comment: this code right here will spin the accept loop hard, since the EMFILE error doesn’t remove the socket from the acceptor’s queue. You might want to sleep the thread for a few milliseconds or something there.
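For illustration, sleeping on accept errors in a blocking accept loop (such as the one sketched above) might look like this; the 10 ms delay is an arbitrary choice, not a value from the thread:

use std::net::{SocketAddr, TcpListener, TcpStream};
use std::thread;
use std::time::Duration;

// Accept connections forever; on any accept error (EMFILE included), back off
// briefly instead of spinning through the same error at full speed.
fn accept_loop<F>(listener: &TcpListener, mut handle_connection: F)
where
    F: FnMut(TcpStream, SocketAddr),
{
    loop {
        match listener.accept() {
            Ok((socket, peer_addr)) => handle_connection(socket, peer_addr),
            Err(e) => {
                println!("accept error: {}", e);
                thread::sleep(Duration::from_millis(10));
            }
        }
    }
}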
What about filtering out the Errs?
I’m trying the or_else() combinator as @carllerche recommended. Starting from:
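The starting point is not shown in this excerpt; it was presumably a manual accept loop along these lines (a sketch assuming hyper 0.11’s bind_connection and the HelloWorld service sketched earlier, not the poster’s exact code):

extern crate futures;
extern crate hyper;
extern crate tokio_core;

use futures::Stream;
use hyper::server::Http;
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let addr = "127.0.0.1:3000".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle).unwrap();
    let http = Http::new();

    let server = listener.incoming().for_each(|(socket, peer_addr)| {
        // Hand each accepted connection to hyper.
        // `HelloWorld` is the Service from the hello-world sketch above.
        http.bind_connection(&handle, socket, peer_addr, HelloWorld);
        Ok(())
    });

    // If incoming() yields an Err (e.g. "too many open files"), for_each()
    // returns that error and the whole server future ends right here.
    core.run(server).unwrap();
}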
Attempt 1: just insert the or_else() and see what the compiler tells us:
I was hoping the compiler would give me a hint about the return type I have to produce in my closure, but that is not helpful. I’m not using () anywhere, so what is it talking about? Looking at the docs at https://docs.rs/futures/0.1.16/futures/stream/trait.Stream.html#method.or_else there is no example, and the signature only says I need to return a U which is IntoFuture<Item = Self::Item>.
Attempt 2: Return an empty Ok tuple, as the for_each() does:
Attempt 3: Return an Err:
At least it compiles!!!
But it does not solve the problem: returning an error here bubbles up and crashes my server just as before. The only difference is the additional print statement.
Attempt 4: Return an empty future, assuming it does nothing and Incoming continues with the next connection attempt:
That compiles, but as soon as the first IO error happens the server does not respond anymore. Looking at the docs at https://docs.rs/futures/0.1.16/futures/future/struct.Empty.html, it says "A future which is never resolved." Aha, so that is probably what blocks my server. So this is not really an empty future and should rather be renamed to "AlwaysBlockingDoingNothing".
Attempt 5: Let’s try the or_else() after the for_each():
This compiles, but does not swallow the error. The server still crashes as before; the only difference is the additional print statement.
At this point I’m running out of ideas. How can I swallow the IO error and make the incoming stream continue?
@seanmonstar that contract isn’t actually accurate. A Stream returning Err is implementation specific, see https://github.com/alexcrichton/futures-rs/issues/206. The Incoming stream is intended to allow polling after an error is returned; None represents the final state.
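Building on that, one way to swallow accept errors (not code from the thread, just a sketch relying on the fact that the incoming stream may be polled again after returning Err) is to turn every Err into a None with then() and drop the Nones with filter_map(). Note that on EMFILE this still retries immediately, so in practice you would also want the back-off that tk-listen provides.

extern crate futures;
extern crate tokio_core;

use std::io;

use futures::Stream;
use tokio_core::net::TcpListener;
use tokio_core::reactor::Core;

fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let addr = "127.0.0.1:3000".parse().unwrap();
    let listener = TcpListener::bind(&addr, &handle).unwrap();

    let server = listener
        .incoming()
        // then() sees every Ok and Err from the accept stream; errors are
        // logged and mapped to None, so the resulting stream never fails.
        .then(|result| {
            let next = match result {
                Ok(pair) => Some(pair),
                Err(e) => {
                    println!("accept error: {}", e);
                    None
                }
            };
            Ok::<_, io::Error>(next)
        })
        // Drop the Nones so downstream only sees real connections.
        .filter_map(|pair| pair)
        .for_each(|(_socket, peer_addr)| {
            // Serve the connection here (e.g. hand it to hyper).
            println!("accepted connection from {}", peer_addr);
            Ok(())
        });

    core.run(server).unwrap();
}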