hyper: Requests to hyper server eventually start failing.
I wrote a basic server application using hyper 0.9.10 that prints the IP, referrer, and body of each request to stdout (which I then redirect into a file):
```rust
extern crate hyper;

use std::env;
use std::io::Read;

use hyper::method::Method::Post;
use hyper::server::{Server, Request, Response};
use hyper::header::{Referer, AccessControlAllowOrigin};
use hyper::net::Openssl;

fn main() {
    // Expected arguments: <address> <certificate path> <key path>
    let mut args = env::args().skip(1);
    let addr = args.next().unwrap();
    let cert = args.next().unwrap();
    let key = args.next().unwrap();

    let ssl = Openssl::with_cert_and_key(cert, key).unwrap();

    let _ = Server::https((addr.as_str(), 443), ssl).unwrap()
        .handle(|req: Request, mut res: Response| {
            // Answer every request with an empty response.
            res.headers_mut().set(AccessControlAllowOrigin::Any);
            let _ = res.send(b"");

            // For POSTs, log "<peer address>\t<referrer>\t<body>" to stdout.
            if req.method == Post {
                let (addr, _, headers, _, _, mut body) = req.deconstruct();
                headers.get::<Referer>().map(|&Referer(ref referer)| {
                    let mut buffer = String::new();
                    let _ = body.read_to_string(&mut buffer);
                    println!("{}\t{}\t{}", addr, referer, buffer);
                });
            }
        });
}
```
Everything works as expected at first, but some hours after the server starts, all requests to it begin to fail (and fail very slowly). When this happens, the output of `time curl -k ...` looks like this:
```
curl: (35) Unknown SSL protocol error in connection to logbook.pyret.org:443

real    1m43.616s
user    0m0.013s
sys     0m0.000s
```
Restarting the server application corrects this issue and requests are handled within 100ms, tops.
I’m flummoxed.
About this issue
- State: closed
- Created 8 years ago
- Comments: 44 (11 by maintainers)
My sincerest apologies: I found the reason for the failures, and it was a segfault in another library. I'm sorry for making assumptions without any proof.
Now here’s a surprise: 3 days into my Rustls evaluation, the process froze again. Everything looked exactly the same as it did with OpenSSL!
I don’t know what this means. Maybe the issue is with Hyper after all, maybe it’s with the operating system, or maybe OpenSSL and Rustls just happen to have the same bug. No idea.
Unfortunately, when I switched to Rustls, I also upgraded to the latest Hyper release, which means I lost all the extra logging from my special Hyper fork. I’ll re-add the logging and will report back once I know more.
At the time of my last post, I deployed a new version with the following changes:
That version froze yesterday. This rules out my suspicion/hope that setting the timeouts would solve the problem.
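For reference, the timeouts in question are the setters on hyper 0.9's synchronous `Server`. A minimal sketch of how they are applied, reusing the `addr`/`ssl` setup from the snippet at the top of this issue (the durations are placeholders, not the values actually deployed):

```rust
// Sketch only: set socket timeouts on hyper 0.9's Server before calling handle().
use std::time::Duration;

let mut server = Server::https((addr.as_str(), 443), ssl).unwrap();

// Bound how long a single request read or response write may block.
server.set_read_timeout(Some(Duration::from_secs(30)));
server.set_write_timeout(Some(Duration::from_secs(30)));

let _ = server.handle(|_: Request, res: Response| {
    let _ = res.send(b"");
}).unwrap();
```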
After a careful examination of the logs I learned some more things:
- `Worker::handle_connection`
- ("keep_alive loop ending for …" is logged)
- `Worker`

My plan now is to add more logging to my Hyper fork and deploy that later today. I'll check back once I learn more.
@e-oz looks like you can pass a `Timeouts` config struct to `Iron::listen_with`: http://ironframework.io/doc/iron/struct.Timeouts.html
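A rough sketch of that suggestion, assuming Iron's 0.4-era API where `listen_with` takes a bind address, a thread count, a `Protocol`, and an optional `Timeouts`; the exact signature and the `Protocol` type have shifted between Iron releases, so check the linked docs for your version. The durations below are placeholders.

```rust
extern crate iron;

use std::time::Duration;

use iron::prelude::*;
use iron::{Protocol, Timeouts, status};

fn handler(_: &mut Request) -> IronResult<Response> {
    Ok(Response::with((status::Ok, "ok")))
}

fn main() {
    let timeouts = Timeouts {
        // Drop idle keep-alive connections instead of holding them open forever.
        keep_alive: Some(Duration::from_secs(5)),
        // Bound per-request socket reads and writes.
        read: Some(Duration::from_secs(30)),
        write: Some(Duration::from_secs(30)),
    };

    Iron::new(handler)
        .listen_with("0.0.0.0:8080", 8 /* threads */, Protocol::Http, Some(timeouts))
        .unwrap();
}
```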