quinn: unexpected low throughput
Hi, thanks for the hard work.

I am trying to implement a proxy client/server pair using quinn. I got it working quickly thanks to quinn’s succinct APIs, but the throughput is quite low.

The client is running on macOS and the server on Linux. The RTT reported by ping is 170ms, while quinn::Connection::rtt() reports 190ms. I have another C++ proxy implementation using TCP, which runs a lot faster over the same network link. I can observe that the server is quite fast at writing data to the client, with each write around 6k to 8k bytes most of the time, but the client reads only 1k to 3k bytes per attempt and never catches up with the server’s write speed. I tried setting SND_BUF/RCV_BUF up to 3 megabytes, but nothing changed, and I don’t know what to investigate next. I guess there may be some mistakes in my code. The client code for relaying traffic is as follows:
pub async fn serve(&mut self, local_conn_receiver: &mut Receiver<TcpStream>) -> Result<()> {
    let remote_conn = &self.remote_conn.as_ref().unwrap();
    // accept local connections and build a tunnel to remote for accepted connections
    while let Some(local_conn) = local_conn_receiver.recv().await {
        match remote_conn.open_bi().await {
            Ok((remote_send, remote_recv)) => {
                tokio::spawn(Self::handle_stream(local_conn, remote_send, remote_recv));
            }
            Err(e) => {
                error!("failed to open_bi on remote connection: {}", e);
                break;
            }
        }
    }
    info!("quit!");
    Ok(())
}

async fn handle_stream(
    mut local_conn: TcpStream,
    mut remote_send: SendStream,
    mut remote_recv: RecvStream,
) -> Result<()> {
    info!("open new stream, id: {}", remote_send.id().index());
    let mut local_read_result = ReadResult::Succeeded;
    loop {
        let (mut local_read, mut local_write) = local_conn.split();
        let local2remote = Self::local_to_remote(&mut local_read, &mut remote_send);
        let remote2local = Self::remote_to_local(&mut remote_recv, &mut local_write);
        tokio::select! {
            Ok(result) = local2remote, if !local_read_result.is_eof() => {
                local_read_result = result;
            }
            Ok(result) = remote2local => {
                if let ReadResult::EOF = result {
                    info!("quit stream after hitting EOF, stream_id: {}", remote_send.id().index());
                    break;
                }
            }
            else => {
                info!("quit unexpectedly, stream_id: {}", remote_send.id().index());
                break;
            }
        };
    }
    Ok(())
}

async fn local_to_remote<'a>(
    local_read: &'a mut ReadHalf<'a>,
    remote_send: &'a mut SendStream,
) -> Result<ReadResult> {
    let mut buffer = vec![0_u8; 8192];
    let len_read = local_read.read(&mut buffer[..]).await?;
    if len_read > 0 {
        remote_send.write_all(&buffer[..len_read]).await?;
        Ok(ReadResult::Succeeded)
    } else {
        remote_send.finish().await?;
        Ok(ReadResult::EOF)
    }
}

async fn remote_to_local<'a>(
    remote_recv: &'a mut RecvStream,
    local_write: &'a mut WriteHalf<'a>,
) -> Result<ReadResult> {
    let mut buffer = vec![0_u8; 8192];
    let result = remote_recv.read(&mut buffer[..]).await?;
    if let Some(len_read) = result {
        local_write.write_all(&buffer[..len_read]).await?;
        local_write.flush().await?;
        Ok(ReadResult::Succeeded)
    } else {
        Ok(ReadResult::EOF)
    }
}
About this issue
- State: closed
- Created 3 years ago
- Comments: 19 (4 by maintainers)
Hi, @Ralith
I found the culprit that caused the stalls in my previous tests: it is tokio::select!. I used select! to schedule the code for reading and writing data between the TCP connections and the Quinn streams, but the code that reads from the TCP streams stalls at times. According to the docs on tokio::select!, the branches run concurrently on the same task, not in parallel, so one branch can keep the others from making progress. I switched to tokio::spawn, the issue is gone, and my code with Quinn starts to fly!

It was my fault, I didn’t think about that, and the comparison was indeed not fair. I re-ran the test by putting a TCP server with encryption in front of the original proxy server; now the one with Quinn is ~1.8 times faster than the TCP version, and it costs slightly more CPU than the TCP version. I still haven’t had time to run a strict comparison, but from what I’ve seen so far, the Quinn version is clearly better.
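Roughly, the fix looks like the sketch below: each direction of the relay gets its own spawned task instead of sharing one select! loop. This is only a minimal sketch, assuming an anyhow-style Result, tokio’s into_split(), and the async finish() of the quinn version I was using; the names are illustrative rather than my exact code.

    use anyhow::Result;
    use quinn::{RecvStream, SendStream};
    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    async fn handle_stream(
        local_conn: TcpStream,
        mut remote_send: SendStream,
        mut remote_recv: RecvStream,
    ) -> Result<()> {
        // Owned halves so each direction can move into its own task.
        let (mut local_read, mut local_write) = local_conn.into_split();

        // local -> remote: runs on its own task, so it can no longer be
        // stalled by the other direction the way a select! branch could.
        let upload = tokio::spawn(async move {
            let mut buffer = vec![0_u8; 8192];
            loop {
                let len_read = local_read.read(&mut buffer[..]).await?;
                if len_read == 0 {
                    remote_send.finish().await?;
                    return Ok::<(), anyhow::Error>(());
                }
                remote_send.write_all(&buffer[..len_read]).await?;
            }
        });

        // remote -> local
        let download = tokio::spawn(async move {
            let mut buffer = vec![0_u8; 8192];
            while let Some(len_read) = remote_recv.read(&mut buffer[..]).await? {
                local_write.write_all(&buffer[..len_read]).await?;
            }
            Ok::<(), anyhow::Error>(())
        });

        let _ = tokio::join!(upload, download);
        Ok(())
    }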
Another thing that surprised me was that the Quinn version with BBR congestion control performed A LOT better than the TCP version. On a network with 160ms RTT and 7.2% packet loss, the TCP version runs at 0.45MiB/s while the Quinn version runs at 1.6MiB/s, more than 3 times faster.
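For anyone who wants to try it, switching quinn to BBR is just a transport-config change, roughly as in the sketch below (written against the quinn 0.8-era API; the congestion module and method names may differ slightly between versions):

    use std::sync::Arc;
    use quinn::congestion::BbrConfig;
    use quinn::TransportConfig;

    // Build a TransportConfig that uses BBR instead of the default controller.
    // Attach the returned config to the ClientConfig/ServerConfig before
    // building the endpoint.
    fn bbr_transport_config() -> TransportConfig {
        let mut transport = TransportConfig::default();
        transport.congestion_controller_factory(Arc::new(BbrConfig::default()));
        transport
    }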
Finally, I must say thank you to all of you, the authors of Quinn, for the hard work.