mysql_async: Conn::new sometimes fails to make progress under load

Try running the following test:

#[tokio::test(threaded_scheduler)]
async fn can_handle_the_pressure() {
    use futures_util::stream::StreamExt;
    {
        let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<()>();
        for _ in 0..500 {
            let tx = tx.clone();
            tokio::spawn(async move {
                let c = Conn::new(get_opts()).await.unwrap();
                let c = c.drop_query("SELECT 1").await.unwrap();
                c.disconnect().await.unwrap();
                drop(tx);
            });
        }
        drop(tx);
        // see that all the tx's eventually complete
        assert_eq!(rx.next().await, None);
    }
    eprintln!("MID");
    {
        let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<()>();
        for _ in 0..500 {
            let tx = tx.clone();
            tokio::spawn(async move {
                eprintln!("GETCONN");
                let c = Conn::new(get_opts()).await.unwrap();
                eprintln!("GOTCONN");
                let c = c.drop_query("SELECT 1").await.unwrap();
                c.disconnect().await.unwrap();
                drop(tx);
            });
        }
        drop(tx);
        // see that all the tx's eventually complete
        assert_eq!(rx.next().await, None);
    }
    eprintln!("DONE");
}

Against a local MySQL with max_connections = 1000, it passes just fine. Now run some CPU-heavy program that takes up most of the cores on the machine, and run the test again. It will frequently hang between printing MID and printing DONE. Digging a bit deeper, you will see that GETCONN gets printed 500 times, but GOTCONN only N < 500 times (the exact value of N varies). Some further digging reveals that the issue lies in handle_handshake. Specifically, if you put a print statement above and below the call to read_packet().await, you'll see that the print before gets printed 2*500-N times, and the print after 2*500-2*N times. The explanation for this is that every connection goes through Conn::new twice: once as a "plain" connection, and once as an "upgraded" socket connection. But some (500-N) plain connections get "stuck" on that .await. They therefore never get upgraded, so they fail to hit the second print twice, and the first print once (the pass they would have made as a socket connection).
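
For concreteness, the instrumentation amounts to something like the following. This is a sketch of my local hack, not mysql_async's actual code; traced and its label are made up:

// Sketch of the instrumentation only, not part of mysql_async. Wrapping the
// awaited future lets you count how many tasks enter vs. leave the await.
use std::future::Future;

async fn traced<F: Future>(label: &str, fut: F) -> F::Output {
    eprintln!("before {}", label); // printed 2*500 - N times on a hanging run
    let out = fut.await;           // stuck plain connections never get past this
    eprintln!("after {}", label);  // printed 2*500 - 2*N times on a hanging run
    out
}

// Inside handle_handshake this would wrap the suspect call roughly as:
//     let packet = traced("read_packet", self.read_packet()).await;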

In any case, the issue for the stuck connections is that read_packet().await never returns for plain connections. Reading through the code, it appears there are only three ways this can happen: tokio::net::TcpStream is broken (unlikely), tokio_util::codec::Framed is broken (possible), or mysql_common::proto::codec::PacketCodec either blocks (unlikely) or returns None when it should return Some (a likely culprit). How any of these relate to load, I have no idea. Why it is necessary to first spin up and drop 500 connections, I also have no idea, but that's what I'm seeing. That sort of points at a tokio bug, though that seems weird too.
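
A cheap way to confirm that it really is this await that never resolves (and not something further down) is to wrap the future in a timeout. Here is a minimal sketch using plain tokio::time::timeout; the helper name is mine:

// Diagnostic sketch: turn a silently stalled read into a visible report
// instead of an indefinite hang.
use std::future::Future;
use std::time::Duration;
use tokio::time::timeout;

async fn or_report_stall<F: Future>(fut: F) -> Option<F::Output> {
    match timeout(Duration::from_secs(30), fut).await {
        Ok(out) => Some(out),
        Err(_elapsed) => {
            eprintln!("read made no progress for 30s");
            None
        }
    }
}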

Well, I figured I should write up my findings no matter what, so that hopefully we can get to the bottom of it. I ran into this when running code on EC2 which is infamous for having a lot of scheduling noise, similar to that of a loaded machine, so it’s probably worth trying to fix.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 21 (20 by maintainers)

Most upvoted comments

Note that system-wide keepalive also fixes the issue 🙃:

(image)

Hi.

I was able to reproduce the issue using the test above. I've also tried the synchronous mysql driver and it hangs after 5 minutes in my environment (arch+mariadb14 with max_connections=1). Direct UNIX socket connections (using OptsBuilder::socket) are not affected by this issue as far as I can see.

Looks like this issue goes away for both the synchronous and asynchronous drivers if keepalive is enabled using OptsBuilder::tcp_keepalive. I've measured the delay between invocations of date for 1-second and 10-second keepalive. That shows that keepalive messages somehow wake the hung connections up. Here are the histograms for the delays:
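
For anyone who just needs to get unstuck, the two workarounds look roughly like this. The builder API shown is the one from the mysql_async version used in this issue (newer versions use a consuming builder), and the socket path is only an example:

// Workaround sketch; adjust the builder calls to your mysql_async version.
// Either change made the hang go away in my testing.
use mysql_async::{Conn, Opts, OptsBuilder};

async fn connect_with_workarounds(base: Opts) -> Conn {
    let mut builder = OptsBuilder::from_opts(base);

    // 1) Enable TCP keepalive (time in milliseconds): the periodic keepalive
    //    probes appear to wake up the stalled reads.
    builder.tcp_keepalive(Some(1_000_u32));

    // 2) Or bypass TCP entirely and use the server's UNIX socket, which does
    //    not exhibit the hang (replace the path with your server's socket).
    // builder.socket(Some("/run/mysqld/mysqld.sock".to_string()));

    Conn::new(builder).await.unwrap()
}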

histograms

delays for 1 sec keepalive

(screenshot: 2020-02-17, 1:14:34 PM)

delays for 10 sec keepalive

(screenshot: 2020-02-17, 1:14:45 PM)

Stack traces:

stack trace for `mysql_async`

(gdb) bt
#0  0x00007f6126bab70e in epoll_wait () from /usr/lib/libc.so.6
#1  0x00005639e390f614 in mio::sys::unix::epoll::Selector::select (self=<optimized out>, evts=0x7f61269621a8, awakener=..., timeout=<error reading variable: Cannot access memory at address 0x0>)
    at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.6.21/src/sys/unix/epoll.rs:72
#2  0x00005639e390e135 in mio::poll::Poll::poll2 (self=0x7f61200041c0, events=0x7f61269621a8, timeout=..., interruptible=<optimized out>)
    at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.6.21/src/poll.rs:1178
#3  mio::poll::Poll::poll1 (self=0x7f61200041c0, events=0x7f61269621a8, timeout=..., interruptible=<optimized out>)
    at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.6.21/src/poll.rs:1139
#4  0x00005639e390dce3 in mio::poll::Poll::poll (self=0x7f6120000e50, events=0x400, timeout=...) at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/mio-0.6.21/src/poll.rs:1010
#5  0x00005639e38fcb43 in tokio::io::driver::Driver::turn (self=0x7f61269621a8, max_wait=...) at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/io/driver/mod.rs:107
#6  0x00005639e38fd070 in <tokio::io::driver::Driver as tokio::park::Park>::park (self=0x7f6120000e50) at /home/set/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.2.11/src/io/driver/mod.rs:176
#7  0x00005639e38279da in <tokio::time::driver::Driver<T> as tokio::park::Park>::park ()
#8  0x00005639e386ac9c in tokio::runtime::basic_scheduler::SchedulerPriv::tick ()
#9  0x00005639e386bc0a in tokio::runtime::basic_scheduler::BasicScheduler<P>::block_on ()
#10 0x00005639e383eeed in tokio::runtime::context::enter ()
#11 0x00005639e3844f79 in mysql_async::conn::test::too_many_error ()
#12 0x00005639e38a934f in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
#13 0x00005639e39ea72a in __rust_maybe_catch_panic () at src/libpanic_unwind/lib.rs:78
#14 0x00005639e38c4607 in std::panicking::try () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:270
#15 std::panic::catch_unwind () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:394
#16 test::run_test_in_process () at src/libtest/lib.rs:567
#17 test::run_test::run_test_inner::{{closure}} () at src/libtest/lib.rs:474
#18 0x00005639e389daf6 in std::sys_common::backtrace::__rust_begin_short_backtrace () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/sys_common/backtrace.rs:136
#19 0x00005639e38a1e76 in std::thread::Builder::spawn_unchecked::{{closure}}::{{closure}} () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/thread/mod.rs:469
#20 <std::panic::AssertUnwindSafe<F> as core::ops::function::FnOnce<()>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:318
#21 std::panicking::try::do_call () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:292
#22 0x00005639e39ea72a in __rust_maybe_catch_panic () at src/libpanic_unwind/lib.rs:78
#23 0x00005639e38a28a6 in std::panicking::try () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panicking.rs:270
#24 std::panic::catch_unwind () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/panic.rs:394
#25 std::thread::Builder::spawn_unchecked::{{closure}} () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libstd/thread/mod.rs:468
#26 core::ops::function::FnOnce::call_once{{vtable-shim}} () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/libcore/ops/function.rs:232
#27 0x00005639e39d34cf in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
#28 0x00005639e39e98a0 in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
#29 std::sys_common::thread::start_thread () at src/libstd/sys_common/thread.rs:13
#30 std::sys::unix::thread::Thread::new::thread_start () at src/libstd/sys/unix/thread.rs:80
#31 0x00007f6126c9746f in start_thread () from /usr/lib/libpthread.so.0
#32 0x00007f6126bab3d3 in clone () from /usr/lib/libc.so.6

stack trace for `mysql`

#0  0x00007f2d460dc9bc in recv () from /usr/lib/libpthread.so.0
#1  0x00005565c458d8ba in std::sys::unix::net::Socket::recv_with_flags () at src/libstd/sys/unix/net.rs:224
#2  std::sys::unix::net::Socket::read () at src/libstd/sys/unix/net.rs:230
#3  std::sys_common::net::TcpStream::read () at src/libstd/sys_common/net.rs:293
#4  <std::net::tcp::TcpStream as std::io::Read>::read () at src/libstd/net/tcp.rs:573
#5  0x00005565c441474c in <std::io::buffered::BufReader<R> as std::io::Read>::read ()
#6  0x00005565c442dfa9 in <mysql_common::proto::sync_framed::MySyncFramed<T> as core::iter::traits::iterator::Iterator>::next ()
#7  0x00005565c43daa3d in mysql::conn::Conn::read_packet ()
#8  0x00005565c43e2424 in mysql::conn::Conn::connect ()
#9  0x00005565c43d8c9d in mysql::conn::Conn::new ()
#10 0x00005565c4421101 in std::sys_common::backtrace::__rust_begin_short_backtrace ()
#11 0x00005565c4421d99 in std::panicking::try::do_call ()
#12 0x00005565c459cf4a in __rust_maybe_catch_panic () at src/libpanic_unwind/lib.rs:78
#13 0x00005565c4431220 in core::ops::function::FnOnce::call_once{{vtable-shim}} ()
#14 0x00005565c4585a4f in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
#15 0x00005565c459c030 in <alloc::boxed::Box<F> as core::ops::function::FnOnce<A>>::call_once () at /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8/src/liballoc/boxed.rs:1022
#16 std::sys_common::thread::start_thread () at src/libstd/sys_common/thread.rs:13
#17 std::sys::unix::thread::Thread::new::thread_start () at src/libstd/sys/unix/thread.rs:80
#18 0x00007f2d460d246f in start_thread () from /usr/lib/libpthread.so.0
#19 0x00007f2d45fe63d3 in clone () from /usr/lib/libc.so.6

TBH I don't know why this is happening. I'll try to hang the C++ connector just to make sure that it's not on the MySQL side.

As a side note, I do not think this is a MySQL bug, as I have also seen this when working against msql-srv.

This seems strange, because I wasn't able to reproduce the issue with max_connections > n.

Any thoughts?

Here's another data point: for runs of the test that succeed, after MID, I observe 18000 calls to PacketCodec::decode and 5000 calls to PacketCodec::encode. That is 36 decodes per connection (presumably roughly 18 each for plain and socket) and 10 encodes per connection. On a run that hangs, I see 36 * H fewer calls to decode, and H/2 more calls to encode, where H is the number of hanging Conn::new calls. This suggests to me that decode is maybe never called for the hanging connections (how could that be?). I have no idea why encode would be called more.
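
To spell out the arithmetic (these are just the numbers from the observations above, nothing mysql_async-specific):

// Expected PacketCodec call counts for the second batch of 500 connections,
// given H connections hanging in Conn::new, per the observations above.
fn expected_codec_calls(hanging: usize) -> (usize, usize) {
    let conns = 500;
    let decodes_per_conn = 36; // 18000 / 500; roughly 18 each for plain and socket
    let encodes_per_conn = 10; // 5000 / 500
    let decodes = conns * decodes_per_conn - decodes_per_conn * hanging; // 36*H fewer
    let encodes = conns * encodes_per_conn + hanging / 2;                // H/2 more (!)
    (decodes, encodes)
}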

@blackbeam I may need your knowledge of the protocol here to figure out what might be causing this hang. Can you replicate the hang on a computer you control?