rusoto: Understanding slow performance of a Kinesis client

I am very new to Kinesis and very new to Rusoto, so take this report with a grain of salt! I am writing to a Kinesis stream with 20 shards (which should allow roughly 20 MB/s of writes at 1 MB/s per shard) and I cannot seem to get past 4-7 MB/s.

My code can be found here: https://github.com/tureus/log-sloth/blob/master/src/bin/log-sloth.rs#L236-L255

My server receives data on TCP, parses it with nom, and sends 500-record batches to my Kinesis stream. When I run without writing to Kinesis, just parsing and then dropping the data on the floor, I can get 300 MB/s+. I am also running on a big AWS host, an m4.4xlarge (16 vCPUs, 64 GB of RAM). CPU utilization is very low too; the one thread handling the client sits at a measly 5-15% CPU. I'm pretty confident the Kinesis client (or, more precisely, how I use it) is my current bottleneck.
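For context, the write path boils down to something like the sketch below. This is a minimal illustration of the batching pattern, not the exact code from the repo; the client constructor, the data field type, and the exact put_records signature all vary between rusoto versions.

extern crate rusoto_kinesis;

use rusoto_kinesis::{Kinesis, KinesisClient, PutRecordsInput, PutRecordsRequestEntry};

// Flush one batch of parsed log lines to Kinesis. PutRecords accepts at most
// 500 entries per call, which is where the 500-record batch size comes from.
fn flush_batch(client: &KinesisClient, stream_name: &str, lines: Vec<Vec<u8>>) {
    let records: Vec<PutRecordsRequestEntry> = lines
        .into_iter()
        .enumerate()
        .map(|(i, data)| PutRecordsRequestEntry {
            data,
            // The partition key decides which of the 20 shards a record lands on;
            // a simple counter is used here purely for illustration.
            partition_key: format!("{}", i),
            explicit_hash_key: None,
        })
        .collect();

    let input = PutRecordsInput {
        records,
        stream_name: stream_name.to_owned(),
    };

    // With a synchronous client this blocks until the whole batch is
    // acknowledged, so a single client thread has one request in flight at a time.
    client
        .put_records(&input)
        .expect("could not write data to kinesis stream");
}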

I have also written a stress test tool; it just takes a hard-coded test string and sends it out as fast as possible. That's where the 300 MB/s+ figure comes from.
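The stress tool is essentially a tight write loop over a TCP connection, roughly along these lines (the address and test line here are made up for illustration, not taken from the repo):

use std::io::Write;
use std::net::TcpStream;

fn main() -> std::io::Result<()> {
    // Hypothetical address and hard-coded test line; the real tool targets
    // whatever port log-sloth listens on and uses a realistic syslog line.
    let line = b"<134>Jan 29 21:37:19 host app[123]: hello from the stress tool\n";
    let mut stream = TcpStream::connect("127.0.0.1:1516")?;

    // Push roughly the same volume as the run shown below (~698 MB), then report.
    let target: usize = 698 * 1024 * 1024;
    let mut written = 0usize;
    while written < target {
        stream.write_all(line)?;
        written += line.len();
    }
    println!("done! wrote {} MB", written / (1024 * 1024));
    Ok(())
}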

I have enabled LTO and I run in release mode. My server sees a "Pooled stream disconnected" error, but I can just retry my stress test tool and get it flowing again.

From my server:

     Running `target/release/log-sloth server`
INFO:<unknown>: STARTING: thread for client stream=V4(127.0.0.1:49124)
thread '<unnamed>' panicked at 'could not write data to kinesis stream: HttpDispatch(HttpDispatchError { message: "Pooled stream disconnected" })', /checkout/src/libcore/result.rs:906:4
note: Run with `RUST_BACKTRACE=1` for a backtrace.
INFO:<unknown>: STARTING: thread for client stream=V4(127.0.0.1:49136)
INFO:<unknown>: stream=V4(127.0.0.1:49136) done. 697 MB bytes, 1000000 lines
INFO:<unknown>: STOPPING: thread for client ending with Ok(())

From my stress test:

root@rust-459207332-93w7g:/log-sloth# time cargo run --bin stress --release
    Finished release [optimized] target(s) in 0.0 secs
     Running `target/release/stress`
done! wrote 698 MB

real	3m14.344s
user	0m0.260s
sys	0m0.460s

So, it’s slow! 698 MB in 3m14s works out to roughly 3.6 MB/s, and I’d say LTO even made it a little slower. Any tips on how to spot the problem?

About this issue

  • State: closed
  • Created 6 years ago
  • Comments: 26 (25 by maintainers)

Most upvoted comments

I’m shoveling data now! Seems rock solid; sar shows roughly 120 MB/s outbound on eth0:

root@doit-1800981089-sz6zt:~# sar -n DEV 1
Linux 4.4.78-k8s (doit-1800981089-sz6zt) 	01/29/18 	_x86_64_	(16 CPU)

21:37:19        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:20           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:20         eth0  84467.00  58566.00  15069.08 124618.31      0.00      0.00      0.00     72.40

21:37:20        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:21           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:21         eth0  78814.00  57308.00  15065.58 122292.31      0.00      0.00      0.00     71.05

21:37:21        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:22           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:22         eth0  80749.00  57964.00  15358.19 123659.13      0.00      0.00      0.00     71.84

21:37:22        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:23           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:23         eth0  82836.63  59055.45  14704.30 123921.19      0.00      0.00      0.00     71.99

21:37:23        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:24           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:24         eth0  79350.00  58780.00  14918.57 125596.36      0.00      0.00      0.00     72.97

21:37:24        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:25           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:25         eth0  79478.00  57522.00  14595.63 122498.09      0.00      0.00      0.00     71.17

21:37:25        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:26           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:26         eth0  82282.00  57772.00  16344.08 121117.96      0.00      0.00      0.00     70.37

21:37:26        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:27           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:27         eth0  79822.00  59329.00  16336.90 125686.31      0.00      0.00      0.00     73.02

21:37:27        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:28           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:28         eth0  81776.00  58668.00  14897.85 124253.02      0.00      0.00      0.00     72.19

21:37:28        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:29           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:29         eth0  80329.00  57515.00  15557.03 121426.34      0.00      0.00      0.00     70.54

21:37:29        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:30           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:30         eth0  76874.00  58229.00  14466.41 122742.50      0.00      0.00      0.00     71.31

21:37:30        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:31           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:31         eth0  81065.00  57870.00  14747.82 123500.55      0.00      0.00      0.00     71.75

21:37:31        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:32           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:32         eth0  80028.00  58622.00  15549.14 125026.33      0.00      0.00      0.00     72.64

21:37:32        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:33           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:33         eth0  79755.00  58318.00  15719.98 123049.01      0.00      0.00      0.00     71.49

21:37:33        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:34           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:34         eth0  78566.00  57209.00  14526.02 122730.00      0.00      0.00      0.00     71.30

21:37:34        IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
21:37:35           lo      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
21:37:35         eth0  79716.00  58623.00  15091.74 125951.69      0.00      0.00      0.00     73.17

@xrl It’s been a while, so I might not be up to date, but you should be able to set a breakpoint on rust_panic (break rust_panic in gdb), which will pause execution when a panic occurs, before any of the actual unwinding happens.
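For reference, a session along those lines would look roughly like this (using the server binary from the logs above):

$ gdb --args target/release/log-sloth server
(gdb) break rust_panic
(gdb) run
... once the panic fires, execution stops at rust_panic ...
(gdb) backtrace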