leptos: possible memory leak in server integrations, origins unclear
Context: #581
Removed all API use in valeralabs/web and hardcoded a user profile into users.rs, so the benchmark exercises only server-side rendering (a rough sketch of that change follows the benchmark output). Running a stress test for 120 seconds:
```
Concurrency Level:      10
Time taken for tests:   120.001 seconds
Complete requests:      904656
Failed requests:        0
Total transferred:      4358651880 bytes
HTML transferred:       4260043940 bytes
Requests per second:    7538.75 [#/sec] (mean)
Time per request:       1.326 [ms] (mean)
Time per request:       0.133 [ms] (mean, across all concurrent requests)
Transfer rate:          35470.55 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.2      0       2
Processing:     0    1   0.5      1      23
Waiting:        0    1   0.5      1      23
Total:          0    1   0.5      1      23

Percentage of the requests served within a certain time (ms)
  50%      1
  66%      1
  75%      2
  80%      2
  90%      2
  95%      2
  98%      2
  99%      3
 100%     23 (longest request)
```
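For reference, the isolation change was along these lines. This is a minimal sketch; the `Profile` struct and `get_profile` function are hypothetical names, and only the idea of replacing the API call with a constant comes from the issue:

```rust
// users.rs (sketch): replace the API lookup with a constant so the
// benchmark measures only Leptos SSR, not upstream API latency.

#[derive(Clone, Debug)]
pub struct Profile {
    pub username: String,
    pub display_name: String,
    pub bio: String,
}

/// Previously an async API call; for the stress test it just returns
/// a hardcoded profile.
pub fn get_profile(_username: &str) -> Profile {
    Profile {
        username: "alice".into(),
        display_name: "Alice Example".into(),
        bio: "Hardcoded profile used for the stress test.".into(),
    }
}
```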
The memory usage of the web server increased from ~5MB to 357MB during the run, and did not decrease after the benchmark finished. This may be related to #529, but I'm not sure where the leak originates yet.
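For anyone reproducing the measurement, one way to spot-check resident memory from inside the process on Linux is to read `VmRSS` from `/proc/self/status`; `rss_kib` below is a hypothetical helper, not part of Leptos or the benchmark:

```rust
use std::fs;

/// Resident set size of the current process in KiB (Linux only),
/// parsed from the VmRSS line of /proc/self/status.
fn rss_kib() -> Option<u64> {
    let status = fs::read_to_string("/proc/self/status").ok()?;
    status
        .lines()
        .find(|line| line.starts_with("VmRSS:"))
        .and_then(|line| line.split_whitespace().nth(1))
        .and_then(|kib| kib.parse().ok())
}

fn main() {
    println!("RSS before workload: {:?} KiB", rss_kib());
    // ... run the workload being measured ...
    println!("RSS after workload:  {:?} KiB", rss_kib());
}
```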
About this issue
- State: closed
- Created a year ago
- Comments: 20 (8 by maintainers)
Commits related to this issue
- fix: memory leak in streaming SSR (closes issue #590) — committed to leptos-rs/leptos by gbj a year ago
- fix: memory leak in streaming SSR (closes issue #590) (#601) — committed to leptos-rs/leptos by gbj a year ago
Beautiful! And thanks for raising it… having worked through it all, I feel much more confident than I did before that we’re not doing anything wrong.
Thanks! That particular issue predates proper runtime management; I’ve since confirmed that runtimes are being disposed correctly, so I think it’s a red herring. But narrowing this down to the streaming implementation is very helpful, and I can dig in a little more later.
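For context, the runtime lifecycle referred to above looks roughly like this. The handler shape is a hypothetical sketch (assuming `create_runtime` and `dispose` from leptos_reactive of that era); the general point is that an undisposed runtime keeps all of its signals, effects, and memos alive:

```rust
use leptos::create_runtime;

// Sketch of a per-request SSR path. Each response gets its own reactive
// runtime, which owns an arena of signals, effects, and memos. If the
// runtime is never disposed, that arena outlives the request and memory
// grows with every request served.
fn render_once() -> String {
    let runtime = create_runtime();

    // ... build the app and render it to HTML (elided) ...
    let html = String::from("<!DOCTYPE html><html>...</html>");

    // Without this, each request would leak its runtime's allocations.
    // Per the comment above, this is now handled correctly, which is why
    // suspicion moved to the streaming SSR path instead.
    runtime.dispose();

    html
}
```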