actix-web: Memory Leak on Actix 3.2.0

I'm building a simple API and stress-testing it with loadtest. The API consumes about 3MB when idle. With the command loadtest -c 1000 --rps 10000 http://localhost:4000/hello/world, memory consumption climbs to ~100MB.

Expected Behavior

After the stress test is done, memory consumption returns to a normal level (~3MB).

Current Behavior

After the stress test is done, memory consumption does not decrease.

Possible Solution

Sorry, I couldn't find a solution for this.

Steps to Reproduce (for bugs)

  1. Implement a simple API and call the exposed route.

Context

I was benchmarking an API written in Rust against one written in Spring, and while the Rust version is memory-efficient, it shows some leaks that are problematic.


Code used to expose the route:

use actix_web::{middleware, web, App, HttpServer, Responder, HttpResponse};

async fn hello_world(
    name: web::Path<String>
) -> impl Responder {
    HttpResponse::Ok().json(format!("{}", name))
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {

    println!("Launching server...");

    std::env::set_var("RUST_LOG", "actix_web=info,actix_server=info");
    env_logger::init();
    dotenv::dotenv().ok();

    let port = match std::env::var("PORT") {
        Ok(port) => port,
        _ => String::from("4000")
    };
    let address = format!("0.0.0.0:{}", port);

    let http_server = HttpServer::new(move || {
        App::new()
            // .wrap(middleware::Logger::default())
            .route("/hello/{name}", web::get().to(hello_world))
    })
        .bind(address.clone())?;

    println!("Will listen to {}", address);

    http_server
        .run()
        .await
}

Running under LeakSanitizer confirms there is a small problem:

==68512==ERROR: LeakSanitizer: detected memory leaks
Direct leak of 256 byte(s) in 8 object(s) allocated from:
    #0 0x564cae1e4255  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0xd6255)
    #1 0x564cae2f830b  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0x1ea30b)

Direct leak of 192 byte(s) in 8 object(s) allocated from:
    #0 0x564cae1e4255  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0xd6255)
    #1 0x564cae40bf8b  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0x2fdf8b)

Indirect leak of 8192 byte(s) in 8 object(s) allocated from:
    #0 0x564cae1e4255  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0xd6255)
    #1 0x564cae2f830b  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0x1ea30b)

Indirect leak of 32 byte(s) in 8 object(s) allocated from:
    #0 0x564cae1e4255  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0xd6255)
    #1 0x564cae565a5b  (/home/lperreau/workspace/hello-world/target/x86_64-unknown-linux-gnu/debug/hello_world+0x457a5b)

SUMMARY: LeakSanitizer: 8672 byte(s) leaked in 32 allocation(s).

Your Environment

  • Rust Version (i.e., output of rustc -V): rustc 1.45.1 (c367798cf 2020-07-26)
  • Actix Web Version: 3.2.0

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 41 (33 by maintainers)

Most upvoted comments

https://github.com/actix/actix-web/pull/1889

With this PR, all Box::leak usage would be removed from actix-web. Once it's merged, there will be no intentionally leaked memory.

It does not fix the memory usage issue, but it would reduce some noise when profiling memory.
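For context, Box::leak deliberately promotes an allocation to the 'static lifetime and never frees it, which is exactly what LeakSanitizer reports as a "direct leak" even though the amount is small and bounded. A standalone illustration (not actix-web's actual code):

// Not actix-web's code: a minimal illustration of why Box::leak shows up in
// LeakSanitizer output. The allocation is intentionally given the 'static
// lifetime and never freed, so the sanitizer counts it at process exit even
// though it is small and bounded.
fn intern(s: String) -> &'static str {
    Box::leak(s.into_boxed_str())
}

fn main() {
    let route: &'static str = intern(String::from("/hello/{name}"));
    println!("{}", route);
    // `route` stays allocated for the life of the process.
}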

https://github.com/actix/actix-web/pull/1929

This PR tries to address the memory usage problem related to this issue. It would be appreciated if you could help out by testing it and reporting whether it reduces your memory footprint under heavy workload.

I'm not certain that it's the cause or a proper fix for the problem, so please don't get your hopes up. You will still see a small amount of memory bloat because the cache is still at work; I consider that working as intended, and it could possibly be addressed in the future.

I can confirm that the latest betas behave a lot better than the latest stable with respect to memory consumption.

Alright, I've done some tests with a similar hello-world app.

use actix_web::{get, web, App, HttpServer, Responder};

#[get("/{name}")]
async fn index(web::Path(name): web::Path<String>) -> impl Responder {
    format!("Hello {}", name)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new()
        .service(index))
        .bind("127.0.0.1:8080")?
        .run()
        .await
}

I used Bombardier for testing on a PC with an Intel i5 3570k (4c/4t) CPU and 16GB DDR3 RAM.

Low concurrency, over 3m 46s:
.\bombardier-windows-amd64.exe -k -c 50 -n 10000000 http://localhost:8080/asd
Start RAM: 3.1MB, End RAM: 3.8MB, Peak: 5.4MB

High concurrency, over 4m 6s:
.\bombardier-windows-amd64.exe -k -c 1000 -n 10000000 http://localhost:8080/asd
Start RAM: 3.1MB, End RAM: 8.1MB, Peak: 46MB

I didn't get the massive 120MB RAM usage that McFloy got, but the server still did not return to its original RAM usage. I'll post results from a more complex API once I've actually built my application.

actix-web caches certain objects internally to reduce memory allocations, so the startup memory usage normally won't match what you end up with after a stress test. Can you run multiple tests to see whether memory usage keeps climbing or stabilizes within a certain range?
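To make that concrete, here is a minimal sketch of the general pattern, not actix-web's actual internals: buffers used during a burst are kept in a small cache for reuse instead of being freed, so resident memory settles above the idle baseline rather than returning to it.

// Illustrative only: a tiny buffer pool showing why cached objects keep RSS
// above the idle baseline after a burst of load.
struct BufferPool {
    free: Vec<Vec<u8>>,
    cap: usize, // maximum number of buffers kept around for reuse
}

impl BufferPool {
    fn new(cap: usize) -> Self {
        Self { free: Vec::new(), cap }
    }

    fn get(&mut self) -> Vec<u8> {
        // Reuse a cached buffer if one is available, otherwise allocate.
        self.free.pop().unwrap_or_else(|| Vec::with_capacity(8 * 1024))
    }

    fn put(&mut self, mut buf: Vec<u8>) {
        if self.free.len() < self.cap {
            buf.clear();         // keep the capacity, drop the contents
            self.free.push(buf); // this retained capacity is the "bloat"
        }
        // Beyond `cap`, the buffer is simply dropped and freed.
    }
}

fn main() {
    let mut pool = BufferPool::new(8);
    // Simulate a burst of requests; after it ends, up to `cap` buffers
    // (and their capacity) stay resident even though the load is gone.
    for _ in 0..1000 {
        let buf = pool.get();
        pool.put(buf);
    }
    println!("cached buffers after burst: {}", pool.free.len());
}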

I separated the binaries as suggested.

@thalesfragoso After some testing, I found that the awc client accounts for nearly half of the memory usage. With the latest master branch, your server's memory usage lands at around 250MB for me with 10 thousand connections and 20 rounds; awc uses about 170MB.

The issue is in both the server and the client, but the server does not use as much memory as I imagined. The lack of memory recycling is still there.
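For reference, a client-side load loop along these lines can be sketched with awc; the batch size, round count, and URL below are assumptions for illustration rather than the actual test binary, and it assumes the futures-util crate for join_all:

// Hypothetical awc-based load generator: fire a batch of requests per round,
// await them all, and repeat, so client-side memory can be watched between
// rounds separately from the server process.
use futures_util::future::join_all;

#[actix_web::main]
async fn main() {
    let client = awc::Client::default();
    for round in 0..20 {
        let batch = (0..1_000).map(|_| async {
            // Errors are ignored; only allocation behaviour matters here.
            let _ = client.get("http://127.0.0.1:8080/asd").send().await;
        });
        join_all(batch).await;
        println!("finished round {}", round);
    }
}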

@fakeshadow I created a small stress test based on the websockets example. It just connects a bunch of clients, then disconnects them, and repeats.

On my setup, I got the following results using 10000 clients per round:

  • Current release: memory seems to keep increasing indefinitely with each round; after ten rounds it sits at around 807MB.
  • Betas: memory usage seems to stabilize at around 434MB by round ~7, staying around that value for the rest of the test (20 rounds total).

Another thing I noticed is that the test is way faster on the betas; it takes a considerable amount of time to run 10 rounds on the current release.
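For reference, a rough sketch of what such a connect/disconnect loop might look like with awc's websocket client; the client count per round, the number of rounds, and the URL are assumptions rather than the actual test code, and it assumes futures-util for join_all:

// Hypothetical sketch of the stress loop described above: each round opens a
// batch of websocket connections against the example server, then drops them
// all before starting the next round.
use futures_util::future::join_all;

#[actix_web::main]
async fn main() {
    for round in 0..10 {
        // A fresh Client per connection keeps the sketch independent of a
        // shared client's connection pool; each Ok item is (response, framed socket).
        let conns = join_all((0..1_000).map(|_| async {
            awc::Client::new().ws("ws://127.0.0.1:8080/ws/").connect().await
        }))
        .await;
        let opened = conns.iter().filter(|c| c.is_ok()).count();
        println!("round {}: {} connections opened", round, opened);
        // Dropping `conns` closes every connection before the next round.
        drop(conns);
    }
}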

The thing is, a local workload generator is never a good indicator of what your real workload will look like.

True, but I don't think people generally use it to simulate real loads; they use it to simulate worst-case scenarios, such as DoS attacks or a "hug of death" from sites like Reddit. I understand that there might be other bottlenecks in the network or server hardware that prevent such loads in some cases, though.

For most entry-level servers, the machine physically can't handle enough concurrent requests to reach the memory consumption the OP saw.

I guess I'll have to test it somehow. I don't know enough about Actix to tell whether the same problem applies when you only have, for example, 10 requests per second over a long period of time.

It does bother me, though, that such a small API was able to permanently allocate 120MB until the server was manually shut down.