fastapi: No objects ever released by the GC, potential memory leak?

First Check

  • I added a very descriptive title to this issue.
  • I used the GitHub search to find a similar issue and didn’t find it.
  • I searched the FastAPI documentation, with the integrated search.
  • I already searched in Google “How to X in FastAPI” and didn’t find any information.
  • I already read and followed all the tutorial in the docs and didn’t find an answer.
  • I already checked if it is not related to FastAPI but to Pydantic.
  • I already checked if it is not related to FastAPI but to Swagger UI.
  • I already checked if it is not related to FastAPI but to ReDoc.

Commit to Help

  • I commit to help with one of those options 👆

Example Code

# From the official documentation
# Run with uvicorn main:app --reload
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    return {"message": "Hello World"}

Description

Take the minimal example provided in the documentation and call the API 1M times. You will see that memory usage keeps climbing and never goes back down; the GC apparently never frees any objects. It becomes very noticeable with a real use case such as file uploads, where it effectively DoS'es the service.
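
A load generator along these lines is enough to drive the request count up while you watch the process RSS from the outside (just a sketch; it assumes httpx is installed and the app from above is running on localhost:8000):

# Hypothetical load generator, not part of the service itself.
# Start the app first with: uvicorn main:app
import asyncio

import httpx

TOTAL = 1_000_000   # total number of requests to send
CONCURRENCY = 50    # parallel workers sharing one client


async def main() -> None:
    sent = 0
    async with httpx.AsyncClient(base_url="http://127.0.0.1:8000") as client:

        async def worker() -> None:
            nonlocal sent
            while sent < TOTAL:
                sent += 1
                await client.get("/")
                if sent % 10_000 == 0:
                    print(f"{sent} requests sent")

        await asyncio.gather(*(worker() for _ in range(CONCURRENCY)))


if __name__ == "__main__":
    asyncio.run(main())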

[memory-profile graph]

Here are some examples from a real service in k8s, via Lens metrics:

[Screenshots: k8s memory metrics, 2022-03-03 and 2022-03-04]

Operating System

Linux, macOS

Operating System Details

No response

FastAPI Version

0.74.1

Python Version

Python 3.10.1

Additional Context

No response

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 33 (1 by maintainers)

Most upvoted comments

The memory leak in uvicorn is probably not the cause of my issue, though. First of all, it only happens with FastAPI >=0.69.0, and I have also seen it in apps that don't use app.state or request.state at all. I'll put some more effort into isolating a minimal runnable example that shows the leak in my case, and I'll get back to you if I manage to do that.

I can reproduce it on Python 3.7.13, but it’s not reproducible from 3.8+.

Notes:

  • This issue can be reproduced with pure Starlette (see the sketch after this list) - this means this issue can be closed here.
  • This issue cannot be reproduced on Python 3.8+.
  • This issue cannot be reproduced with Starlette 0.14.2 (version pre-anyio) in Python 3.7.
  • This issue can only be reproduced by Starlette 0.15.0+ on Python 3.7.
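
For reference, a pure-Starlette app equivalent to the example above looks roughly like this (a sketch, not the exact snippet used for the reproduction; run it with uvicorn just like the FastAPI version):

# Minimal pure-Starlette equivalent of the FastAPI example
# (Starlette >= 0.15, the versions where the issue reproduces).
# Run with: uvicorn starlette_main:app
from starlette.applications import Starlette
from starlette.requests import Request
from starlette.responses import JSONResponse
from starlette.routing import Route


async def root(request: Request) -> JSONResponse:
    return JSONResponse({"message": "Hello World"})


app = Starlette(routes=[Route("/", root)])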

I’ll not spend more time on this issue. My recommendation is to bump your Python version.

In any case, this issue doesn’t belong in the FastAPI tracker.

I can’t say exactly: my container is limited to 512 MiB and the base consumption of my app was already ~220 MiB, so an additional 350 MiB that then settles would be well within what I can observe. It’s just that prior to 0.69.0 I don’t see any sharp memory increase at all:

[Graph: memory usage across different FastAPI versions]

Running on Docker with the python:3.8-slim-buster image, so currently using Python 3.8.13.

I didn’t have memory leak issues with fastapi 0.65.2 and uvicorn 0.14.0 in my project before. Updating to fastapi 0.75.0 and uvicorn 0.17.6 then caused my container to keep running into memory problems. fastapi 0.65.2 together with uvicorn 0.17.6 does not appear to leak memory for me.

I then did a binary search over fastapi versions (keeping uvicorn at 0.17.6) to find where the memory leak first appears. For me, that is version 0.69.0.
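
A simple way to compare pinned versions is to sample the server process RSS under identical load and overlay the curves; a sketch of that (assuming psutil is installed, with the uvicorn worker PID passed on the command line) could be:

# Hypothetical RSS sampler used while a load test runs against the server.
# Usage: python sample_rss.py <uvicorn-pid>
import sys
import time

import psutil


def sample_rss(pid: int, interval: float = 5.0) -> None:
    proc = psutil.Process(pid)
    while True:
        rss_mib = proc.memory_info().rss / (1024 * 1024)
        print(f"{time.strftime('%H:%M:%S')}  RSS: {rss_mib:.1f} MiB")
        time.sleep(interval)


if __name__ == "__main__":
    sample_rss(int(sys.argv[1]))

Pinning the last 0.68.x release vs 0.69.0 against the same uvicorn version and comparing the two curves under the same load makes the jump easy to spot.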

Are you using --reload as part of your entrypoint, like the comment at the top of the code block indicates? What limits, if any, are you running with in Kube? Those graphs don’t look like leaks to me; they look like constant memory usage. My interpretation of what might be happening is that some objects are loaded into memory when the application instance initializes and are never released, most likely because the application itself is still using them. These could be connections of various sorts, data that is being served, etc.; it’s hard to say without seeing the actual production service. A memory leak usually looks like a fairly constant increase in memory usage until a threshold is breached and the service gets OOM’d.
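
One way to tell the two cases apart is to check whether Python-level objects actually accumulate between requests rather than only looking at RSS; a sketch of a debug endpoint for that (the /debug/memory path and the whole endpoint are hypothetical, purely for diagnosis) could be:

# Debug-only endpoint: report the live object count and traced allocations,
# so repeated load runs show whether objects really pile up.
import gc
import tracemalloc

from fastapi import FastAPI

app = FastAPI()
tracemalloc.start()


@app.get("/")
async def root():
    return {"message": "Hello World"}


@app.get("/debug/memory")
async def debug_memory():
    gc.collect()
    current, peak = tracemalloc.get_traced_memory()
    return {
        "gc_objects": len(gc.get_objects()),
        "traced_current_bytes": current,
        "traced_peak_bytes": peak,
    }

If gc_objects keeps climbing across successive load runs, objects really are leaking; if it plateaus while RSS stays flat at a higher level, that is baseline usage (plus allocator behaviour), not a leak.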

Cheers @JarroVGIT. I started a discussion there and used your example; hope you don’t mind.