channels: Memory leak?

Hello! On our production service we ran into a problem of constant, unbounded growth in memory consumption.

To reproduce the problem, a simple project was created (https://github.com/devxplorer/ws_test).

We test it with the script scripts/ws_test.py, which opens a connection, sends a message, and closes the connection. For diagnostics we used the memory_profiler package. Examples of the resulting graphs from running the test script can be found in the plots/ folder.
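
For context, the load pattern is roughly the following. This is only a minimal sketch assuming the websockets client library and a placeholder endpoint; the actual scripts/ws_test.py in the repository may differ in its details.

```python
# Minimal sketch of the load pattern (placeholder URL and payload, not
# necessarily what scripts/ws_test.py does): open a WebSocket connection,
# send one message, close, and repeat.
import asyncio

import websockets  # third-party client library, used here for illustration


async def one_cycle(url):
    async with websockets.connect(url) as ws:
        await ws.send('{"message": "ping"}')
    # leaving the "async with" block closes the connection


async def main():
    url = "ws://127.0.0.1:8000/ws/test/"  # placeholder routing path
    for _ in range(10000):
        await one_cycle(url)


loop = asyncio.get_event_loop()
loop.run_until_complete(main())
```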

Local environment (where the tests were run):

  • Linux Mint 18.3 Sylvia
  • Python 3.5.2 [GCC 5.4.0 20160609] on linux
  • the package versions can be found in requirements.txt

The conclusions we have been able to draw so far:

  • changing the ASGI server does not change the result; memory consumption continues to grow (daphne, uvicorn, and gunicorn + uvicorn were tested);
  • periodically running gc.collect() does not change the picture (see the sketch below).
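
For reference, the periodic collection experiment was along these lines (a sketch only; the interval and the thread used are incidental):

```python
# Sketch of the periodic gc.collect() experiment: force a full collection
# every 30 seconds from a timer thread. This did not change the picture.
import gc
import threading


def collect_periodically(interval=30.0):
    gc.collect()
    threading.Timer(interval, collect_periodically, args=(interval,)).start()


collect_periodically()
```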

Perhaps there is some kind of memory leak in channels when connections are created and closed? Or there is some mistake in our code that we are missing. Has anyone faced a similar problem and managed to solve it? Any help would be appreciated.

About this issue

  • State: closed
  • Created 6 years ago
  • Reactions: 5
  • Comments: 24 (8 by maintainers)

Most upvoted comments

Everything is looking good after a few days in prod: annotated

We went from using all the memory on the server (26 GB) to 6.7 GB with the default number of ASGI_THREADS. I found that the number of ASGI_THREADS is abnormally high if you leave it at the default; it was something like 70 threads. I lowered it to 8, and we are now at 2.1 GB for 8 processes with 8 threads each.
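
For anyone else trying this: at the time, asgiref sized the thread pool it uses for synchronous code from the ASGI_THREADS environment variable, so the cap just needs to be in the server process environment before the server starts. A rough sketch follows; where exactly you set it depends on your deployment, and 8 is simply the value that worked for us.

```python
# Sketch: cap the number of worker threads by setting ASGI_THREADS before the
# ASGI server / asgiref is imported. In practice this usually goes in the
# process environment (systemd unit, Docker env, supervisor config) rather
# than in Python code.
import os

os.environ.setdefault("ASGI_THREADS", "8")
```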

Great. channels_redis is released already and I’ll try to get other releases out this week.

I’ve been running it for a few days. Two things stand out in the report:

My second-biggest object by reference count, with 5000 references, is a collections.deque holding data passed to channel_layer.group_send: <class 'collections.deque'> deque([{'type': 'job_broadcast_create', 'data':... (lots of other group_send data omitted for privacy)

My biggest object by bytes used is another Channels object. It has 1184 references and uses 73832 bytes (not including the objects inside of it): defaultdict(<class 'asyncio.queues.Queue'>, {'specific.cKYmvBaO!XOAFbwMApipM': <Queue at 0x7f4a6c7eac88 maxsize=0>, 'specific.cKYmvBaO!QxpeLzCcGnHg': <Queue at 0x7f4a6c87d668 maxsize=0>, 'specific.cKYmvBaO!AxVBjYBeIBFE': <Queue at 0x7f4a5c2c3dd8 maxsize=0>, 'specific.cKYmvBaO!bFTGCUjMOdhE': <Queue at 0x7f4a381a7828 maxsize=0>, 'specific.cKYmvBaO!blweWDEhIFnr': <Queue at 0x7f4a186aea20 maxsize=0>, 'specific.cKYmvBaO!JsrCNYkbTTwF': <Queue at 0x7f49f834fba8 maxsize=0>, 'specific.cKYmvBaO!puldPELdVsiy': <Queue at 0x7f49d844bd68 maxsize=0>, 'specific.cKYmvBaO!jPUSZpNbJLEK': <Queue at 0x7f49984bdcc0 maxsize=0>, 'specific.cKYmvBaO!lTCVRsMHXxTh': <Queue at 0x7f4978724e80 maxsize=0>, 'specific.cKYmvBaO!tUSzbyFltYKW': <Queue at 0x7f4998213c18 maxsize=0>, 'specific.cKYmvBaO!ZhDECimxvZUo': <Queue at 0x7f49383b85c0 maxsize=0>, 'specific.cKYmvBaO!nicenwDBWOzu': <Queue at 0x7f48b8508b70 maxsize=0>, 'specific.cKYmvBaO!IYbQohwGiVBa': <Queue at 0x7f48b84b20b8 maxsize=0>, 'specific.cKYmvBaO!VdgpilVugaRE'....

This appears to be a receive_buffer in Channels Redis.

Either of these might be normal. Or they might not. Hopefully this helps.

After some more searching, I believe both of these objects are related: the object with 5000 references is probably a member of the receive_buffer, since an asyncio.Queue has a deque inside it. My guess is that something was missed in the logic of RedisChannelLayer.receive(): the del is not being called.
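
To make that guess concrete, the failure mode I have in mind looks like the sketch below. It is purely illustrative (not the channels_redis source): a receive buffer keyed by channel name, where a defaultdict creates an asyncio.Queue per channel on demand, and nothing ever deletes entries for channels that have gone away, so both the dict and the deque inside each Queue keep growing.

```python
# Illustrative sketch of the suspected leak (NOT the actual channels_redis code):
# a per-channel receive buffer whose entries are created on demand and, if the
# cleanup step is skipped, never removed once a channel disconnects.
import asyncio
from collections import defaultdict


class ReceiveBuffer:
    def __init__(self):
        # One asyncio.Queue per channel name, e.g. "specific.cKYmvBaO!XOAFbwMApipM".
        # defaultdict creates the Queue the first time a channel name is seen.
        self.buffer = defaultdict(asyncio.Queue)

    def deliver(self, channel, message):
        # Messages from group_send end up queued per channel.
        self.buffer[channel].put_nowait(message)

    async def receive(self, channel):
        message = await self.buffer[channel].get()
        # The step I suspect is not happening: dropping the entry once the
        # queue is drained. Without this del, every channel ever seen keeps
        # its Queue (and the deque inside it) alive forever.
        if self.buffer[channel].empty():
            del self.buffer[channel]
        return message
```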

Another note: these captures are from one process. I have 8 processes running, and each process has 17-20 connections as of now. Only 2 pages in this fairly large application use the Redis channel layer, so I am pretty confident that those queues are not active connections but old connections that are not being cleaned up.