channels: memory leak for simple http requests
I started testing my Django app with channels, and I'm running into memory issues for normal requests.
To demonstrate the issue, I deployed this app to heroku: https://github.com/jacobian/channels-example
Then I wrote a simple script that constantly requests the home page:
```python
# Python 2 script (urllib2 was replaced by urllib.request in Python 3).
import urllib2

def get():
    url = "https://channels-demo.herokuapp.com"
    response = urllib2.urlopen(url)
    html = response.read()

# Hammer the home page; log progress every 10 requests.
for i in range(0, 30000):
    get()
    if i % 10 == 0:
        print i
```
From Heroku's memory metrics, you can see the memory slowly creeping up: in this simple example it went from 35 MB to 90 MB in 90 minutes.
I am experiencing this on my own app using channels version 0.14.0.
About this issue
- Original URL
- State: closed
- Created 8 years ago
- Reactions: 1
- Comments: 19 (11 by maintainers)
Further investigation revealed https://twistedmatrix.com/trac/ticket/8164, which shows that this occurs if you do not initialise Twisted's global log: unobserved log events build up in memory, to around 400 MB. https://github.com/andrewgodwin/daphne/commit/da40761b95118da82306974e12843566a90ae433 fixes this issue.
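For reference, the essence of that fix is simply starting Twisted's global logging so that buffered events stop accumulating. A minimal sketch, assuming the modern `twisted.logger` API (the actual daphne commit may use a different entry point):

```python
import sys
from twisted.logger import globalLogBeginner, textFileLogObserver

# Until an observer is registered, Twisted buffers every log event in
# memory, which under constant traffic looks like a slow leak.
# Registering any observer drains that buffer and stops the growth.
globalLogBeginner.beginLoggingTo(
    [textFileLogObserver(sys.stdout)],
    redirectStandardIO=False,  # leave stdout/stderr alone
)
```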
I'm not an expert, but this is probably not entirely the fault of `channels`: a process that allocates heap memory generally cannot return it to the underlying OS, even after `free()` is called, because of the way memory allocators work. This is why (I think) several WSGI implementations have options to respawn the worker process in order to keep memory usage down (`uwsgi` has `max-requests`, `mod_wsgi` has `maximum-requests`).

I think `runworker` should implement an option like that. I could try to write something if I find some time at work, since otherwise I'm stuck with `runworker` being killed by the OOM killer after three requests that upload large images; the same thing happened in the same project with `mod_wsgi` without `maximum-requests`.

I don't know whether this option should be implemented for `runworker`, `daphne`, or both. A stopgap sketch follows below.
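For anyone who wants that stopgap before such an option exists, here is a minimal respawning supervisor in the spirit of uwsgi's `max-requests`. Everything in it (the wrapper itself, `MAX_RESPAWNS`) is hypothetical and not part of channels:

```python
import sys
import subprocess

# Hypothetical stopgap: restart the worker whenever it exits, so memory
# accumulated by a long-lived process is returned to the OS. A real
# max-requests option would instead have the worker exit voluntarily
# after serving N requests; nothing below is part of channels itself.
MAX_RESPAWNS = 100  # arbitrary safety cap, not a channels setting

def supervise(cmd):
    for _ in range(MAX_RESPAWNS):
        proc = subprocess.Popen(cmd)
        code = proc.wait()
        # A non-zero exit (e.g. the OOM killer) is logged but still
        # triggers a respawn, since the goal is to keep a worker alive.
        if code != 0:
            print("worker exited with status %d, respawning" % code)

if __name__ == "__main__":
    supervise([sys.executable, "manage.py", "runworker"])
```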