distributed: memory leak when using distributed.Client with delayed
I have used dask.delayed to wire together some classes, and when using dask.threaded.get everything works properly. When the same code is run using distributed.Client, the memory used by the process keeps growing.
Dummy code to reproduce the issue is below.
import gc
import os
import psutil
from dask import delayed


# generate random strings: https://stackoverflow.com/a/16310739
class Data():
    def __init__(self):
        self.tbl = bytes.maketrans(bytearray(range(256)),
                                   bytearray([ord(b'a') + b % 26 for b in range(256)]))

    @staticmethod
    def split_len(seq, length):
        return [seq[i:i + length] for i in range(0, len(seq), length)]

    def get_data(self):
        l = self.split_len(os.urandom(1000000).translate(self.tbl), 1000)
        return l


class Calc():
    def __init__(self, l):
        self.l = l

    def nth_nth_item(self, n):
        return self.l[n][n]


class Combiner():
    def __init__(self):
        self.delayed_data = delayed(Data())

    def get_calc(self):
        d_l = self.delayed_data.get_data(pure=True)
        return delayed(Calc, pure=True)(d_l)

    def mem_usage_mb(self):
        process = psutil.Process(os.getpid())
        return "%.2f" % (process.memory_info().rss * 1e-6)

    def results(self):
        return {
            '0': self.get_calc().nth_nth_item(0),
            '1': self.get_calc().nth_nth_item(1),
            '2': self.get_calc().nth_nth_item(2),
            'mem_usage_mb': self.mem_usage_mb()
        }

    def delayed_results(self):
        return delayed(self.results())


def main_threaded_get():
    from dask.threaded import get as threaded_get
    from dask import compute
    for i in range(300):
        delayed_obj = Combiner().delayed_results()
        res = compute(delayed_obj, scheduler=threaded_get)[0]
        # print(res)
        print("#%d, mem: %s mb" % (i, res['mem_usage_mb']))
        gc.collect()


def main_distributed_client():
    from distributed import Client
    client = Client(processes=True, n_workers=1, threads_per_worker=1)
    for i in range(1000):
        delayed_obj = Combiner().delayed_results()
        future = client.compute(delayed_obj)
        res = future.result()
        print("#%d, mem: %s mb" % (i, res['mem_usage_mb']))
        collect_res = client.run(lambda: gc.collect())  # doesn't help
        # print(collect_res)


if __name__ == "__main__":
    main_threaded_get()
    main_distributed_client()
Results:
main_threaded_get():
#100, mem: 33.64 mb
#200, mem: 33.64 mb
#299, mem: 33.64 mb

main_distributed_client():
#100, mem: 94.02 mb
#200, mem: 96.02 mb
#300, mem: 97.95 mb
#400, mem: 100.11 mb
#500, mem: 102.29 mb
#600, mem: 104.48 mb
#700, mem: 106.72 mb
#800, mem: 108.20 mb
#900, mem: 110.02 mb
#999, mem: 112.22 mb
I also get "distributed.utils_perf - WARNING - full garbage collections took 60% CPU time recently (threshold: 10%)" messages, starting at around i=30.
Python 3.6.5
>>> dask.__version__
'0.18.0'
>>> distributed.__version__
'1.22.0'
@Axel-CH I’ve also noticed a mismatch between the memory usage reported by dask distributed and the OS. What helped me resolve problems with frozen and killed workers was to change the configuration described here to the following:
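As an illustration only (not necessarily the setting that was changed), adjusting distributed's worker memory-management thresholds before creating the Client looks roughly like this; the numeric values are examples:

# Illustrative sketch only: tune worker memory-management thresholds.
# The fractions below are example values, not a recommended configuration.
import dask
from distributed import Client

dask.config.set({
    "distributed.worker.memory.target": 0.60,     # start spilling data to disk
    "distributed.worker.memory.spill": 0.70,      # spill based on process memory
    "distributed.worker.memory.pause": 0.80,      # pause accepting new tasks
    "distributed.worker.memory.terminate": 0.95,  # restart the worker
})

client = Client(processes=True, n_workers=1, threads_per_worker=1)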
Code for: mleak.py
The modified script compares memory usage using tracemalloc before and after computing the delayed function.
If I’m interpreting the tracemalloc results correctly, it looks like memory usage grows when pickle.loads is called.
Run:
python -X tracemalloc mleak.py
Top memory increases per invocation:
Call stack for distributed/protocol/pickle.py (different invocation)
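Since mleak.py itself isn't included above, here is a rough sketch of that kind of measurement, comparing tracemalloc snapshots around each client.compute call on the client process (it reuses the Combiner class from the reproduction script; the details are illustrative):

# Illustrative sketch: compare tracemalloc snapshots around each compute call.
import tracemalloc
from distributed import Client

def measure(client, delayed_obj, top=5):
    before = tracemalloc.take_snapshot()
    client.compute(delayed_obj).result()
    after = tracemalloc.take_snapshot()
    # Print the source lines whose allocations grew the most
    for stat in after.compare_to(before, "lineno")[:top]:
        print(stat)

if __name__ == "__main__":
    tracemalloc.start(25)  # keep 25 frames so call stacks are visible
    client = Client(processes=True, n_workers=1, threads_per_worker=1)
    for i in range(100):
        print("#%d" % i)
        measure(client, Combiner().delayed_results())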
Sorry, I find it’s not a memory leak problem in my case. It actually seems (personal opinion) to be a problem of poor control of the graph size (I mean the number of tasks). If I control the number of tasks submitted, memory stays at an almost constant level. Thanks.
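For anyone who wants to try that, a rough sketch of controlling the number of tasks in flight by submitting the delayed objects in bounded batches (batch size and structure are illustrative):

# Illustrative sketch: keep only a bounded number of tasks in flight
# by computing the delayed objects batch by batch.
from distributed import Client

def compute_in_batches(client, delayed_objs, batch_size=10):
    results = []
    for start in range(0, len(delayed_objs), batch_size):
        futures = client.compute(delayed_objs[start:start + batch_size])
        results.extend(client.gather(futures))  # wait before submitting more
    return results

if __name__ == "__main__":
    client = Client(processes=True, n_workers=1, threads_per_worker=1)
    objs = [Combiner().delayed_results() for _ in range(100)]
    print(len(compute_in_batches(client, objs, batch_size=10)))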
I wanted to add some unexpected results I observed and was able to resolve thanks to the dialogue above…
Using Python 3.8.5, Prefect==0.14.16, Dask[complete]==2021.2.0
I was observing errors while trying to run my Prefect workflows on an AWS stack involving some ECS Fargate containers; the errors kept saying something along the lines of this:
This was strange, and I definitely had a few other lingering problems that really made my hunt difficult until I saw some logs that stated the following:
Above, @mrocklin suggested “less workers and more memory per worker”, which proved to be my silver bullet in this instance. Hope someone else sees that issue and can react accordingly!
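On a local cluster, the "less workers and more memory per worker" suggestion translates to something like the sketch below (the worker count and memory_limit are illustrative; on ECS/Fargate the equivalent knobs live in the task definition):

# Illustrative sketch: trade worker count for per-worker memory locally.
from distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=2, threads_per_worker=2,
                       memory_limit="8GB")  # example values only
client = Client(cluster)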
@songqiqqq’s suggestion seems to be a viable workaround – limiting the number of tasks scheduled at any given time.
Changing
to
solved the issue for me.
I continued to get warnings, but tasks were processed.
There is also objgraph which is useful for generating reference graphs of objects: https://mg.pov.lt/objgraph/
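A quick sketch of pointing objgraph at the workers via client.run, to see which object types are most common there (the helper name is made up for the example):

# Illustrative sketch: list the most common object types on each worker.
# Requires objgraph to be installed on the workers (pip install objgraph).
from distributed import Client

def most_common_types():
    import objgraph
    return objgraph.most_common_types(limit=10)

client = Client(processes=True, n_workers=1, threads_per_worker=1)
# ... run some work first, then:
print(client.run(most_common_types))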
Interesting. When I run this I get something similar. Memory use climbs slowly in steps. I also get a number of warnings about garbage collection time taking a long while.
I’m curious how people generally debug this sort of issue. I might start with the following:
If anyone has any experience here and has the time to investigate this further I would appreciate it.
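One low-tech starting point along those lines, assuming the growth might be on the worker side, is to poll the workers' garbage-collector state between iterations (purely illustrative):

# Illustrative sketch: poll garbage-collector statistics on every worker.
import gc
from distributed import Client

def gc_stats():
    return {
        "collected": gc.collect(),         # objects freed by a full collection
        "uncollectable": len(gc.garbage),  # objects the collector could not free
        "tracked": len(gc.get_objects()),  # objects currently tracked by gc
    }

client = Client(processes=True, n_workers=1, threads_per_worker=1)
# ... run some work first, then:
print(client.run(gc_stats))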