scipy: memory usage after running scipy.signal.fftconvolve (possible memory leak?)
Consider the memory usage of the following code after executing each statement:
import numpy
import scipy.signal
import gc
# memory at this point is ~35 MB
a = numpy.ones(10**7)
b = numpy.ones(10**7)
# memory at this point is ~187 MB
c = scipy.signal.fftconvolve(a, numpy.flipud(b), mode="full")  # full convolution with b reversed; len(c) == 2*10**7 - 1
# memory usage at this point is ~645 MB
# given that a, b, and c take up about 305 MB in total,
# this is much larger than expected
# If we delete a, b, c and garbage-collect...
del a, b, c
gc.collect()
# ...the memory usage drops to ~340 MB, a drop of ~305 MB
# (as expected, since that is the size of what we deleted),
# but the remaining ~340 MB is still much larger than the
# starting value of ~35 MB
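For reference, the ~305 MB total for a, b, and c follows directly from the array sizes; a quick check (run before the del above, using numpy's standard nbytes attribute):

# a and b each hold 10**7 float64 values = 8e7 bytes (~76.3 MiB)
# c has len(a) + len(b) - 1 = 2*10**7 - 1 elements (~152.6 MiB)
print(a.nbytes / 2**20)  # ~76.3
print(b.nbytes / 2**20)  # ~76.3
print(c.nbytes / 2**20)  # ~152.6
# total: ~305 MiB, matching the figure above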
Is this extra several hundred MB of memory usage a bug?
Platform:
- CPython 3.5.1 x64
- SciPy 0.17.0 (Christoph Gohlke's package)
- NumPy+MKL 1.11.0rc1 (Christoph Gohlke's package)
- Windows 8.1 x64
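For anyone trying to reproduce the numbers above, here is a minimal measurement harness; psutil is an assumption on my part (the post does not say how memory was measured), but any RSS readout should show the same pattern:

import gc
import os
import numpy
import psutil
import scipy.signal

def rss_mib():
    # resident set size of the current process, in MiB
    return psutil.Process(os.getpid()).memory_info().rss / 2**20

print(rss_mib())  # baseline
a = numpy.ones(10**7)
b = numpy.ones(10**7)
print(rss_mib())  # after allocating a and b
c = scipy.signal.fftconvolve(a, numpy.flipud(b), mode="full")
print(rss_mib())  # after the convolution
del a, b, c
gc.collect()
print(rss_mib())  # after deleting and collecting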
@rgommers I have confirmed that the cache functions work as required. The docs may need correcting on the caching behavior, and we will need to add a function for destroying the cache.
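For context, the caches in question are scipy.fftpack's per-transform-size work arrays, which fftconvolve exercised in scipy 0.17. A sketch of clearing them by hand, assuming the private f2py helpers that scipy.fftpack shipped at the time (internal names, not a public API, and subject to change between versions):

from scipy.fftpack import _fftpack

# Assumption: these private helpers existed in scipy.fftpack._fftpack
# around 0.17; they free the cached FFT work arrays that otherwise
# persist for the lifetime of the process.
_fftpack.destroy_zfft_cache()
_fftpack.destroy_drfft_cache()
_fftpack.destroy_zfftnd_cache()

Later scipy releases moved fftconvolve onto the scipy.fft backend, which manages its workspace differently, so a sketch like the above is only relevant to the fftpack-era versions discussed here.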