tfjs: Operations with variable tensor sizes cause GPU Memory leaks
TensorFlow.js version
- tfjs-core 0.11.9
- tfjs-core 0.12.10
Browser version
- Chrome 67.0.3396.99 (64-bit)
- Firefox 61.0.1 (64-bit)
Describe the problem or feature request
Running operations with variable input tensor sizes causes GPU memory leaks. The leaked memory is not reflected in the tf.memory() stats, but the growth can be observed, for example, in the Chrome task manager:
// assumes: import * as tf from '@tensorflow/tfjs'
// iterations and maxTensorSize are example values, not from the original report
const iterations = 100
const maxTensorSize = 512

async function run() {
  for (let i = 0; i < iterations; i++) {
    // pick a new random tensor shape on every iteration
    const height = Math.floor(Math.random() * maxTensorSize)
    const width = Math.floor(Math.random() * maxTensorSize)
    console.log(height, width)
    const t1 = tf.ones([height, width])
    const t2 = tf.ones([height, width])
    // do something
    const sum = t1.add(t2)
    // all tensors are explicitly disposed, yet GPU memory keeps growing
    t1.dispose()
    t2.dispose()
    sum.dispose()
    await tf.nextFrame()
    console.log(tf.memory())
  }
}
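For contrast, here is a minimal sketch (not part of the original report) of the same loop with a fixed tensor shape; per the workaround described at the end of this issue, keeping the shape constant avoids the unbounded GPU memory growth:

// minimal sketch, assuming the leak only occurs with varying shapes (see the workaround below)
// fixedHeight and fixedWidth are example values
const fixedHeight = 256
const fixedWidth = 256

async function runFixedShape(iterations) {
  for (let i = 0; i < iterations; i++) {
    const t1 = tf.ones([fixedHeight, fixedWidth])
    const t2 = tf.ones([fixedHeight, fixedWidth])
    const sum = t1.add(t2)
    t1.dispose()
    t2.dispose()
    sum.dispose()
    await tf.nextFrame()
  }
}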
Code to reproduce the bug / link to feature request
https://github.com/justadudewhohacks/tfjs-tensor-size-memoryleak-issue
About this issue
- State: closed
- Created 6 years ago
- Comments: 15 (9 by maintainers)
In case someone is facing the same issue: when training an image classifier or an object detector, you can mitigate the problem by resizing your images to a fixed input size before calling tf.fromPixels, instead of doing tensor operations for padding and resizing:
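Below is a minimal sketch of that workaround (my own illustration, not the original comment's code; inputSize and the helper name are hypothetical): draw the image onto a canvas of fixed dimensions and pass that canvas to tf.fromPixels, so every downstream tensor has the same shape.

// sketch of the workaround: resize via a 2D canvas instead of tensor ops
// inputSize is an example value
const inputSize = 416

function imageToFixedSizeTensor(img) {
  const canvas = document.createElement('canvas')
  canvas.width = inputSize
  canvas.height = inputSize
  const ctx = canvas.getContext('2d')
  // drawImage scales the source image to the fixed canvas size
  ctx.drawImage(img, 0, 0, inputSize, inputSize)
  // tf.fromPixels now always produces a tensor of shape [inputSize, inputSize, 3]
  return tf.fromPixels(canvas)
}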