tensorflow: import tensorflow is very slow
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04.3 LTS
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
- TensorFlow installed from (source or binary): binary
- TensorFlow version (use command below): v2.1.0-rc2-17-ge5bf8de 2.1.0
- Python version: Python 3.6.6
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: CUDA Version: 10.1 cudnn-10.1
- GPU model and memory: TITAN RTX 24190MiB
You can also obtain the TensorFlow version with:
1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"
Describe the current behavior
I'm thinking about using TensorFlow 2.x for research. However, every time I import tf, it takes about 5 seconds. import tensorflow is 10 times slower than import torch.
Describe the expected behavior
I was expecting the two to be at roughly the same level for a fresh import. I know the two libraries have made their own design choices, but I'm wondering whether there is any chance to speed up the import statement. As a researcher I constantly re-run and debug scripts, and 5 seconds per import is just not good.
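In the meantime, the only workaround I see (just a sketch of the obvious deferral; build_model is a hypothetical function, not from any real project) is to push the import into the code path that actually needs it, so quick debug runs that never touch TF skip the cost:

def build_model():
    # Deferred import: the multi-second cost is only paid when a model is
    # actually built, not on every quick debug run of the surrounding script.
    import tensorflow as tf
    return tf.keras.Sequential([tf.keras.layers.Dense(1)])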
Standalone code to reproduce the issue
from timeit import default_timer as timer

# Time a fresh import of PyTorch.
print('import pytorch')
start = timer()
import torch
end = timer()
print('Elapsed time: ' + str(end - start))

# Time a fresh import of TensorFlow in the same interpreter.
print('import tensorflow')
start = timer()
import tensorflow
end = timer()
print('Elapsed time: ' + str(end - start))
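As a diagnostic (a suggestion on my part; it needs Python 3.7+ rather than the 3.6.6 listed above), the interpreter's built-in -X importtime flag prints a per-module import-time breakdown to stderr, which shows which TensorFlow submodules dominate:

import subprocess
import sys

# -X importtime (Python 3.7+) makes the interpreter report cumulative import
# time per module on stderr; capture it into a log file for inspection.
with open("tf_importtime.log", "w") as log:
    subprocess.run([sys.executable, "-X", "importtime", "-c", "import tensorflow"],
                   stderr=log, check=True)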
Other info / logs
import pytorch
Elapsed time: 0.46338812075555325
import tensorflow
Elapsed time: 4.396180961281061
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 19
- Comments: 34 (9 by maintainers)
Hello guys,
Any updates?
I am using a conda virtual environment. import tensorflow as tf takes an absurd amount of time. Note: I am using the CPU build of TensorFlow. What can be done to reduce this import time?
Any update on this issue? I have a similar issue on my local Mac machine:
System information
Thank you. I'm grateful to know that the TF team is making an effort to address this issue. I'll promote TF in my research projects.
Will mark this (PyTorch vs. TensorFlow import speed parity) as closed, since parity has been reached.
At the same time, we are starting to look into making the TensorFlow package faster to import when not all modules are required and only the essential functions are needed, and we hope this will lead to an easier-to-use package soon.
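To illustrate the general idea (this is not how TensorFlow is implemented; it is the deferred-import recipe from the standard library's importlib documentation): importlib.util.LazyLoader postpones executing a module's body until the first attribute access, so the cost moves from import time to first use:

import importlib.util
import sys

def lazy_import(name):
    # Register the module so that its body only executes on first attribute access.
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

# Returns almost immediately; the real cost is paid the first time an attribute
# such as tf.constant is touched (whether a package as large as TensorFlow
# tolerates this kind of lazy loading is a separate question).
tf = lazy_import("tensorflow")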
Ran the gist again and saw the same result: TensorFlow was slightly faster to import than PyTorch.
The result varied from run to run, but as far as the original issue is concerned, recent TensorFlow versions are definitely not much (10x) slower to import than PyTorch as reported before; the two are basically on par at package load time. I would suggest closing this ticket unless there is a new slowness issue.
This issue happens all the time on GCP c2-standard-8 instances with TF 2.10. The very first import tensorflow is very slow (20+ seconds). The funny thing is, it doesn't seem to matter where that first import tensorflow happens - it could even happen inside a Docker container. But after that very first slow import, any subsequent imports (inside or outside the Docker container) are fast (1 second or less). Can we get this re-opened?
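A quick way to show the cold-vs-warm pattern (just a sketch I used; nothing TF-specific): time the import in separate fresh interpreters - only the first run pays the 20+ second cost, presumably while the large shared libraries are read from disk:

import subprocess
import sys
import time

# Import TensorFlow in fresh interpreters and time each run; after the first
# run, the OS page cache makes subsequent imports much faster.
for i in range(3):
    t0 = time.perf_counter()
    subprocess.run([sys.executable, "-c", "import tensorflow"], check=True)
    print(f"run {i}: {time.perf_counter() - t0:.1f}s")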
Can we have this reopened and tracked somehow? I understand why the stale-bot was added for such a large repository, but personally I don't like the stale-bot suddenly closing and killing a long-wanted piece of work… 😐 And the ml-butler bot doesn't distinguish between the 'completed' and 'not planned' close modes in GitHub issues.
It is still very slow. This was the second run. Windows 11.
Was able to replicate the issue with TF v2.5. Yes, it takes more time when the program is loaded the first time; you can see the difference between first-time and second-time loading (less than a millisecond). Please find the gist here … Thanks!
getting the same issue, any update?