tensorflow: Could not load dynamic library 'libcudart.so.11.0'
Please make sure that this is a build/installation issue. As per our GitHub Policy, we only address code/doc bugs, performance issues, feature requests and build/installation issues on GitHub.
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 20.04
- TensorFlow installed from (source or binary): binary
- TensorFlow version: 2.4.0
- Python version: 3.7.9
- Installed using virtualenv? pip? conda?: pip
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version: libcudnn8_8.0.5.39-1+cuda11.0_amd64.deb (I also installed the dev version)
- NVIDIA Driver Version: 450.80.02
- GPU model and memory: GeForce RTX 2080 Super with Max-Q Design
Describe the problem
Provide the exact sequence of commands / steps that you executed before running into the problem
I tried following the instructions as described in: https://www.tensorflow.org/install/gpu
- I installed Ubuntu 20.04 (which installed the NVIDIA driver)
- Installed python using pyenv
- sudo apt install nvidia-cuda-toolkit
- pip install tensorflow
- Installed: libcudnn8_8.0.5.39-1+cuda11.0_amd64.deb
- (saw that it failed to find ‘libcudart.so.11.0’)
- Installed: libcudnn8-dev_8.0.5.39-1+cuda11.0_amd64.deb
- (still failed to find ‘libcudart.so.11.0’)
Is there a way for me to check which part of the installation broke? Any ideas on what I can do to fix this? Thanks!
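For a quick sanity check, a minimal sketch (assuming a default CUDA layout under /usr/local) of how to see whether the runtime library is discoverable at all:
# ask the dynamic linker what it knows about the CUDA runtime
ldconfig -p | grep libcudart
# look for the exact file TensorFlow wants in the usual install locations
find /usr/local /usr/lib -name 'libcudart.so.11.0' 2>/dev/null
# inspect the loader search path the Python process will inherit
echo $LD_LIBRARY_PATH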
Any other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
PyDev console: starting.
Python 3.7.9 (default, Dec 22 2020, 21:13:51)
[GCC 9.3.0] on linux
>>> import tensorflow as tf
>>> print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
2020-12-22 22:56:35.044676: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2020-12-22 22:56:35.044691: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
/home/ido/TeraResearch/venv/lib/python3.7/site-packages/pandas/compat/__init__.py:120: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.
warnings.warn(msg)
2020-12-22 22:56:35.938373: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2020-12-22 22:56:35.938801: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1
2020-12-22 22:56:37.494184: E tensorflow/stream_executor/cuda/cuda_driver.cc:328] failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error
2020-12-22 22:56:37.494273: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:169] retrieving CUDA diagnostic information for host: ido-ml
2020-12-22 22:56:37.494291: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:176] hostname: ido-ml
2020-12-22 22:56:37.494480: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:200] libcuda reported version is: 450.80.2
2020-12-22 22:56:37.494544: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:204] kernel reported version is: 450.80.2
2020-12-22 22:56:37.494561: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:310] kernel version seems to match DSO: 450.80.2
Num GPUs Available: 0
About this issue
- State: closed
- Created 4 years ago
- Reactions: 98
- Comments: 67 (6 by maintainers)
@ido-tera-group
If you follow the guidelines mentioned in the link above, you should be able to install it. Glad to know you got it installed. Please close this thread if your issue was resolved. Thanks!
This stupid bug still exists in the latest TF today.
Still having the same issue here.
Closing as stale. Please reopen if you’d like to work on this further.
I had the same problem; it was solved by installing cudatoolkit:
conda install cudatoolkit
I had the same problem and solved it with the steps below.
First, find out where libcudart.so.11.0 is. If a different library is missing in your error stack, substitute its name for libcudart.so.11.0 below.
Output on my system: this result shows where libcudart.so.11.0 lives on my machine. If the search shows nothing, make sure you have actually installed CUDA and the other components that must be on your system.
Second, add that path to your environment file.
Last, when that is all done, try your Python code again.
Hope it helps!
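A minimal sketch of those two steps, assuming the library ends up under /usr/local/cuda-11.0/lib64 (your location may differ):
# step 1: locate the library
sudo find / -name 'libcudart.so.11.0' 2>/dev/null
# step 2: append its directory to the loader path and reload the shell config
echo 'export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-11.0/lib64' >> ~/.bashrc
source ~/.bashrc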
At the end:
I did everything on the guide's list to install TensorFlow, but the same problem still occurred. My environment:
- Ubuntu: 20.04 LTS
- TensorFlow: 2.4.1
- Python 3: 3.8
- pip3: 21
- CUDA: 11.1
- cuDNN: 8.1.0
- GPU: GeForce GTX 860M
- GPU driver: 460.39
My error stack before solving the problem:
After I fixed it, the Python code works fine.
This helped me:
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch
My computer has no GPU, so why do I need cudatoolkit?
I see, thank you! After tinkering a bit, it worked… But I changed the instructions to work for 20.04:
Perhaps the docs should be updated?
Thanks again!
Ubuntu 20.04 doesn’t upgrade to CUDA-11.x. Please follow the instructions at the CUDA download page to install CUDA-11.x.
After all that, don't forget to add the CUDA library path to LD_LIBRARY_PATH:
echo 'export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/targets/x86_64-linux/lib' >> ~/.bashrc
I saw the same error:
but it's harmless; I don't want to use a GPU.
Can you refer me to clear instructions on how to install? I see a bunch of blogs, each with different ideas… Is there an official (or semi-official) guide that should actually work? Thanks!
A short quick fix that worked for me
Running Ubuntu 20.04, Python 3.8.5, cuDNN 8.1.0, on an RTX 2080 Super.
I downloaded this run file from NVIDIA, then opted in to install only the CUDA 11 toolkit. After it finished, I added /usr/local/cuda-11.0/bin to my PATH and /usr/local/cuda-11.0/lib64 to my LD_LIBRARY_PATH. After that, my GPU was functional again.
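For reference, a sketch of adding those two paths in ~/.bashrc (the cuda-11.0 directories are the ones mentioned above; adjust them to your install location):
echo 'export PATH=${PATH}:/usr/local/cuda-11.0/bin' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda-11.0/lib64' >> ~/.bashrc
source ~/.bashrc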
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.
I'm facing the same problem. I installed tensorflow-gpu 2.2.0 using Anaconda, and it automatically downloads cuDNN 7.6 and cudatoolkit 10.1. Does anyone using Anaconda also face the same problem? (I installed tensorflow-gpu by running this line:
conda create -n tf-gpu tensorflow-gpu==2.2.0 python==3.8
)
I celebrated too soon… TensorFlow did import without that immediate message, but when I tried building a model, I got:
Any clues?
LONG LIVE THE BUG
This worked for me:
This worked for me as well.
You can try adding this code to load the dynamic libraries manually:
# adjust the paths below to wherever your CUDA libraries actually live
from ctypes import cdll
lib8 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcublas.so.11')
lib1 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcudart.so.11.0')
lib2 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcublasLt.so.11')
lib3 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcufft.so.10')
lib4 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcurand.so.10')
lib5 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcusolver.so.10')
lib6 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcusparse.so.11')
lib7 = cdll.LoadLibrary('/data/users/CHDHPC/2017902628/cuda/lib64/libcudnn.so.8')
Good to know! Another solution that worked for me is to find the correct version of TensorFlow that matches my OS, CUDA version, and Python version here: https://www.tensorflow.org/install/source#gpu, and just pip install that version of TensorFlow.
This worked for me.
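As an illustration of that approach: the compatibility table linked above pairs CUDA 11.0 and cuDNN 8.0 (the versions in the original report) with the TensorFlow 2.4 release, so one would run something like:
pip install tensorflow==2.4.0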
@ido-tera-group
Please refer to the tutorial from here. Thanks!
Hi, in this, how do you do Step #2? Do I open the etc/profile file and add to the end?
Please run these commands. They will solve the issue:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update
sudo apt-get -y install cuda
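Once that finishes, a quick way to confirm the toolkit and the runtime library actually landed (assuming the default /usr/local/cuda prefix used by NVIDIA's packages):
/usr/local/cuda/bin/nvcc --version
ls /usr/local/cuda/lib64/libcudart.so*
nvidia-smi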
What solved the problem for me with CUDA 12.2 on Ubuntu 22.04 LTS was setting the PATH variables below according to the NVIDIA CUDA installation instructions.
Source: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#post-installation-actions
To do this I added the below lines to my ~/.bashrc file:
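A sketch of what the post-installation guide linked above prescribes for a CUDA 12.2 install (paths assumed; adjust the version to your own):
export PATH=/usr/local/cuda-12.2/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}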
The fix is then running the following commands:
Your file names and locations will vary, so please adjust accordingly. As someone suggested above, use the command below to locate the file locations (if they exist):
It seems recent TensorFlow versions have not updated the names, versions, and locations of these files for more recent CUDA versions, so we are just copying the new files to the old location and renaming them to their old names, while keeping the recent versions with their original names in their original locations.
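A hypothetical sketch of that copy-and-rename workaround, assuming CUDA 12.x libraries under /usr/local/cuda-12.2/lib64 and TensorFlow asking for the old 11.0 name (the versions and paths here are illustrative, not taken from the original comment):
# find the runtime libraries that actually exist on the system
sudo find / -name 'libcudart.so*' 2>/dev/null
# give the new library a copy under the old name TensorFlow is looking for
sudo cp /usr/local/cuda-12.2/lib64/libcudart.so.12 /usr/local/cuda-12.2/lib64/libcudart.so.11.0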
Oh, I guess those two packages are now combined into one: https://www.tensorflow.org/install
It's really annoying to users to show this warning, though, if they don't actually want to use the GPU.
Edit: Oh, I guess now you can actively install the CPU version only with the new versions of tensorflow:
I guess that may fix my warning.
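If the goal is a CPU-only install, one hedged example of that route (assuming the separately published tensorflow-cpu wheel is the package being referred to):
pip install tensorflow-cpu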
What the hell, it seriously worked for me! Can’t believe it, what a lol sol.
Thanks for this. What package did you subsequently install?
@ido-tera-group
As per the build configurations from here, it should work with this configuration. Can you please create a fresh environment, try to install from scratch, and see if the issue still persists? Thanks!
ONE BUG TO RULE THEM ALL
still exists, unbelievable.
I've resolved the issue by performing the following:
sudo apt install nvidia-cuda-toolkit
and everything is just amazing now :) Thanks for the help.
For anyone having this issue (ping @Kamlesh364), I strongly recommend checking out Docker and using the official images: https://hub.docker.com/r/tensorflow/tensorflow/. It’s a new technology to learn, but then saves a lot of time when solving these dependency-hell problems, since the Docker image has all the dependencies (including CUDA) pre-installed. Docker also integrates nicely with VSCode (check out devcontainers).
You just need to install a CUDA-capable driver and enable GPU passthrough to Docker: Linux tutorial, Windows tutorial.
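For concreteness, a minimal sketch of running the official GPU image once the driver and the NVIDIA Container Toolkit are set up (the image tag and the one-liner are just an example):
docker run --gpus all --rm -it tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"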
Thanks! This worked for me.
I normally don't post, but go to the link below, do exactly as the instructions say, then reboot your Linux box (server) and you should be fine.
https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=WSL-Ubuntu&target_version=2.0&target_type=deb_network
P.S. Other people wrote the correct steps, but in case the download URL changes, I have provided the downloads link from NVIDIA.