tensorflow: TensorFlow 2.10.0 not compatible with TensorRT 8.4.3

Issue Type

Build/Install

Source

binary

Tensorflow Version

2.10.0

Custom Code

No

OS Platform and Distribution

Ubuntu 22.04 LTS

Mobile device

No response

Python version

3.10.4

Bazel version

No response

GCC/Compiler version

No response

CUDA/cuDNN version

cuda 11.7

GPU model and memory

NVIDIA GeForce RTX 3090

Current Behaviour?

I cannot use TensorRT 8 with the latest version of TensorFlow and CUDA.

Standalone code to reproduce the issue

import tensorflow as tf

Relevant log output

2022-09-13 11:06:57.075736: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-09-13 11:06:57.075769: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-09-13 11:06:57.075772: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Reactions: 1
  • Comments: 57 (10 by maintainers)

Most upvoted comments

So after spending 2 days on this issue… here’s a solution/workaround to get TF 2.10 running with TRT on Ubuntu 20.04 (22.04 should work as well):

As the error says Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory, you need TensorRT 7.x. To get it installed you have to deviate in a few places from the official step-by-step instructions at https://www.tensorflow.org/install/pip:

  1. System requirements -> as in the guide
  2. Install Miniconda/Anaconda -> as in the guide
  3. Create a conda environment -> differs: a) you need Python 3.8 to install nvidia-tensorrt 7.x later on, otherwise pip won’t find that specific version: conda create --name tf-py38 python=3.8 b) conda activate tf-py38
  4. GPU setup -> as in the guide, follow as is (nvidia-smi, conda install, etc.)
  5. Install TensorRT -> extra step: pip-install TensorRT as in https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-723/install-guide/index.html#installing-pip:
    • pip install --upgrade setuptools pip
    • pip install nvidia-pyindex
    • check which tensorrt versions are available for your Python with pip install nvidia-tensorrt== (Python 3.10 -> available from 8.4.0+, Python 3.9 -> from 8.0+, Python 3.8 -> from 7.2.2+)
    • install the version you need, e.g. pip install nvidia-tensorrt==7.2.3.4
    • verify: python3 -c "import tensorrt; print(tensorrt.__version__); assert tensorrt.Builder(tensorrt.Logger())"
    • configure the system paths once again, as before, so they contain the tensorrt path: export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/tensorrt/ or, with the recommended automation: echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/tensorrt/' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
  6. Install TensorFlow and follow the remaining steps

After that, python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" no longer complains about missing ‘libnvinfer.so.7’ and ‘libnvinfer_plugin.so.7’.
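Independent of TensorFlow, you can check which libnvinfer versions the dynamic loader can actually resolve. A minimal sketch using only the standard library (the helper name is made up; the library names are the ones from the warning):

```python
import ctypes

def probe_libs(libnames):
    """Try to dlopen each shared library; return {name: loadable?}."""
    results = {}
    for name in libnames:
        try:
            ctypes.CDLL(name)   # same mechanism TF's dso_loader relies on
            results[name] = True
        except OSError:
            results[name] = False
    return results

if __name__ == "__main__":
    for lib, ok in probe_libs(["libnvinfer.so.7", "libnvinfer.so.8"]).items():
        print(f"{lib}: {'found' if ok else 'MISSING'}")
```

If a library shows MISSING here, no amount of reinstalling TensorFlow will help; fix LD_LIBRARY_PATH (or the symlinks) first.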

Hope it helps… Best Regards DiDaMain

Welcome 😄. Now how do I close this issue?

Probably yes.

I have the same issue with TF 2.11.0 and TensorRT 8.5.1-1+cuda11.8.

Same issue here with tensorflow 2.11.0 and TensorRT 8.5.1.7

I solved this problem by downgrading TF to 2.9.2.

@sachinprasadhs , tf-nightly-2.12.0.dev20230114, Ubuntu 22.04.1 LTS, Python 3.10.8, miniconda - the issue persists.

$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2023-01-14 22:45:48.016946: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-01-14 22:45:48.114866: E tensorflow/tsl/lib/monitoring/collection_registry.cc:81] Cannot register 2 metrics with the same name: /tensorflow/core/bfc_allocator_delay
2023-01-14 22:45:48.483636: W tensorflow/tsl/platform/default/dso_loader.cc:66] Could not load dynamic library 'libnvinfer.so.8'; dlerror: libnvinfer.so.8: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/max/miniconda3/lib/:/lib/:/home/max/miniconda3/envs/project/lib/
2023-01-14 22:45:48.483677: W tensorflow/tsl/platform/default/dso_loader.cc:66] Could not load dynamic library 'libnvinfer_plugin.so.8'; dlerror: libnvinfer_plugin.so.8: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /home/max/miniconda3/lib/:/lib/:/home/max/miniconda3/envs/project/lib/
2023-01-14 22:45:48.483684: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
2023-01-14 22:45:49.083557: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:997] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-01-14 22:45:49.086483: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:997] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
2023-01-14 22:45:49.086598: I tensorflow/compiler/xla/stream_executor/cuda/cuda_gpu_executor.cc:997] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero. See more at https://github.com/torvalds/linux/blob/v6.0/Documentation/ABI/testing/sysfs-bus-pci#L344-L355
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Ubuntu 22.04, CUDA 12.0, TensorRT 8.5.0, cuDNN 8.7.0, tf-nightly: the error still persists.

I’m waiting for the stable release of 2.12.0; TensorRT 8 should work there.

Until that, I’m using TF 2.9.

TF 2.10 and 2.11 are terrible releases.

I have the same issue with tensorflow 2.11.0.

$ python3 
Python 3.10.6 (main, Nov  2 2022, 18:53:38) [GCC 11.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
2022-11-25 10:46:42.560645: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-11-25 10:46:43.462814: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tools/torch_git/install/lib:/tools/torch_git/install/lib:/usr/local/lib:/usr/local/cuda/lib64
2022-11-25 10:46:43.462910: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /tools/torch_git/install/lib:/tools/torch_git/install/lib:/usr/local/lib:/usr/local/cuda/lib64
2022-11-25 10:46:43.462921: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.

on Ubuntu 22.04, python 3.10.4, Nvidia driver 515.65.01, cuda 11.2.2, cudnn 8.1.1.33

I tried the workaround proposed by @didamain but the only tensorrt versions displayed by pip install nvidia-tensorrt== are 0.0.1.dev4, 0.0.1.dev5, 8.4.0.6, 8.4.1.5, 8.4.2.4, 8.4.3.1

I have no issue when installing tensorflow 2.9.3.

@mshavliuk , It will be part of Tensorflow 2.12, till then you can use nightly version.

tf-nightly 2.12.0.dev20221202 fixed the issue for me.

Works fine with TensorRT 8.5.1-1+cuda11.8.

Hi, after following @didamain’s detailed instructions (thanks for that!), I was able to get past the error about libnvinfer.so.7 not being found. But I still get an error about cuBLAS not being found (I know this was not the original issue, but I think it’s related to Ubuntu 22.04 + CUDA 11.7, etc.):

(tf-py38) mario@wstyarproj41:~$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
2022-09-16 11:46:14.038116: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-16 11:46:14.146662: E tensorflow/stream_executor/cuda/cuda_blas.cc:2981] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
...
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

Any hints on this? I could not find a cuBLAS Python package to install. Regards

cuDNN 8.8 is out for CUDA 12.0. Any news on TensorRT?

I know, it’s only optional but I want to use Nvidia’s TensorRT capabilities. Thanks.

How is it going?

https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/rel-22-12.html#rel-22-12

@learning-to-play I believe it is closed by mistake - the issue still persists

@jthibaut what did you use to install tensorrt? In case you used @didamain’s workaround, you need to do this:

Run this from a terminal with your environment activated. It adds all the libraries installed by tensorrt to the path (they do not get added automatically); you can also swap out python3.8 for your Python version:

echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/nvidia/cublas/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/nvidia/cuda_runtime/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/nvidia/cuda_nvrtc/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/nvidia/cudnn/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/nvidia/tensorrt/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

If you used pip install tensorrt, the paths may be a little different:

echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.8/site-packages/tensorrt/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

Basically, you can go check in the site-packages folder where tensorrt is and add it to the path accordingly.
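That "go check in the site-packages folder" step can be scripted. A sketch (the helper names are made up) that locates an installed package’s on-disk directory and prints the matching export line; it is demonstrated with the stdlib package `email` as a stand-in, since tensorrt may not be installed on the machine running it:

```python
import importlib.util

def package_dir(pkg_name):
    """Return the on-disk directory of an installed package, or None."""
    spec = importlib.util.find_spec(pkg_name)
    if spec is None or not spec.submodule_search_locations:
        return None
    return list(spec.submodule_search_locations)[0]

def export_line(pkg_name):
    """Build the LD_LIBRARY_PATH export line for a package's directory."""
    d = package_dir(pkg_name)
    return None if d is None else f"export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:{d}"

if __name__ == "__main__":
    # swap "email" for "tensorrt" in a real environment
    print(export_line("email"))
```

Paste the printed line into your activate.d script, as in the echo commands above.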

on Ubuntu 22.04, python 3.10.4, Nvidia driver 515.65.01, cuda 11.2.2, cudnn 8.1.1.33

I tried the workaround proposed by @didamain but the only tensorrt versions displayed by pip install nvidia-tensorrt== are 0.0.1.dev4, 0.0.1.dev5, 8.4.0.6, 8.4.1.5, 8.4.2.4, 8.4.3.1

I have no issue when installing tensorflow 2.9.3.

pip install nvidia-pyindex (NVIDIA’s package index for pip) will give you the necessary nvidia-tensorrt versions. But this doesn’t solve the TensorRT 8 not found issue, at least not for me. (I’m not using conda to manage environments.)

Extremely annoying that I have to downgrade to TRT 7 and Python 3.8 (or completely mess with $LD_LIBRARY_PATH) just to keep CUDA bindings. Could you imagine developing a program and telling the end user they must use Python 3.8 and only 3.8? I’m not moving to nightly either; there’s a reason why many of us don’t use nightly builds or alpha releases in production environments.

This should “just work” using pip install *, poetry add *, etc. And it doesn’t.

And why is this issue closed anyway? It’s not solved. Not in v2.10, and not in v2.11. Not in the official release versions, anyway. Until the fix is back-ported/cherry-picked and available via PyPI, this issue shouldn’t have been closed.

Same issue as earlier commenters with tensorflow 2.10.0, and same as @pberndt with tf-nightly-2.11.0.dev20221003, although the cuBLAS error is gone, as reported in earlier comments.

Downgraded to 2.9.0 and I’m not seeing any libnvinfer.so.7 warnings. It was either that or downgrading to Python 3.8 to install TensorRT 7 since it doesn’t seem to support later python versions.

The only related topics I can find are about TensorRT 8 breaking compilation on some earlier versions of Tensorflow but unclear if that has any relation to this.

OS: Ubuntu 22.04, Python: 3.10.6, CUDA: 11.7, TensorRT: 8.4.3

To note: TensorRT was installed via pip since there is no .deb package for Ubuntu 22.04 at the moment, and the python3-libnvinfer-dev package installation would fail since it expects Python < 3.9. Also, TensorRT is compiled for CUDA 11.6 and not 11.7, although Nvidia reports it as compatible in their documentation.

Best regards

Hi, any progress on this one?

Latest nightly still appears to be looking for libnvinfer.so.7 :

Successfully installed tf-nightly-2.11.0.dev20221003

2022-10-03 09:54:22.740813: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-10-03 09:54:45.914148: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory

Kind regards

using python 3.10, tf 2.12:

pip install nvidia-tensorrt
Looking in indexes: https://pypi.org/simple, https://pypi.ngc.nvidia.com
Requirement already satisfied: nvidia-tensorrt in ./miniconda3/lib/python3.10/site-packages (99.0.0)
Requirement already satisfied: tensorrt in ./miniconda3/lib/python3.10/site-packages (from nvidia-tensorrt) (8.6.1)

but

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-07-28 12:08:22.549302: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:1', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:2', device_type='GPU'), PhysicalDevice(name='/physical_device:GPU:3', device_type='GPU')]

Tried:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/python3.10/site-packages/tensorrt/

No success:

python3 -c "import tensorrt; print(tensorrt.__version__); assert tensorrt.Builder(tensorrt.Logger())"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/h21/luas6629/miniconda3/lib/python3.10/site-packages/tensorrt/__init__.py", line 18, in <module>
    from tensorrt_bindings import *
ModuleNotFoundError: No module named 'tensorrt_bindings'


Driver and CUDA versions:


nvidia-smi
Fri Jul 28 12:10:16 2023       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.116.04   Driver Version: 525.116.04   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  On   | 00000000:03:00.0 Off |                  N/A |
| 28%   30C    P8    15W / 250W |      1MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA GeForce ...  On   | 00000000:21:00.0 Off |                  N/A |
| 28%   27C    P8    15W / 250W |      1MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA GeForce ...  On   | 00000000:41:00.0 Off |                  N/A |
| 28%   29C    P8     5W / 250W |      1MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  NVIDIA GeForce ...  On   | 00000000:61:00.0 Off |                  N/A |
| 28%   28C    P8    20W / 250W |      1MiB / 11264MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Works for me: Python 3.10 with TensorRT 8+ (the symlinks in /usr/lib will likely need sudo).

cd /usr/lib/x86_64-linux-gnu
ln -s libnvinfer.so.8 libnvinfer.so.7
ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.7
ln -s libnvonnxparser.so.8 libnvonnxparser.so.7
ln -s libnvparsers.so.8 libnvparsers.so.7
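The same shim can be scripted and made idempotent. A sketch (the helper name is made up) that creates the .so.7 aliases only where a .so.8 actually exists and skips links that are already in place, demonstrated against a temporary directory rather than /usr/lib/x86_64-linux-gnu:

```python
import os
import tempfile

TRT_LIBS = ("libnvinfer", "libnvinfer_plugin", "libnvonnxparser", "libnvparsers")

def alias_v8_as_v7(libdir, names=TRT_LIBS):
    """For each <name>.so.8 in libdir, create a <name>.so.7 symlink to it."""
    created = []
    for name in names:
        src = os.path.join(libdir, f"{name}.so.8")
        dst = os.path.join(libdir, f"{name}.so.7")
        if os.path.exists(src) and not os.path.lexists(dst):
            os.symlink(f"{name}.so.8", dst)  # relative target, like `ln -s`
            created.append(dst)
    return created

if __name__ == "__main__":
    # demo in a scratch dir with one fake library file
    with tempfile.TemporaryDirectory() as d:
        open(os.path.join(d, "libnvinfer.so.8"), "w").close()
        print([os.path.basename(p) for p in alias_v8_as_v7(d)])
        # prints ['libnvinfer.so.7']
```

Keep in mind this only papers over the soname mismatch; the .so.8 library must still be ABI-compatible enough for TF’s TF-TRT glue, which is not guaranteed.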

Also can confirm: 2.9.3 works perfectly fine.

$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

@jthibaut this fix by @didamain works perfectly. Scroll up and you’ll see his comment with detailed instructions. Try this only if tf-nightly doesn’t work for you. Good luck.

I see the same cuBLAS issue too. Any progress with TensorRT or cuBLAS issues?

Thanks for your time.