tensorflow: TF-TRT Warning: Could not find TensorRT

Issue type

Build/Install

Have you reproduced the bug with TensorFlow Nightly?

Yes

Source

source

TensorFlow version

tf2.12, tf2.13

Custom code

Yes

OS platform and distribution

Linux Ubuntu 20.04

Mobile device

No response

Python version

3.10, 3.11

Bazel version

No response

GCC/compiler version

9.4.0

CUDA/cuDNN version

CUDA 11.8, cuDNN 8.6

GPU model and memory

RTX2060

Current behavior?

import tensorflow as tf
2023-08-03 17:42:07.337886: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations. To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-08-03 17:42:07.926267: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT

Standalone code to reproduce the issue

$ conda create --name tf python=3.10
$ conda activate tf

$ conda install -c conda-forge cudatoolkit=11.8.0
$ pip install nvidia-cudnn-cu11==8.6.0.163


mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:$CUDNN_PATH/lib:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
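
After sourcing the activation script above, it can help to sanity-check the resulting LD_LIBRARY_PATH. The helper below is a hypothetical sketch (not part of the original report): it reports entries that are not existing directories, a common cause of "Could not find TensorRT"-style warnings.

```python
# Hypothetical helper: report LD_LIBRARY_PATH entries that do not exist on disk.
import os

def missing_entries(ld_library_path: str) -> list[str]:
    """Return the colon-separated entries that are not existing directories."""
    return [p for p in ld_library_path.split(":") if p and not os.path.isdir(p)]

# Check the current environment; an empty list means every entry exists.
print(missing_entries(os.environ.get("LD_LIBRARY_PATH", "")))
```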

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

Relevant log output

I have already installed cudatoolkit 11.8 with the commands above, but nvcc reports that the CUDA Toolkit is not installed:

$ nvcc --version
CUDA Toolkit is not installed.

About this issue

  • Original URL
  • State: closed
  • Created a year ago
  • Reactions: 1
  • Comments: 55

Most upvoted comments

Try adding this command

!pip install tensorflow-gpu==2.8.0

@skonto you’re right, 2.15-post1 is looking for libnvinfer_plugin.so.8.6.1 and libnvinfer.so.8.6.1

absl::StatusOr<void*> GetNvInferDsoHandle() {
#if defined(PLATFORM_WINDOWS)
  return GetDsoHandle("nvinfer", "");
#else
  return GetDsoHandle("nvinfer", GetTensorRTVersion());
#endif
}

absl::StatusOr<void*> GetNvInferPluginDsoHandle() {
#if defined(PLATFORM_WINDOWS)
  return GetDsoHandle("nvinfer_plugin", "");
#else
  return GetDsoHandle("nvinfer_plugin", GetTensorRTVersion());
#endif
}

You can find out which file your tf is looking for by running strace -e open,openat python -c "import tensorflow as tf" in your venv.
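
The strace output can be long, so a plain filter helps. This is an illustrative sketch, and the sample strace lines in it are invented, not captured from a real run:

```python
# Keep only the strace lines that mention libnvinfer, so you can see which
# file names TensorFlow tried to open and which opens failed (ENOENT).
def nvinfer_opens(strace_output: str) -> list[str]:
    return [line for line in strace_output.splitlines() if "libnvinfer" in line]

# Invented sample strace lines for illustration:
sample = (
    'openat(AT_FDCWD, "/usr/lib/libnvinfer.so.8.6.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT\n'
    'openat(AT_FDCWD, "/usr/lib/libcublas.so.11", O_RDONLY|O_CLOEXEC) = 3\n'
)
for line in nvinfer_opens(sample):
    print(line)
```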

I recall that previously, tf2.10 and below looked for libnvinfer.so.7 while the prebuilt tensorrt package only shipped libnvinfer.so.8, so you had to resort to hacks like ln -s libnvinfer.so.8 libnvinfer.so.7 to get it working.

The file also used to live in the tensorrt folder, but the tensorrt package is no longer included in pip install tensorflow[and-cuda]. Thus you need to install tensorrt separately. The tensorrt package on PyPI ships 8.6.1, but it only provides libnvinfer.so.8, while tensorflow is looking for libnvinfer.so.8.6.1.

Thus the way to solve this is to go to your venv's site-packages folder and find the tensorrt_libs folder,

(in my case tf version 2.15-post1)

ln -s libnvinfer_plugin.so.8 libnvinfer_plugin.so.8.6.1

ln -s libnvinfer.so.8 libnvinfer.so.8.6.1

and make sure tensorrt_libs folder is in the LD_LIBRARY_PATH

(TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0 may also provide libnvinfer.so.8.6.1 directly; in any case, the symlinks work for me.)
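
The symlink step above can be scripted. A hedged sketch follows; the tensorrt_libs location and the 8.6.1 suffix are assumptions taken from this thread, so check them against your own strace output first:

```python
# Create the version-suffixed symlinks TF 2.15+ looks for, pointing at the
# unsuffixed .so.8 files that the PyPI tensorrt package actually ships.
import os

def link_versioned(libdir: str, suffix: str = "8.6.1") -> list[str]:
    """Make lib*.so.<suffix> -> lib*.so.8 links in libdir; return links created."""
    created = []
    for lib in ("libnvinfer", "libnvinfer_plugin"):
        src = os.path.join(libdir, f"{lib}.so.8")
        dst = os.path.join(libdir, f"{lib}.so.{suffix}")
        if os.path.exists(src) and not os.path.exists(dst):
            os.symlink(f"{lib}.so.8", dst)  # relative target, as `ln -s` would make
            created.append(dst)
    return created
```

Run it against your venv's tensorrt_libs directory, then make sure that directory is on LD_LIBRARY_PATH.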

Hi @mikechen66 ,

I am assuming you are using prebuilt binaries from PyPI. If you are building from source, please confirm.

Could you please confirm whether the CUDNN_PATH setting was done as below? Refer to step 4 in the attached documentation source for more details on configuring the GPU.

CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH

Also, please run the command nvidia-smi to check whether the Nvidia driver is installed.

Please ignore the TensorRT warning, as TensorRT is optional and the warning does not affect GPU support. Thanks!

It’s quite difficult to keep track of everything with all these changes over time, but a reasonable assumption is that tensorflow[and-cuda] will remain the preferred setup going forward. So, for the most recent tensorflow and CUDA:

pip install tensorflow[and-cuda]==2.16.0rc0

python3 -c "import tensorflow.compiler as tf_cc; \
print(tf_cc.tf2tensorrt._pywrap_py_utils.get_linked_tensorrt_version())"

(8, 6, 1)

Download the related tar.gz from https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/tars/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz , extract it, and add its lib directory to the environment variable LD_LIBRARY_PATH:

export CUDNN_PATH=$HOME/venv/lib/python3.11/site-packages/nvidia/cudnn
export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$HOME/repos/TensorRT-8.6.1.6/lib:$LD_LIBRARY_PATH
export TF_ENABLE_ONEDNN_OPTS=0

Try adding this command

!pip install tensorflow-gpu==2.8.0

it worked, thanks

Ubuntu 22.04 with Driver Version: 535.161.08, CUDA Version: 12.2. Fixed by following https://docs.nvidia.com/deeplearning/tensorrt/install-guide/index.html#installing-tar . I downloaded the archive with wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/secure/8.6.1/tars/TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0.tar.gz and set export LD_LIBRARY_PATH=/data/TensorRT-8.6.1.6/lib:$LD_LIBRARY_PATH (remember to substitute your own TensorRT path).

@zeke-john

Why not try this: strace -e open,openat python -c "import tensorflow as tf" 2>&1 | grep "libnvinfer\|TF-TRT" This will tell you which file tensorflow is looking for. Then find that file in either the tar.gz package or the tensorrt package on PyPI, add its folder to your LD_LIBRARY_PATH, and symlink the file if necessary.

Thank you for mentioning this technique. I am using TensorFlow 2.16.1 (latest) and faced the same problem. When I used the above tool, I found that libcudnn.so.8 was missing from tensorrt_libs, so I installed cuDNN. I also symlinked the .so.8 files to .so.8.6.1 and put the CUDA lib path, cuDNN lib path, and tensorrt_libs location in the LD_LIBRARY_PATH environment variable. It began to work after this.

I am attaching a TensorRT/TensorFlow compatibility matrix, hoping it might be useful:

TensorFlow version    TensorRT version
1.7                   3.0.4
1.11                  4
1.13                  5
2.1                   6
2.10                  7.2.2
2.16.1 (latest)       8.6.1 (latest stable)

The following snippet resolved the tensorrt problem completely on my machine.

mkdir -p $CONDA_PREFIX/etc/conda/activate.d

echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'TENSORRT_PATH=$(dirname $(python -c "import tensorrt;print(tensorrt.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$CONDA_PREFIX/lib/:$CUDNN_PATH/lib:$TENSORRT_PATH:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

source $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

I would like to thank ipaleka, KardRi, TXH2020, and the other contributors. After installing TensorRT 8.6.1.6, I solved the issue of “TF-TRT Warning: Could not find TensorRT”.

The installation is the same as described above, except for TensorRT 8.6.1.6.

Exporting

export LD_LIBRARY_PATH=/home/user/tf_tensorrt/TensorRT-8.6.1.6/lib:$LD_LIBRARY_PATH

print tf version

import tensorflow as tf 
print(tf.__version__)

2.16.1

print tensorrt version

import tensorrt as trt
print(trt.__version__)

8.6.1

However, after running the following script I hit a new issue: Successful NUMA node read from SysFS had negative value (-1).

import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))

…Successful NUMA node read from SysFS had negative value (-1),… [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

There are temporary workarounds here:

https://stackoverflow.com/questions/44232898/memoryerror-in-tensorflow-and-successful-numa-node-read-from-sysfs-had-negativ

https://gist.github.com/zrruziev/b93e1292bf2ee39284f834ec7397ee9f
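
The workaround in those links comes down to writing 0 into the GPU's numa_node file under sysfs (this needs root and resets at reboot). A hedged sketch, with the PCI bus id left as a placeholder you must replace with your own device's id (see lspci):

```python
# Write "0" into a sysfs numa_node file so TensorFlow stops reporting -1.
def set_numa_node(numa_node_file: str, node: int = 0) -> None:
    with open(numa_node_file, "w") as f:
        f.write(str(node))

# Example (run as root; the PCI id below is a placeholder):
# set_numa_node("/sys/bus/pci/devices/0000:01:00.0/numa_node")
```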

@zeke-john

Why not try this: strace -e open,openat python -c "import tensorflow as tf" 2>&1 | grep "libnvinfer\|TF-TRT" This would tell you what file tensorflow is looking for, and just find the file either from the targz package or tensorrt package on pypi, then add the folder into your LD_LIBRARY_PATH and softlink the file if necessary.

Where do I need to copy these 2 files to? e.g. TensorRT-8.6.1.6.Linux.x86_64-gnu.cuda-12.0. FYI, I am on Ubuntu 22.04 on WSL2 on Windows 11.