tensorflow: TRT Converter not working in 2.7.0 version of official image


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20.04
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device:
  • TensorFlow installed from (source or binary): docker image tensorflow/tensorflow:2.7.0-gpu
  • TensorFlow version (use command below): v2.7.0-rc1-69-gc256c071bb2 2.7.0
  • Python version: Python 3.8.10
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory: Nvidia GeForce 1050

You can collect some of this information using our environment capture script. You can also obtain the TensorFlow version with:

  1. TF 1.0: python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
  2. TF 2.0: python -c "import tensorflow as tf; print(tf.version.GIT_VERSION, tf.version.VERSION)"

Describe the current behavior
I tried to optimize a TensorFlow SavedModel using trt_convert. It works in the official tensorflow/tensorflow:2.3.0-gpu and tensorflow/tensorflow:2.4.0-gpu images, yet fails in tensorflow/tensorflow:2.7.0-gpu.

Describe the expected behavior
trt_convert.TrtGraphConverterV2 should work out of the box, as in previous versions of the official builds.

Contributing

  • Do you want to contribute a PR? (yes/no):
  • Briefly describe your candidate solution (if contributing):

Standalone code to reproduce the issue
Provide a reproducible test case that is the bare minimum necessary to generate the problem. If possible, please share a link to Colab/Jupyter/any notebook.

from tensorflow.python.compiler.tensorrt import trt_convert as trt

input_saved_model_dir = "directory/of/tensorflow/saved_model"
output_saved_model_dir = "tensorrt-test"

# Build the TF-TRT converter for the SavedModel and run the conversion.
converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
trt_graph = converter.convert()

# Write the converted model back out as a SavedModel.
converter.save(output_saved_model_dir)
print("====saved====")
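
For reference, once the conversion succeeds the optimized model loads back like any other SavedModel. A minimal sketch ("serving_default" is the usual default signature key and is an assumption here; adjust it to your model):

import tensorflow as tf

# Load the TF-TRT optimized SavedModel and grab its inference signature.
loaded = tf.saved_model.load("tensorrt-test")
infer = loaded.signatures["serving_default"]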

Other info / logs
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.

2021-12-23 09:10:04.726658: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-12-23 09:10:04.726892: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda/lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
2021-12-23 09:10:04.726909: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:35] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
ERROR:tensorflow:Tensorflow needs to be built with TensorRT support enabled to allow TF-TRT to operate.
Traceback (most recent call last):
  File "trt.py", line 10, in <module>
    converter = trt.TrtGraphConverterV2(input_saved_model_dir=input_saved_model_dir)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1009, in __init__
    _check_trt_version_compatibility()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 221, in _check_trt_version_compatibility
    raise RuntimeError("Tensorflow has not been built with TensorRT support.")
RuntimeError: Tensorflow has not been built with TensorRT support.

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 28 (10 by maintainers)

Most upvoted comments

This one seems to be still causing headaches…

Boris, TensorFlow 2.7.0 (from PyPI) was built with TensorRT 7.2.2 (not 8.x): it needs libnvinfer.so.7 and libnvinfer_plugin.so.7, but you have libnvinfer8 installed.

TensorRT does not support forward compatibility: a TensorFlow build linked against the .so.7 libraries will not load the .so.8 ones.
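
A minimal sketch of the probe TensorFlow's dso_loader effectively performs at import time; it dlopens the exact sonames the wheel was linked against, so having libnvinfer.so.8 on the system does not satisfy a request for libnvinfer.so.7:

import ctypes

# Probe for the exact TensorRT sonames TF 2.7.0 (PyPI) was linked against.
for soname in ("libnvinfer.so.7", "libnvinfer_plugin.so.7"):
    try:
        ctypes.CDLL(soname)
        print(soname, "-> found")
    except OSError as err:
        print(soname, "-> NOT found:", err)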

To test that TensorFlow can find the TensorRT libraries, you can do the following:

import tensorflow as tf
import tensorflow.compiler as tf_cc

# True only if TensorFlow could dlopen the TensorRT libraries.
tf_cc.tf2tensorrt._pywrap_py_utils.is_tensorrt_enabled()

# The TRT version found at runtime vs. the version TF was compiled against.
print("loaded trt ver:", tf_cc.tf2tensorrt._pywrap_py_utils.get_loaded_tensorrt_version())
print("linked trt ver:", tf_cc.tf2tensorrt._pywrap_py_utils.get_linked_tensorrt_version())

If you do not have libnvinfer.so.7 and libnvinfer_plugin.so.7, it will output:

>>> cc.tf2tensorrt._pywrap_py_utils.is_tensorrt_enabled()
2022-01-06 05:37:08.526925: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory
2022-01-06 05:37:08.527000: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory
2022-01-06 05:37:08.527013: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:35] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
False

So, either install TRT 7.2.2 (or 7.2.3) or build TF with TRT 8.x support from source.
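
As a quick sanity check (assuming the optional TensorRT Python bindings are installed alongside the libraries), you can confirm which TensorRT version the system actually provides:

import tensorrt

# Reports the installed TensorRT version; an 8.x value here explains
# why the .so.7 libraries requested by TF 2.7.0 cannot be found.
print(tensorrt.__version__)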

How to build TF with TRT support from source is described here: https://apivovarov.medium.com/run-tensorflow-2-object-detection-models-with-tensorrt-on-jetson-xavier-using-tf-c-api-e34548818ac6