tensorflow: "Inconsistent CUDA toolkit path: /usr vs /usr/lib" when running ./configure

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Debian 10
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: No
  • TensorFlow installed from (source or binary): source
  • TensorFlow version: v2.2.0 (2b96f3662bd776e277f86997659e61046b56c315)
  • Python version: 3.7.3
  • Installed using virtualenv? pip? conda?: No
  • Bazel version (if compiling from source): 2.0.0
  • GCC/Compiler version (if compiling from source): 8.3.0
  • CUDA/cuDNN version: 10.1/7.6.5
  • GPU model and memory: GeForce GTX 1070 and 8192 MB

Describe the problem

I receive the error “Inconsistent CUDA toolkit path: /usr vs /usr/lib” when running ./configure. I believe I should not receive the error.

Any other info / logs

Console output:

~/tensorflow % ./configure
You have bazel 2.0.0 installed.
Please specify the location of python. [Default is /usr/bin/python]: /usr/bin/python3


Found possible Python library paths:
  /usr/local/lib/python3.7/dist-packages
  /usr/lib/python3.7/dist-packages
  /usr/lib/python3/dist-packages
Please input the desired Python library path to use.  Default is [/usr/local/lib/python3.7/dist-packages]

Do you wish to build TensorFlow with OpenCL SYCL support? [y/N]: 
No OpenCL SYCL support will be enabled for TensorFlow.

Do you wish to build TensorFlow with ROCm support? [y/N]: 
No ROCm support will be enabled for TensorFlow.

Do you wish to build TensorFlow with CUDA support? [y/N]: y
CUDA support will be enabled for TensorFlow.

Do you wish to build TensorFlow with TensorRT support? [y/N]: 
No TensorRT support will be enabled for TensorFlow.

Inconsistent CUDA toolkit path: /usr vs /usr/lib
Asking for detailed CUDA configuration... ^C

At the time of writing, the error comes from third_party/gpus/find_cuda_config.py:292. The error occurs because, on my system, cuda_binary_dir evaluates to /usr/bin, while nvvm_library_dir evaluates to /usr/lib/nvidia-cuda-toolkit/libdevice. Although I’m using Debian 10, which isn’t officially supported, this error can also occur on Ubuntu 20.04 if the user installed nvcc via the nvidia-cuda-toolkit package, which installs nvcc in two locations:

  • /usr/bin/nvcc
  • /usr/lib/nvidia-cuda-toolkit/bin/nvcc
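The check in question can be sketched as follows (a minimal reconstruction for illustration; the function name is mine, and the real find_cuda_config.py code differs in detail). It derives the toolkit root twice, once from nvcc's bin/ directory and once from the libdevice directory, and errors out when the two roots disagree:

```python
import os

def check_cuda_toolkit_paths(cuda_binary_dir, nvvm_library_dir):
    # Simplified sketch of the heuristic in third_party/gpus/find_cuda_config.py.
    # The toolkit root is inferred as the parent of bin/ and as the
    # grandparent of the libdevice directory; both must agree.
    toolkit_from_nvcc = os.path.normpath(os.path.join(cuda_binary_dir, ".."))
    toolkit_from_nvvm = os.path.normpath(os.path.join(nvvm_library_dir, "..", ".."))
    if toolkit_from_nvcc != toolkit_from_nvvm:
        raise RuntimeError("Inconsistent CUDA toolkit path: %s vs %s"
                           % (toolkit_from_nvcc, toolkit_from_nvvm))
    return toolkit_from_nvcc

# A single-directory .run-file install satisfies the heuristic:
#   /usr/local/cuda/bin and /usr/local/cuda/nvvm/libdevice -> /usr/local/cuda
# The Debian/Ubuntu nvidia-cuda-toolkit layout does not:
#   /usr/bin -> /usr, but /usr/lib/nvidia-cuda-toolkit/libdevice -> /usr/lib
```

This is why the message reads "/usr vs /usr/lib" with the split Debian/Ubuntu packaging.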

The solution I tentatively suggest is to remove the consistency check from find_cuda_config.py, because it is merely a heuristic: it might cause ./configure to proceed when it should exit early, or, as in this case, to exit early when it should proceed.


Edit: As pointed out by @tensorfoo and @ambertide, removing the consistency check doesn’t work. A more reliable workaround is to install the cuda toolkit using Nvidia’s .run file installer.

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 2
  • Comments: 15 (5 by maintainers)

Most upvoted comments

For whoever might have had the same problem: I was able to compile TensorFlow by uninstalling the nvidia-cuda-toolkit package in my package manager and installing CUDA using Nvidia’s .run file installer. It may have also been possible to use their .deb package, but it required me to downgrade my graphics card driver. AFAIK, there’s no practical reason for Nvidia’s package to require a downgrade; it’s just slightly ham-fisted dependency management on Nvidia’s part.

Although I was able to work around my issue, I don’t consider it properly solved because, seeing as Ubuntu is officially supported by TensorFlow, one would expect it to work with the CUDA drivers available directly from Ubuntu’s software repository.

@mohantym I just gave up.

If you aren’t root, you can install the toolkit into a different directory where you have write permission.

Would it be possible for you to install NVIDIA’s CUDA Toolkit in a single directory from NVIDIA’s .run package with:

$ wget https://developer.nvidia.com/compute/cuda/10.1/Prod/local_installers/cuda_10.1.105_418.39_linux.run
$ sudo sh cuda_10.1.105_418.39_linux.run --silent --toolkit --toolkitpath=/usr/local/cuda
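For a machine where you don’t have root access, the same installer can tentatively be pointed at any writable directory instead. This is only a sketch: the installer file name matches CUDA 10.1, the paths are examples, and the TF_CUDA_PATHS export assumes your TensorFlow checkout’s find_cuda_config.py honors that variable.

```shell
# Non-root variant: install only the toolkit into a user-writable directory.
sh cuda_10.1.105_418.39_linux.run --silent --toolkit --toolkitpath="$HOME/cuda-10.1"
# Assumption: your TensorFlow version's find_cuda_config.py reads TF_CUDA_PATHS.
export TF_CUDA_PATHS="$HOME/cuda-10.1"
./configure
```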

Honestly, this just points in the direction of the unnecessarily hard procedure to install, build, or do anything with TensorFlow, or anything ML-oriented in general. I have also run into this issue while trying to build 1.14 with CUDA 10.1/cuDNN 7.6.5, and the path that brought me here was one of frustration: twice I had to reinstall the Nvidia drivers, my resolution fell to 1024 at one point, and my apt completely broke, twice. Who knows, maybe I am just frustrated and unable to perform basic tasks, but it seems to me that this entire process is a tad harder than it should be.

Thank you for the reply. Unfortunately I’m not an admin on the machine, so I was hoping for some workaround on the TF side. In the meantime I’ll try to contact an admin.