tensorflow: GeForce 3090 incompatibility with Nightly
System information
- Using a stock example script provided in TensorFlow
- OS: Linux Ubuntu 18.04
- TensorFlow installed from: nightly (tf-nightly-gpu)
- TensorFlow version: v1.12.1-45908-g9af48cb079
- Python version: 3.8
- CUDA 11.1, cuDNN 8.0.5.39
- GPU: GeForce RTX 3090 (24265 MiB)
Describe the current behavior: Training does not run on the GPU, and the following warning is shown:
Your CUDA software stack is old. We fallback to the NVIDIA driver for some compilation. Update your CUDA version to get the best performance. The ptxas error was: ptxas fatal : Value 'sm_86' is not defined for option 'gpu-name'
Describe the expected behavior: Training should run on the GPU.
Standalone code to reproduce the issue: Install the GeForce RTX 3090, then install the nightly GPU build:
conda create -n tf-n-gpu python=3.8
conda activate tf-n-gpu
pip install tf-nightly-gpu
pip install matplotlib
pip install IPython
Run: https://www.tensorflow.org/tutorials/generative/pix2pix
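After the environment is set up, a quick sanity check like the one below (a minimal sketch, not part of the original report) can confirm whether the nightly build sees the card; on an affected setup the sm_86 ptxas warning typically appears as soon as the first op runs on the GPU:

```python
import tensorflow as tf

# Report the build and the GPUs visible to the runtime.
print("TF version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices("GPU"))

# Run a trivial op on the GPU; if the CUDA toolkit is too old for
# the Ampere card, the ptxas 'sm_86' warning shows up here.
with tf.device("/GPU:0"):
    x = tf.random.normal((1024, 1024))
    y = tf.matmul(x, x)
print("Matmul ran on:", y.device)
```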
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 22 (8 by maintainers)
I have the same problem. RTX 3090, tensorflow-gpu 2.4.0rc3, CUDA 11.1.105 (tried 11.0, same error), cuDNN 8.0.5.39. It showed
Your CUDA software stack is old. We fallback to the NVIDIA driver for some compilation. Update your CUDA version to get the best performance. The ptxas error was: ptxas fatal : Value 'sm_86' is not defined for option 'gpu-name'
a few hundred times and then started training. The training seemed fine, though.
I got the same error many times while running the notebook, but finally the training is running on the GPU:
I additionally put this code at the beginning of the notebook:
see: https://github.com/tensorflow/tensorflow/issues/45635#event-4131192988
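The snippet itself is not quoted in this thread; based on the linked issue, a commonly used workaround is enabling GPU memory growth, sketched here as an assumption rather than a verbatim copy of the commenter's code:

```python
import tensorflow as tf

# Assumed workaround (not quoted from the comment): enable memory
# growth so TensorFlow does not pre-allocate the full 24 GiB on the
# RTX 3090 at startup.
gpus = tf.config.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
print("Memory growth enabled for:", gpus)
```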
GPU: RTX 3060 Ti, Ubuntu Server 20.04, driver version 455.45.01, CUDA cuda_11.0.3_450.51.06, cuDNN cudnn-11.0-linux-x64-v8.0.2.39, TensorFlow 2.4.0
I followed the directions in this article: "Install TensorFlow & PyTorch for the RTX 3090, 3080, 3070". So far it works pretty well and lets me use my 3080 with TF 2.3.
Upgrading to tensorflow==2.4.0rc2 fixed this issue for me.
It works with the latest nightly-gpu docker image (https://hub.docker.com/layers/tensorflow/tensorflow/nightly-gpu/images/sha256-f9c8333811c5426be605352f307d665a089343cd699e0b49da128dbde18008ad?context=explore) though.