docker-python: [BUG] Exception: Your CUDA environment is invalid.
Recently I got this error while trying to install cupy on a Kaggle GPU Kernel:
Collecting cupy
Downloading cupy-7.4.0.tar.gz (3.7 MB)
|████████████████████████████████| 3.7 MB 3.4 MB/s eta 0:00:01
ERROR: Command errored out with exit status 1:
command: /opt/conda/bin/python3.7 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-jzxcfeco/cupy/setup.py'"'"'; __file__='"'"'/tmp/pip-install-jzxcfeco/cupy/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-jzxcfeco/cupy/pip-egg-info
cwd: /tmp/pip-install-jzxcfeco/cupy/
Complete output (48 lines):
Options: {'package_name': 'cupy', 'long_description': None, 'wheel_libs': [], 'wheel_includes': [], 'no_rpath': False, 'profile': False, 'linetrace': False, 'annotate': False, 'no_cuda': False, 'use_hip': False}
-------- Configuring Module: cuda --------
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
/opt/conda/compiler_compat/ld: /usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/libcuda.so: file not recognized: file truncated
collect2: error: ld returned 1 exit status
Cannot build a stub file.
Original error: command 'g++' failed with exit status 1
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-jzxcfeco/cupy/setup.py", line 129, in <module>
ext_modules = cupy_setup_build.get_ext_modules()
File "/tmp/pip-install-jzxcfeco/cupy/cupy_setup_build.py", line 744, in get_ext_modules
extensions = make_extensions(arg_options, compiler, use_cython)
File "/tmp/pip-install-jzxcfeco/cupy/cupy_setup_build.py", line 492, in make_extensions
raise Exception('Your CUDA environment is invalid. '
Exception: Your CUDA environment is invalid. Please check above error log.
************************************************************
* CuPy Configuration Summary *
************************************************************
Build Environment:
Include directories: ['/usr/local/cuda/include']
Library directories: ['/usr/local/cuda/lib64']
nvcc command : ['/usr/local/cuda/bin/nvcc']
Environment Variables:
CFLAGS : (none)
LDFLAGS : (none)
LIBRARY_PATH : (none)
CUDA_PATH : (none)
NVTOOLSEXT_PATH : (none)
NVCC : (none)
ROCM_HOME : (none)
Modules:
cuda : No
-> Cannot link libraries: ['cublas', 'cuda', 'cudart', 'cufft', 'curand', 'cusparse', 'nvrtc']
-> Check your LDFLAGS environment variable.
ERROR: CUDA could not be found on your system.
Please refer to the Installation Guide for details:
https://docs-cupy.chainer.org/en/stable/install.html
************************************************************
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
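For anyone landing here with the same failure: the source build dies while g++ links against the driver stub (the libcuda.so shipped in the image is truncated), so a common workaround, separate from fixing the image itself, is to install CuPy's prebuilt wheel for the toolkit that is actually present instead of building from source. A minimal sketch, assuming the /usr/local/cuda layout shown in the summary above and the cupy-cudaXXX wheel naming that CuPy 7.x publishes (check that a wheel exists for the detected toolkit before relying on this):

import re
import subprocess
import sys

# Ask the bundled nvcc (the summary above shows it at /usr/local/cuda/bin/nvcc)
# which CUDA toolkit version the image ships.
out = subprocess.run(["/usr/local/cuda/bin/nvcc", "--version"],
                     capture_output=True, text=True, check=True).stdout
match = re.search(r"release (\d+)\.(\d+)", out)
if match is None:
    sys.exit("Could not detect the CUDA toolkit version from nvcc output.")

major, minor = match.groups()
wheel = f"cupy-cuda{major}{minor}"  # e.g. cupy-cuda101 for CUDA 10.1

# Install the matching prebuilt wheel; this skips the from-source build and the
# g++ link against the truncated libcuda.so stub that fails in the log above.
subprocess.run([sys.executable, "-m", "pip", "install", wheel], check=True)

Prebuilt wheels only exist for the CUDA versions CuPy supports, so if the detected toolkit has no matching wheel this stops with a pip "no matching distribution" error instead of falling back to another source build.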
About this issue
- State: closed
- Created 4 years ago
- Comments: 21 (12 by maintainers)
Commits related to this issue
- Use consistent conda channels list. See: https://github.com/Kaggle/docker-python/issues/791#issuecomment-632223272 — committed to Kaggle/docker-python by rosbo 4 years ago
About pre-installing RAPIDS packages: I tried in December 2019 but ran into this issue: https://github.com/Kaggle/docker-python/issues/594#issuecomment-563498314
Let me check whether this dependency conflict issue has been fixed.
By the way, many people now use RAPIDS on Kaggle by relying on our manually uploaded installation dataset: https://www.kaggle.com/cdeotte/rapids/kernels?sortBy=voteCount&group=everyone&pageSize=20&datasetId=492658 It would be really nice if it could be preinstalled in Kaggle's GPU Docker image. https://rapids.ai/start.html