onnxruntime_backend: cudnn_home not valid during build
Description
I am not able to build the ONNX Runtime backend. I am following the build instructions in the README, but the build fails at Step 17.
Triton Information
Main branch, building for Triton version 21.02.
To Reproduce
I am running DGX OS 5 (Ubuntu 20.04).
cmake -DCMAKE_INSTALL_PREFIX:PATH=`pwd`/install -DTRITON_BUILD_ONNXRUNTIME_VERSION=1.6.0 -DTRITON_BUILD_CONTAINER_VERSION=21.02 ..
make install
Output:
Step 17/24 : RUN ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda"
 ---> Running in 3360f12bb769
2021-03-18 11:01:00,463 build [ERROR] - cuda_home and cudnn_home paths must be specified and valid.
cuda_home='/usr/local/cuda' valid=True. cudnn_home='None' valid=False
The command '/bin/sh -c ./build.sh ${COMMON_BUILD_ARGS} --update --build --use_cuda --cuda_home "/usr/local/cuda"' returned a non-zero code: 1
make[2]: *** [CMakeFiles/ort_target.dir/build.make:81: onnxruntime/lib/libonnxruntime.so.1.6.0] Error 1
make[1]: *** [CMakeFiles/Makefile2:158: CMakeFiles/ort_target.dir/all] Error 2
make: *** [Makefile:149: all] Error 2
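For context, the error comes from ONNX Runtime's build script, which requires both cuda_home and cudnn_home to be set when --use_cuda is given. A minimal sketch of the same build.sh step with cudnn_home passed explicitly is shown below; the cuDNN path is only an assumption for a typical Ubuntu/NGC install and may differ on your system.

# Sketch only: same step as the generated Dockerfile, but with an explicit
# --cudnn_home; /usr/lib/x86_64-linux-gnu is an assumed cuDNN location.
./build.sh ${COMMON_BUILD_ARGS} --update --build \
    --use_cuda --cuda_home "/usr/local/cuda" \
    --cudnn_home "/usr/lib/x86_64-linux-gnu"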
Expected behavior
I expect the build to succeed.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 18 (5 by maintainers)
https://github.com/triton-inference-server/onnxruntime_backend/blob/main/tools/gen_ort_dockerfile.py#L93
The build relies on getting CUDNN_VERSION from the base container here. I checked that the variable is indeed present in the nvcr.io/nvidia/tritonserver:21.02-py3-min container. Can you share the Dockerfile generated by gen_ort_dockerfile.py in your build?

The comment by @CoderHam that is linked is the key.
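If CUDNN_VERSION is not exported by the base image being built against, that would explain build.sh seeing cudnn_home='None'. A quick way to check the variable, using the image tag mentioned above (assuming it can be pulled locally; this is a generic check, not part of the documented build flow):

# Print the CUDNN_VERSION environment variable from the 21.02 py3-min base image.
docker run --rm nvcr.io/nvidia/tritonserver:21.02-py3-min \
    bash -c 'echo "CUDNN_VERSION=${CUDNN_VERSION}"'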