onnxruntime: Jetson Device | subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/workspace/onnxruntime/build/Linux/Release', '--config', 'Release', '--', '-j6']' returned non-zero exit status 2.

Describe the bug

Hi all,

I was trying to build onnxruntime from source on a Jetson device, following the steps from here: https://onnxruntime.ai/docs/how-to/build/eps.html#nvidia-jetson-tx1tx2nanoxavier

I can successfully build the whl file with these commands:

git clone --recursive https://github.com/microsoft/onnxruntime
export CUDACXX="/usr/local/cuda/bin/nvcc"
export PATH="/usr/local/cuda/bin:${PATH}"
sudo apt install -y --no-install-recommends \
   build-essential software-properties-common libopenblas-dev \
   libpython3.8-dev python3-pip python3-dev python3-setuptools python3-wheel

./build.sh --config Release --update --build --parallel --build_wheel \
 --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu

However, when I used it in a Python script, it raised an error.

Traceback (most recent call last):
  File "infererence_centernet_onnxruntime.py", line 10, in <module>
    sess = ort.InferenceSession(onnxpath)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 324, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/usr/local/lib/python3.8/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 362, in _create_inference_session
    sess.initialize_session(providers, provider_options, disabled_optimizers)
RuntimeError: /workspace/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:122 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] /workspace/onnxruntime/onnxruntime/core/providers/cuda/cuda_call.cc:116 bool onnxruntime::CudaCall(ERRTYPE, const char*, const char*, ERRTYPE, const char*) [with ERRTYPE = cudaError; bool THRW = true] CUDA failure 35: CUDA driver version is insufficient for CUDA runtime version ; GPU=127 ; hostname=043f3cc5e143 ; expr=cudaSetDevice(info_.device_id); 
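As an aside, while the driver mismatch is being sorted out, a session can be forced onto a provider that actually works, falling back to CPU. A minimal sketch (the provider names are onnxruntime's standard ones; the small ranking helper is my own illustration, not part of the API):

```python
# Illustrative helper (not part of onnxruntime): order the providers that
# onnxruntime reports as available, preferring TensorRT, then CUDA, then CPU.
PREFERRED = ["TensorrtExecutionProvider", "CUDAExecutionProvider", "CPUExecutionProvider"]

def choose_providers(available):
    chosen = [p for p in PREFERRED if p in available]
    # CPU is always a safe last resort, even if the query returned nothing.
    return chosen or ["CPUExecutionProvider"]

# Usage (assumes onnxruntime is installed; onnx_path is a placeholder):
# import onnxruntime as ort
# sess = ort.InferenceSession(onnx_path,
#                             providers=choose_providers(ort.get_available_providers()))
```

This only works around the symptom: if the CUDA provider is chosen but the driver is too old, session creation still fails with the error above.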

Hence, I tried building onnxruntime with TensorRT instead.

Built from source with these commands:

git clone --recursive https://github.com/microsoft/onnxruntime
export CUDACXX="/usr/local/cuda/bin/nvcc"
export PATH="/usr/local/cuda/bin:${PATH}"
sudo apt install -y --no-install-recommends \
   build-essential software-properties-common libopenblas-dev \
   libpython3.8-dev python3-pip python3-dev python3-setuptools python3-wheel

./build.sh --config Release --update --build --parallel --build_wheel \
 --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
 --tensorrt_home /usr/lib/aarch64-linux-gnu

Error message during the build stage:

[ 49%] Linking CXX static library libonnxruntime_optimizer.a
[ 49%] Built target onnxruntime_optimizer
make: *** [Makefile:166: all] Error 2
Traceback (most recent call last):
  File "/workspace/onnxruntime/tools/ci_build/build.py", line 2299, in <module>
    sys.exit(main())
  File "/workspace/onnxruntime/tools/ci_build/build.py", line 2220, in main
    build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
  File "/workspace/onnxruntime/tools/ci_build/build.py", line 1136, in build_targets
    run_subprocess(cmd_args, env=env)
  File "/workspace/onnxruntime/tools/ci_build/build.py", line 612, in run_subprocess
    return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
  File "/workspace/onnxruntime/tools/python/util/run.py", line 42, in run
    completed_process = subprocess.run(
  File "/usr/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/workspace/onnxruntime/build/Linux/Release', '--config', 'Release', '--', '-j6']' returned non-zero exit status 2.

System information

  • OS Platform and Distribution: Linux 20.04, based on JetPack 4.4.1
  • ONNX Runtime installed from (source or binary): Source
  • ONNX Runtime version: 1.8 (presumably)
  • Python version: Python3.8
  • GCC/Compiler version (if compiling from source): g++ (Ubuntu/Linaro 8.4.0-3ubuntu2) 8.4.0
  • CUDA/cuDNN version: 10.2
  • Device : AGX Xavier
  • CMake version: 3.21.1

Goal

To build the whl file for python3.8

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 19 (11 by maintainers)

Most upvoted comments

Thanks. I suspect something is wrong with the Jetson/JetPack installation on your first device.

When you see “CUDA driver version is insufficient for CUDA runtime version”, please:

  1. update your GPU driver to the latest.
  2. reboot. Must reboot!!!

Then try it again.

This often happens when the system has been updated but has not yet been rebooted.
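On a Jetson, a quick way to compare what is installed is to look at the L4T release (which ships the GPU driver) against the CUDA toolkit version. A sketch, assuming the JetPack default locations (they may differ on other setups):

```shell
# L4T release shipped by JetPack (this is where the GPU driver comes from).
head -n1 /etc/nv_tegra_release 2>/dev/null || echo "nv_tegra_release not found"
# CUDA toolkit/runtime version, if nvcc is on PATH.
command -v nvcc >/dev/null && nvcc --version | tail -n1 || echo "nvcc not found"
```

If the CUDA runtime is newer than what the installed L4T driver supports, you get exactly the "driver version is insufficient" failure above.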

If you are building from source, don't modify CMakeLists.txt; instead, add the extra build option --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES='72'
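For reference, the full rebuild command with that option added would look like this (an untested sketch, reusing the same paths as in the original report; 72 is the CUDA compute capability of the Xavier's Volta GPU):

```shell
./build.sh --config Release --update --build --parallel --build_wheel \
  --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
  --tensorrt_home /usr/lib/aarch64-linux-gnu \
  --cmake_extra_defines CMAKE_CUDA_ARCHITECTURES=72
```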

Yes, that’s what I was looking for.

like this:

./build.sh --config Release --update --build --parallel --build_wheel \
  --use_tensorrt --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu \
  --tensorrt_home /usr/lib/aarch64-linux-gnu >& 1.log
grep -C3 'error:' 1.log