onnxruntime: [Build] Unable to suppress unused variable
Describe the issue
Hi OnnxRuntime team, I’m trying to build OnnxRuntime on the Jetson platform, but the build keeps failing due to an unused variable error:
./build.sh --config Release --skip_submodule_sync --parallel --build_shared_lib --build_dir /home/nvidia/triton/onnxruntime/build --update --build --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu --use_tensorrt --tensorrt_home /usr/src/tensorrt --cmake_extra_defines CMAKE_CXX_FLAGS=-Wunused-variable --cmake_extra_defines 'CMAKE_CUDA_ARCHITECTURES=53;62;72;87'
...
...
...
-- Found PythonInterp: /usr/bin/python3 (found version "3.8.10")
Generated: /raid/home/nvidia/triton/onnxruntime/build/Release/_deps/onnx-build/onnx/onnx-ml.proto
Generated: /raid/home/nvidia/triton/onnxruntime/build/Release/_deps/onnx-build/onnx/onnx-operators-ml.proto
Generated: /raid/home/nvidia/triton/onnxruntime/build/Release/_deps/onnx-build/onnx/onnx-data.proto
--
-- ******** Summary ********
-- CMake version : 3.26.3
-- CMake command : /usr/bin/cmake
-- System : Linux
-- C++ compiler : /usr/bin/c++
-- C++ compiler version : 9.4.0
-- CXX flags : -Wunused-variable -ffunction-sections -fdata-sections -DCPUINFO_SUPPORTED -Wnon-virtual-dtor
-- Build type : Release
-- Compile definitions : ORT_ENABLE_STREAM;EIGEN_MPL2_ONLY;_GNU_SOURCE;__STDC_FORMAT_MACROS
-- CMAKE_PREFIX_PATH : /home/nvidia/triton/onnxruntime/build/Release/installed
-- CMAKE_INSTALL_PREFIX : /usr/local
-- CMAKE_MODULE_PATH : /raid/home/nvidia/triton/onnxruntime/cmake/external
--
-- ONNX version : 1.14.0rc1
-- ONNX NAMESPACE : onnx
-- ONNX_USE_LITE_PROTO : OFF
-- USE_PROTOBUF_SHARED_LIBS : OFF
-- Protobuf_USE_STATIC_LIBS : ON
-- ONNX_DISABLE_EXCEPTIONS : OFF
-- ONNX_WERROR : OFF
-- ONNX_BUILD_TESTS : OFF
-- ONNX_BUILD_BENCHMARKS : OFF
--
-- Protobuf compiler :
-- Protobuf includes :
-- Protobuf libraries :
-- BUILD_ONNX_PYTHON : OFF
Finished fetching external dependencies
-- Performing Test HAS_UNUSED_BUT_SET_PARAMETER
-- Performing Test HAS_UNUSED_BUT_SET_PARAMETER - Success
-- Performing Test HAS_UNUSED_BUT_SET_VARIABLE
-- Performing Test HAS_UNUSED_BUT_SET_VARIABLE - Success
-- Performing Test HAS_UNUSED_VARIABLE
-- Performing Test HAS_UNUSED_VARIABLE - Success
...
...
...
[ 56%] Building CXX object CMakeFiles/onnxruntime_providers.dir/raid/home/nvidia/triton/onnxruntime/onnxruntime/core/providers/cpu/math/gemm.cc.o
[ 56%] Building CXX object CMakeFiles/onnxruntime_providers_cuda.dir/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc.o
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc: In instantiation of ‘onnxruntime::common::Status onnxruntime::contrib::cuda::Attention<T>::ComputeInternal(onnxruntime::OpKernelContext*) const [with T = onnxruntime::MLFloat16]’:
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.h:21:10: required from here
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc:105:8: error: unused variable ‘is_mask_1d_key_seq_len_start’ [-Werror=unused-variable]
105 | bool is_mask_1d_key_seq_len_start = parameters.mask_type == AttentionMaskType::MASK_1D_KEY_SEQ_LEN_START;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc: In instantiation of ‘onnxruntime::common::Status onnxruntime::contrib::cuda::Attention<T>::ComputeInternal(onnxruntime::OpKernelContext*) const [with T = float]’:
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.h:21:10: required from here
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc:105:8: error: unused variable ‘is_mask_1d_key_seq_len_start’ [-Werror=unused-variable]
cc1plus: all warnings being treated as errors
make[2]: *** [CMakeFiles/onnxruntime_providers_cuda.dir/build.make:2084: CMakeFiles/onnxruntime_providers_cuda.dir/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
...
...
...
[ 66%] Linking CXX static library libonnxruntime_providers.a
[ 66%] Built target onnxruntime_providers
make: *** [Makefile:166: all] Error 2
Traceback (most recent call last):
File "/home/nvidia/triton/onnxruntime/tools/ci_build/build.py", line 2597, in <module>
sys.exit(main())
File "/home/nvidia/triton/onnxruntime/tools/ci_build/build.py", line 2493, in main
build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
File "/home/nvidia/triton/onnxruntime/tools/ci_build/build.py", line 1432, in build_targets
run_subprocess(cmd_args, env=env)
File "/home/nvidia/triton/onnxruntime/tools/ci_build/build.py", line 779, in run_subprocess
return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
File "/raid/home/nvidia/triton/onnxruntime/tools/python/util/run.py", line 49, in run
completed_process = subprocess.run(
File "/usr/lib/python3.8/subprocess.py", line 516, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/bin/cmake', '--build', '/home/nvidia/triton/onnxruntime/build/Release', '--config', 'Release', '--', '-j4']' returned non-zero exit status 2.
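For reference, the diagnostic can be reproduced outside the onnxruntime tree. The sketch below is a minimal standalone example (illustrative names only, not the onnxruntime code), assuming -Wunused-variable together with warnings-as-errors as in this build, plus two common ways to keep such a variable without tripping the error:

// Minimal standalone sketch; illustrative names only, not the onnxruntime sources.
// Assumed compile command: g++ -std=c++17 -Wunused-variable -Werror=unused-variable repro.cc
#include <cstdio>

enum class MaskType { kNone, k1DKeySeqLenStart };

int compute(MaskType mask_type) {
  // A local whose value is never read would trip the diagnostic:
  //   bool is_1d_start = mask_type == MaskType::k1DKeySeqLenStart;  // error under -Werror

  // Two common ways to keep such a variable without the warning:
  [[maybe_unused]] bool kept_for_readability =
      mask_type == MaskType::k1DKeySeqLenStart;  // C++17 attribute
  bool discarded = (mask_type == MaskType::kNone);
  (void)discarded;  // an explicit discard counts as a use
  return 0;
}

int main() {
  std::printf("%d\n", compute(MaskType::k1DKeySeqLenStart));
  return 0;
}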
Urgency
We are trying to build against the rel-1.15.0 branch, and I would like to confirm that it builds before the release date.
Target platform
aarch64
Build script
./build.sh --config Release --skip_submodule_sync --parallel --build_shared_lib --build_dir /home/nvidia/triton/onnxruntime/build --update --build --use_cuda --cuda_home /usr/local/cuda --cudnn_home /usr/lib/aarch64-linux-gnu --use_tensorrt --tensorrt_home /usr/src/tensorrt --cmake_extra_defines CMAKE_CXX_FLAGS=-Wunused-variable --cmake_extra_defines 'CMAKE_CUDA_ARCHITECTURES=53;62;72;87'
Error / output
[ 56%] Building CXX object CMakeFiles/onnxruntime_providers_cuda.dir/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc.o
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc: In instantiation of ‘onnxruntime::common::Status onnxruntime::contrib::cuda::Attention<T>::ComputeInternal(onnxruntime::OpKernelContext*) const [with T = onnxruntime::MLFloat16]’:
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.h:21:10: required from here
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc:105:8: error: unused variable ‘is_mask_1d_key_seq_len_start’ [-Werror=unused-variable]
105 | bool is_mask_1d_key_seq_len_start = parameters.mask_type == AttentionMaskType::MASK_1D_KEY_SEQ_LEN_START;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc: In instantiation of ‘onnxruntime::common::Status onnxruntime::contrib::cuda::Attention<T>::ComputeInternal(onnxruntime::OpKernelContext*) const [with T = float]’:
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.h:21:10: required from here
/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc:105:8: error: unused variable ‘is_mask_1d_key_seq_len_start’ [-Werror=unused-variable]
cc1plus: all warnings being treated as errors
make[2]: *** [CMakeFiles/onnxruntime_providers_cuda.dir/build.make:2084: CMakeFiles/onnxruntime_providers_cuda.dir/raid/home/nvidia/triton/onnxruntime/onnxruntime/contrib_ops/cuda/bert/attention.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
Visual Studio Version
No response
GCC / Compiler Version
No response
About this issue
- State: closed
- Created a year ago
- Comments: 20 (20 by maintainers)
Commits related to this issue
- fix unused var warning in contrib_ops/cuda/bert/attention.cc (#16010) fix https://github.com/microsoft/onnxruntime/issues/16000 — committed to microsoft/onnxruntime by jywu-msft a year ago
- Fix compilation error due to missing ORT_UNUSED_VARIABLE definition See https://github.com/microsoft/onnxruntime/issues/16000#issuecomment-1562265152 for details — committed to traversaro/onnxruntime by traversaro a year ago
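The last commit above mentions an ORT_UNUSED_VARIABLE definition, but the macro itself is not shown in this thread. Purely as a non-authoritative sketch, such helpers are usually a thin cast-to-void wrapper (the name below is a placeholder; the real definition in onnxruntime may differ):

// Hypothetical sketch only; the actual ORT_UNUSED_VARIABLE may be defined differently.
#define ORT_UNUSED_VARIABLE_SKETCH(x) ((void)(x))

int example(int flags) {
  bool debug_only = (flags & 0x1) != 0;
  ORT_UNUSED_VARIABLE_SKETCH(debug_only);  // the cast to void counts as a use
  return flags;
}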
I created a PR with a targeted fix: https://github.com/microsoft/onnxruntime/pull/16010 (so you wouldn’t need --compile_no_warning_as_error if the fix works).
I will cherry-pick the change to the release branch when we do a patch release.
Done.
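The actual change in PR #16010 is not reproduced in this thread. Purely as a rough illustration (placeholder names, not the onnxruntime sources), a targeted fix of this kind typically either deletes the dead local or folds the computation into the code that reads it:

// Illustrative only; placeholder types, not the actual diff from PR #16010.
enum class MaskType { kNone, k1DKeySeqLenStart };

void dispatch(MaskType mask_type) {
  // Before: a flag computed unconditionally but never read in this build
  // configuration trips -Wunused-variable under warnings-as-errors:
  //   bool is_1d_start = mask_type == MaskType::k1DKeySeqLenStart;
  // After: fold the test into the code that actually consumes it.
  if (mask_type == MaskType::k1DKeySeqLenStart) {
    // ... handle the 1D start-position mask layout ...
  }
}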
@mc-nv, could you please help verify if the latest rel-1.15.0 branch is good?
I was able to build OnnxRuntime on the Jetson device using the main branch. Could you please bring the appropriate changes into the 1.15.0 release? Thank you.