onnxruntime: [urgent] Cross-build failed for iOS with reduced ops

Describe the bug When building onnxruntime with the following reduced_build_config, the build fails.

ai.onnx;1;GlobalAveragePool,Shape,Transpose
ai.onnx;5;Reshape
ai.onnx;6;InstanceNormalization,Relu,Sigmoid,Sqrt
ai.onnx;7;Add,And,Div,Mul,Sub
ai.onnx;8;Expand
ai.onnx;9;Cast,ConstantOfShape,Greater,Less,MatMul,NonZero,Where
ai.onnx;11;AveragePool,Concat,Conv,Equal,Gather,Gemm,Pad,Range,ReduceSum,Resize,Slice,Softmax,Squeeze,TopK,Unsqueeze
ai.onnx;12;ArgMax,Clip,MaxPool

Script to build:

python3 tools/ci_build/github/apple/build_ios_framework.py tools/ci_build/github/apple/default_mobile_ios_framework_build_settings.json --config MinSizeRel --include_ops_by_config ./my_model.basic.required_operators.config

Urgency The project deadline is this week, please help!

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 11.3.1
  • ONNX Runtime installed from (source or binary):
  • ONNX Runtime version:
  • Python version: 3.9.6
  • Visual Studio version (if applicable):
  • GCC/Compiler version (if compiling from source): Apple clang version 12.0.5 (clang-1205.0.22.11)
  • CUDA/cuDNN version:
  • GPU model and memory:

To Reproduce Run this script with the above config to build:

python3 tools/ci_build/github/apple/build_ios_framework.py tools/ci_build/github/apple/default_mobile_ios_framework_build_settings.json --config MinSizeRel --include_ops_by_config ./my_model.basic.required_operators.config

Expected behavior The build should succeed. Instead, it fails with:

** BUILD FAILED **


The following build commands failed:
        CompileC /Users/yaaan/projects/onnxruntime/build/iOS_framework/intermediates/iphoneos_arm64/MinSizeRel/onnxruntime.build/MinSizeRel-iphoneos/onnxruntime_providers.build/Objects-normal/arm64/cpu_execution_provider.o /Users/yaaan/projects/onnxruntime/onnxruntime/core/providers/cpu/cpu_execution_provider.cc normal arm64 c++ com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
Traceback (most recent call last):
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/build.py", line 2325, in <module>
    sys.exit(main())
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/build.py", line 2246, in main
    build_targets(args, cmake_path, build_dir, configs, num_parallel_jobs, args.target)
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/build.py", line 1175, in build_targets
    run_subprocess(cmd_args, env=env)
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/build.py", line 625, in run_subprocess
    return run(*args, cwd=cwd, capture_stdout=capture_stdout, shell=shell, env=my_env)
  File "/Users/yaaan/projects/onnxruntime/tools/python/util/run.py", line 42, in run
    completed_process = subprocess.run(
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/usr/local/bin/cmake', '--build', '/Users/yaaan/projects/onnxruntime/build/iOS_framework/intermediates/iphoneos_arm64/MinSizeRel', '--config', 'MinSizeRel', '--parallel', '16']' returned non-zero exit status 65.
Traceback (most recent call last):
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/github/apple/build_ios_framework.py", line 199, in <module>
    main()
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/github/apple/build_ios_framework.py", line 195, in main
    _build_package(args)
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/github/apple/build_ios_framework.py", line 120, in _build_package
    framework_dir = _build_for_ios_sysroot(
  File "/Users/yaaan/projects/onnxruntime/tools/ci_build/github/apple/build_ios_framework.py", line 61, in _build_for_ios_sysroot
    subprocess.run(build_command, shell=False, check=True, cwd=REPO_DIR)
  File "/usr/local/Cellar/python@3.9/3.9.6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/Users/yaaan/projects/detectron2/env/bin/python3', '/Users/yaaan/projects/onnxruntime/tools/ci_build/build.py', '--config=MinSizeRel', '--ios', '--parallel', '--use_xcode', '--build_apple_framework', '--minimal_build=extended', '--disable_rtti', '--disable_ml_ops', '--disable_exceptions', '--enable_reduced_operator_type_support', '--use_coreml', '--skip_tests', '--apple_deploy_target=11.0', '--include_ops_by_config=/Users/yaaan/projects/PanopticFCN/panoptic_fcn_rexnet2.0_integrated_pos_with_argmax_sim_ort.basic.required_operators.config', '--ios_sysroot=iphoneos', '--osx_arch=arm64', '--build_dir=/Users/yaaan/projects/onnxruntime/build/iOS_framework/intermediates/iphoneos_arm64']' returned non-zero exit status 1.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 22 (11 by maintainers)

Most upvoted comments

Did a test with the sample code and ORT 1.9 - verified that some logs do show up after updating the options: https://github.com/microsoft/onnxruntime-inference-examples/tree/main/mobile/examples/basic_usage/ios

diff --git a/mobile/examples/basic_usage/ios/OrtBasicUsage/SwiftOrtBasicUsage.swift b/mobile/examples/basic_usage/ios/OrtBasicUsage/SwiftOrtBasicUsage.swift
index aa4a508..ee10c63 100644
--- a/mobile/examples/basic_usage/ios/OrtBasicUsage/SwiftOrtBasicUsage.swift
+++ b/mobile/examples/basic_usage/ios/OrtBasicUsage/SwiftOrtBasicUsage.swift
@@ -38,7 +38,7 @@ func SwiftOrtAdd(_ a: Float, _ b: Float) throws -> Float {
   // First, we create the ORT environment.
   // The environment is required in order to create an ORT session.
   // ORTLoggingLevel.warning should show us only important messages.
-  let env = try ORTEnv(loggingLevel: ORTLoggingLevel.warning)
+  let env = try ORTEnv(loggingLevel: ORTLoggingLevel.verbose)

   // Next, we will create some ORT values for our input tensors. We have two
   // floats, `a` and `b`.
@@ -62,11 +62,17 @@ func SwiftOrtAdd(_ a: Float, _ b: Float) throws -> Float {
     elementType: valueDataType,
     shape: valueShape)

+  let sessionOptions = try ORTSessionOptions()
+  try sessionOptions.setLogSeverityLevel(ORTLoggingLevel.verbose)
+  let coremlEpOptions = ORTCoreMLExecutionProviderOptions()
+  coremlEpOptions.useCPUOnly = true
+  try sessionOptions.appendCoreMLExecutionProvider(with: coremlEpOptions)
+
   // Now, we will create an ORT session to run our model.
   // One can configure session options with a session options object
   // (ORTSessionOptions).
   // We use the default options with sessionOptions: nil.
-  let session = try ORTSession(env: env, modelPath: modelPath, sessionOptions: nil)
+  let session = try ORTSession(env: env, modelPath: modelPath, sessionOptions: sessionOptions)

   // A session provides methods to get the model input and output names.
   // We will confirm that they are what we expect.

Something like this:

2021-09-29 15:50:44.705806-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.705785 [I:onnxruntime:, inference_session.cc:304 ConstructorCommon] Creating and using per session threadpools since use_per_session_threads_ is true
2021-09-29 15:50:44.706191-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.706179 [I:onnxruntime:, bfc_arena.cc:26 BFCArena] Creating BFCArena for CoreML with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-09-29 15:50:44.706449-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.706437 [V:onnxruntime:, bfc_arena.cc:62 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-09-29 15:50:44.706547-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.706537 [I:onnxruntime:, bfc_arena.cc:26 BFCArena] Creating BFCArena for CoreML with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-09-29 15:50:44.706960-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.706947 [V:onnxruntime:, bfc_arena.cc:62 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-09-29 15:50:44.707077-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.707065 [I:onnxruntime:, inference_session.cc:1242 Initialize] Initializing session.
2021-09-29 15:50:44.707174-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.707163 [I:onnxruntime:, inference_session.cc:1279 Initialize] Adding default CPU execution provider.
2021-09-29 15:50:44.707274-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.707263 [I:onnxruntime:, bfc_arena.cc:26 BFCArena] Creating BFCArena for Cpu with following configs: initial_chunk_size_bytes: 1048576 max_dead_bytes_per_chunk: 134217728 initial_growth_chunk_size_bytes: 2097152 memory limit: 18446744073709551615 arena_extend_strategy: 0
2021-09-29 15:50:44.707400-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.707383 [V:onnxruntime:, bfc_arena.cc:62 BFCArena] Creating 21 bins of max chunk size 256 to 268435456
2021-09-29 15:50:44.707997-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.707981 [V:onnxruntime:, helper.cc:116 HasNeuralEngine] Current Apple hardware info: x86_64
2021-09-29 15:50:44.708107-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.708097 [V:onnxruntime:, helper.cc:97 GetSupportedNodes] Operator type: [Add] index: [0] name: [Add] supported: [1]
2021-09-29 15:50:44.708227-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.708215 [I:onnxruntime:, coreml_execution_provider.cc:93 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 1 number of nodes in the graph: 1 number of nodes supported by CoreML: 1
2021-09-29 15:50:44.708449-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.708439 [V:onnxruntime:, base_op_builder.cc:50 AddToModelBuilder] Operator name: [Add] type: [Add] was added
2021-09-29 15:50:44.735041-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.735030 [V:onnxruntime:, session_state.cc:68 CreateGraphInfo] SaveMLValueNameIndexMapping
2021-09-29 15:50:44.735137-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.735131 [V:onnxruntime:, session_state.cc:114 CreateGraphInfo] Done saving OrtValue mappings.
2021-09-29 15:50:44.735238-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.735219 [I:onnxruntime:, session_state_utils.cc:140 SaveInitializedTensors] Saving initialized tensors.
2021-09-29 15:50:44.735343-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.735333 [I:onnxruntime:, session_state_utils.cc:266 SaveInitializedTensors] Done saving initialized tensors
2021-09-29 15:50:44.735438-0700 OrtBasicUsage[73988:3103471] 2021-09-29 14:50:44.735430 [I:onnxruntime:, inference_session.cc:1447 Initialize] Session successfully initialized.

I don’t see anything incorrect with the code you shared. Perhaps double check what is being built?

Or, if you can reproduce this with a simple test model and program, it may be easier to debug.
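
For reference, a minimal repro could look roughly like the sketch below. This is only a sketch: it assumes the ORT Objective-C API is exposed to Swift the same way as in the sample app, and test_model.ort is a hypothetical small model bundled with the app, not a file from this issue.

import Foundation
// The ORT Objective-C API (ORTEnv, ORTSession, ...) is assumed to be visible to Swift,
// e.g. via the sample project's bridging header.

// Minimal repro sketch: load a tiny model through the custom-built framework with verbose
// logging so the CoreML partitioning messages are visible.
func runMinimalRepro() throws {
  // Hypothetical bundled test model; replace with whatever small model reproduces the issue.
  guard let modelPath = Bundle.main.path(forResource: "test_model", ofType: "ort") else {
    fatalError("test model not found in app bundle")
  }

  // Verbose logging surfaces the CoreML EP's GetCapability / partitioning output.
  let env = try ORTEnv(loggingLevel: ORTLoggingLevel.verbose)

  let sessionOptions = try ORTSessionOptions()
  try sessionOptions.setLogSeverityLevel(ORTLoggingLevel.verbose)
  let coremlEpOptions = ORTCoreMLExecutionProviderOptions()
  try sessionOptions.appendCoreMLExecutionProvider(with: coremlEpOptions)

  // If this succeeds and the logs show the expected partitions, the custom framework built
  // with the reduced op config is wired up correctly; if it throws, the failure is much
  // smaller and easier to share than one involving the full model.
  let session = try ORTSession(env: env, modelPath: modelPath, sessionOptions: sessionOptions)
  _ = session
}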

Based on your reduced_build_config, some operators are not yet supported by the CoreML execution provider. This causes graph partitioning between the ORT CPU execution provider and the CoreML execution provider, which slows things down and offsets the perf gain from hardware acceleration. For this model, using the ORT CPU execution provider alone (without CoreML) will probably give better performance.
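
As a rough illustration, comparing the two setups only requires toggling whether the CoreML EP is appended to the session options; everything else stays the same. The sketch below reuses the API from the diff above, and makeSession is a hypothetical helper, not part of ORT:

// Sketch: build either a CoreML-partitioned session or a CPU-only session for the same model.
func makeSession(env: ORTEnv, modelPath: String, useCoreML: Bool) throws -> ORTSession {
  let options = try ORTSessionOptions()
  if useCoreML {
    // CoreML EP: nodes it can't handle fall back to the CPU EP, creating extra partitions.
    let coremlEpOptions = ORTCoreMLExecutionProviderOptions()
    try options.appendCoreMLExecutionProvider(with: coremlEpOptions)
  }
  // With useCoreML == false nothing is appended, so the default CPU EP runs the whole graph.
  return try ORTSession(env: env, modelPath: modelPath, sessionOptions: options)
}

Timing the same inputs against both sessions should show whether the partitioning overhead outweighs the CoreML acceleration for this model.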

The Objective-C API is packaged as a CocoaPods pod (onnxruntime-mobile-objc). In that configuration, it’s not compiled into ORT but rather depends on the ORT library and C/C++ API (another pod, onnxruntime-mobile-c).

The simplest way to test out the Objective-C API with your custom build would be to start with a release version (e.g. 1.9) and download the release pods, then drop in your updated onnxruntime.xcframework.

You could also build the pods yourself. This script may be useful: https://github.com/microsoft/onnxruntime/blob/c30cc9190a956b9d3a24639681699b8404ad36e7/tools/ci_build/github/apple/objectivec/assemble_objc_pod_package.py

And here is the CI build that creates the packages: https://github.com/microsoft/onnxruntime/blob/c30cc9190a956b9d3a24639681699b8404ad36e7/tools/ci_build/github/azure-pipelines/mac-ios-packaging-pipeline.yml#L58-L105

It currently uploads the package zip archive and points to that from the podspecs, but you could change that to a local path or something else.