temporal-shift-module: Error when running online demo main.py

I’m getting this error when I try to execute python3 main.py inside the ~/temporal-shift-module/online_demo folder:

Open camera...
<VideoCapture 0x7f2f0b2370>
Build transformer...
/usr/local/lib/python3.6/dist-packages/torchvision-0.5.0a0+85b8fbf-py3.6-linux-aarch64.egg/torchvision/transforms/transforms.py:220: UserWarning: The use of the transforms.Scale transform is deprecated, please use transforms.Resize instead.
  "please use transforms.Resize instead.")
Build Executor...
/home/bm/temporal-shift-module/online_demo/mobilenet_v2_tsm.py:95: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  x1, x2 = x[:, : c // 8], x[:, c // 8:]
Traceback (most recent call last):

  File "main.py", line 386, in <module>
    main()

  File "main.py", line 282, in main
    executor, ctx = get_executor()

  File "main.py", line 96, in get_executor
    return torch2executor(torch_module, torch_inputs, target)

  File "main.py", line 52, in torch2executor
    graph, tvm_module, params = torch2tvm_module(torch_module, torch_inputs, target)

  File "main.py", line 31, in torch2tvm_module
    torch.onnx.export(torch_module, torch_inputs, buffer, input_names=input_names, output_names=["o" + str(i) for i in range(len(torch_inputs))])

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/__init__.py", line 148, in export
    strip_doc_string, dynamic_axes, keep_initializers_as_inputs)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/utils.py", line 66, in export
    dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/utils.py", line 416, in _export
    fixed_batch_size=fixed_batch_size)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/utils.py", line 296, in _model_to_graph
    fixed_batch_size=fixed_batch_size, params_dict=params_dict)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/utils.py", line 135, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/__init__.py", line 179, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/utils.py", line 657, in _run_symbolic_function
    return op_fn(g, *inputs, **attrs)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/symbolic_helper.py", line 129, in wrapper
    return fn(g, *args)

  File "/home/bm/.local/lib/python3.6/site-packages/torch/onnx/symbolic_opset9.py", line 1311, in slice
    raise RuntimeError('Unsupported: ONNX export of Slice with dynamic inputs. DynamicSlice '

RuntimeError: Unsupported: ONNX export of Slice with dynamic inputs. DynamicSlice is a deprecated experimental op. Please use statically allocated variables or export to a higher opset version.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 29

Most upvoted comments

I have solved this problem. The solution is to export the torch model with opset 10 instead of the default opset 9, since TVM does not really support some of the operators in opset 9. Additionally, the tool “onnx-simplifier” is a great help.
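For example, here is a minimal sketch of that change applied to the export call in main.py’s torch2tvm_module (line 31 of the traceback). Only the opset_version keyword is new; the input/output naming is an assumption meant to mirror the original call:

import io
import torch

def export_onnx(torch_module, torch_inputs):
    # Serialize the traced model to an in-memory buffer, as main.py does.
    buffer = io.BytesIO()
    input_names = ["i" + str(i) for i in range(len(torch_inputs))]   # assumed naming
    output_names = ["o" + str(i) for i in range(len(torch_inputs))]
    torch.onnx.export(
        torch_module, torch_inputs, buffer,
        input_names=input_names,
        output_names=output_names,
        opset_version=10,  # default is 9; opset 10 can express Slice with dynamic inputs
    )
    buffer.seek(0)
    return buffer

With opset 10 the Slice operator takes its starts/ends as tensor inputs rather than attributes, which is why the “export to a higher opset version” hint in the error message resolves the DynamicSlice failure.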

Hi @poincarelee, how did you install onnx-simplifier? I get an error with the onnxruntime dependency:

pip3 install 'onnx-simplifier==0.2.9'

Collecting onnx-simplifier==0.2.9
  Using cached https://files.pythonhosted.org/packages/19/f1/4b188f4dacc45b69e48605f5232a9f22616656fd189dcc04369b516d818b/onnx-simplifier-0.2.9.tar.gz
Collecting onnx (from onnx-simplifier==0.2.9)
Collecting onnxruntime>=1.2.0 (from onnx-simplifier==0.2.9)
  Could not find a version that satisfies the requirement onnxruntime>=1.2.0 (from onnx-simplifier==0.2.9) (from versions: )
No matching distribution found for onnxruntime>=1.2.0 (from onnx-simplifier==0.2.9)

Did you compile onnxruntime from source?

edit:

I did have to compile onnxruntime from source; I used branch 1.4 and installed it from the built wheel file. Additionally, I compiled it with CUDA support and then had to change onnx-simplifier to depend on onnxruntime-gpu instead. After that I managed to install onnx-simplifier.

After that I applied the changes suggested above, and it finally worked.
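For reference, a hedged sketch of running the exported model through onnx-simplifier before handing it to TVM; the file name is hypothetical and the onnxsim API shown is the one from the 0.2.x releases, not code from this repository:

import onnx
from onnxsim import simplify  # onnx-simplifier, 0.2.x API (assumed)

# "mobilenet_v2_tsm.onnx" is a hypothetical name for the exported model.
model = onnx.load("mobilenet_v2_tsm.onnx")
simplified_model, check_ok = simplify(model)
assert check_ok, "onnx-simplifier could not verify the simplified model"
onnx.save(simplified_model, "mobilenet_v2_tsm_simplified.onnx")

The simplified model folds constants and removes redundant nodes, which tends to make the TVM import (relay.frontend.from_onnx) more robust.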