openvino: [Bug] Operator support Col2Im

System information (version)
  • OpenVINO => 2022.2.0
  • Operating System / Platform => Ubuntu 20.04
  • Compiler => gcc 9.4.0
  • Problem classification => Model Conversion
  • Framework: PyTorch
  • Model name: torch.nn.Fold
Detailed description

Col2Im is now supported in ONNX opset version 18 (Add Col2Im operator), but it is not yet supported by the OpenVINO inference engine.

Looking into how to add support for a new operator in OpenVINO, I found that how_to_add_op.md is not finished yet.

It seems that a new ONNX operator should be implemented under src/frontends/onnx/frontend/src/op/. I see that most of the operators defined there use a combination of default_opset::XX / ngraph::opsetN::XX ops, but Col2Im involves operations such as moving elements from one position in the input matrix to another position in the output matrix, which is hard to express that way.
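For context, a typical translator under that directory looks roughly like the sketch below (paraphrased from the existing Relu translator; the exact internal headers, namespaces, and helper names may differ between OpenVINO versions). It shows why a simple composition of default_opset ops does not map naturally onto Col2Im's scatter-style element movement.

// Rough sketch of the pattern used by existing translators in
// src/frontends/onnx/frontend/src/op/ (paraphrased from the Relu translator;
// exact internal headers and namespaces may differ between releases).
#include "default_opset.hpp"
#include "onnx_import/core/node.hpp"

namespace ngraph {
namespace onnx_import {
namespace op {
namespace set_1 {

OutputVector relu(const Node& node) {
    // A one-op translator: wrap the single ONNX input in an existing opset op.
    OutputVector inputs{node.get_ng_inputs()};
    return {std::make_shared<default_opset::Relu>(inputs.at(0))};
}

}  // namespace set_1
}  // namespace op
}  // namespace onnx_import
}  // namespace ngraph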

Do you have plans to support Col2Im in the near future?

Steps to reproduce
>>> from openvino.inference_engine import IECore
>>> ie = IECore()
>>> net_onnx = ie.read_network(model="/opt/mnt/onnx_model/model_contains_col2im_ops.onnx")
[WARN] 2022-10-17T09:27:51z frontends/onnx/frontend/src/ops_bridge.cpp 237      Currently ONNX operator set version: 18 is unsupported. Falling back to: 16
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "ie_api.pyx", line 367, in openvino.inference_engine.ie_api.IECore.read_network
  File "ie_api.pyx", line 410, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'unknown_operators.empty()' failed at frontends/onnx/frontend/src/core/graph.cpp:209:
OpenVINO does not support the following ONNX operations: Col2Im

About this issue

  • Original URL
  • State: closed
  • Created 2 years ago
  • Comments: 20 (9 by maintainers)

Most upvoted comments

Hi @MasterHM-ml, please add linkage with openvino::frontend::onnx in your CMakeLists.txt:

target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx)

BTW, user extensions are not supposed to use NGRAPH_TYPE_CASE, which is OpenVINO-internal. If you replace its usage with a direct C++ switch/case, you don't have to deal with custom header files and include paths. All you need is to link with openvino::frontend::onnx.
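For illustration, a direct switch/case over the element type might look roughly like the sketch below; col2im_kernel is a hypothetical user-written kernel, not an OpenVINO API, and only two element types are handled here.

// Sketch of type dispatch with a plain C++ switch/case instead of the
// internal NGRAPH_TYPE_CASE macro. col2im_kernel is a hypothetical
// user-defined templated kernel, assumed to be implemented elsewhere.
#include <openvino/runtime/tensor.hpp>
#include <cstdint>

template <typename T>
void col2im_kernel(const T* src, T* dst);  // hypothetical kernel, defined by the extension author

bool dispatch_col2im(const ov::Tensor& in, ov::Tensor& out) {
    switch (in.get_element_type()) {
    case ov::element::Type_t::f32:
        col2im_kernel<float>(in.data<float>(), out.data<float>());
        return true;
    case ov::element::Type_t::i64:
        col2im_kernel<int64_t>(in.data<int64_t>(), out.data<int64_t>());
        return true;
    default:
        return false;  // element type not handled by this sketch
    }
}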

@sgolebiewski-intel can we add a note in this section https://docs.openvino.ai/2023.0/openvino_docs_Extensibility_UG_Frontend_Extensions.html#mapping-custom-operations-to-frontends-with-openvino-framework-map-macro that if users map an operation to a framework with the OPENVINO_FRAMEWORK_MAP macro, they have to link the appropriate frontend in CMake? For the example in the docs, users would have to add:

target_link_libraries(${TARGET_NAME} PRIVATE openvino::frontend::onnx openvino::frontend::tensorflow openvino::frontend::paddle)

@wrchen-voxel, as @andrei-kochin mentioned, it is better to start by adding the implementation as an operation extension, without digging into the OpenVINO source tree. It will help you focus on the operation code itself, and you can test it in "user space", so to speak, with real models.

Let me add more details to help you.

For col2im it looks like you have to add a C++ implementation for the operation, because from a brief look at the implementation in PyTorch (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/im2col.h#L88) I cannot say how to easily decompose it into existing OpenVINO ops. If you know a decomposition, it will simplify the enabling, as described in https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_Frontend_Extensions.html#mapping-to-multiple-operations-with-conversionextension. A similar approach is available in Python as well.
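To make the decomposition path concrete, here is a hedged sketch (not the exact example from the linked guide) of registering a ConversionExtension that replaces a custom ONNX operator with existing OpenVINO ops. The operator name "CustomAdd" and the trivial decomposition body are placeholders for illustration only.

// Hedged sketch: mapping a custom ONNX op to a subgraph of existing
// OpenVINO ops with ConversionExtension (only applicable when a
// decomposition exists, which is exactly what is unclear for Col2Im).
#include <memory>
#include <openvino/runtime/core.hpp>
#include <openvino/frontend/extension/conversion.hpp>
#include <openvino/opsets/opset9.hpp>

int main() {
    ov::Core core;
    core.add_extension(ov::frontend::ConversionExtension(
        "CustomAdd",  // ONNX operator type to intercept (placeholder name)
        [](const ov::frontend::NodeContext& node) -> ov::OutputVector {
            // Build the replacement subgraph from existing OpenVINO ops.
            auto a = node.get_input(0);
            auto b = node.get_input(1);
            auto add = std::make_shared<ov::opset9::Add>(a, b);
            return {add};
        }));
    // After registration, core.read_model(...) can resolve "CustomAdd".
    return 0;
}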

If we don't know how to decompose it into existing ops, then follow the guide https://docs.openvino.ai/latest/openvino_docs_Extensibility_UG_add_openvino_ops.html#doxid-openvino-docs-extensibility-u-g-add-openvino-ops to implement a new OpenVINO op class. You have to implement it in C++, and I think the PyTorch code referenced above can be adapted to implement your new op's evaluate method, which will be called from within the CPU plugin during model inference. You can also borrow fragments from real ops here: https://github.com/openvinotoolkit/openvino/tree/master/src/core/include/openvino/op.
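As a starting point, a skeleton of such an op class might look like the sketch below, loosely following the custom OpenVINO operations guide. The shape inference and evaluate bodies are placeholders that would need to be filled in, for example by porting the PyTorch col2im kernel.

// Minimal sketch of a custom OpenVINO op class; not a complete Col2Im
// implementation. Input layout (data, image_shape, block_shape) mirrors
// the ONNX Col2Im operator.
#include <memory>
#include <openvino/op/op.hpp>

class Col2Im : public ov::op::Op {
public:
    OPENVINO_OP("Col2Im");

    Col2Im() = default;
    Col2Im(const ov::Output<ov::Node>& data,
           const ov::Output<ov::Node>& image_shape,
           const ov::Output<ov::Node>& block_shape)
        : Op({data, image_shape, block_shape}) {
        constructor_validate_and_infer_types();
    }

    void validate_and_infer_types() override {
        // Real shape inference would compute the output image shape here;
        // a dynamic shape is used as a placeholder in this sketch.
        set_output_type(0, get_input_element_type(0), ov::PartialShape::dynamic());
    }

    bool visit_attributes(ov::AttributeVisitor& visitor) override {
        return true;  // no attributes in this sketch
    }

    std::shared_ptr<ov::Node> clone_with_new_inputs(const ov::OutputVector& new_args) const override {
        return std::make_shared<Col2Im>(new_args.at(0), new_args.at(1), new_args.at(2));
    }

    bool evaluate(ov::TensorVector& outputs, const ov::TensorVector& inputs) const override {
        // Port the col2im scatter loop from the PyTorch kernel here.
        return false;  // placeholder: not implemented in this sketch
    }

    bool has_evaluate() const override { return true; }
};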

When implementing the class, enable direct mapping from ONNX by adding two lines of code within your class scope, like in this test: https://github.com/openvinotoolkit/openvino/blob/master/src/frontends/onnx/tests/op_extension.cpp#L21-L22 (see the sketch below). That automatically enables conversion from the ONNX operator to this new OV op. Please give the OV op the same name as the ONNX operator to unlock this functionality. Then, after adding the extension class to the core according to the guide, you can try to load a model containing this ONNX op via read_model. If you don't mind, please use a C++ environment while the new op is being developed; it will save you the additional steps of preparing a library with the op and loading it in Python. Later you can do those additional steps and continue using Python.
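For example, the two class-scope lines could look roughly like this, building on the class sketch above. The header providing OPENVINO_FRAMEWORK_MAP for ONNX and the macro's argument forms are assumptions here; the linked op_extension.cpp test is the authoritative reference, and the ONNX frontend must be linked in CMake as noted earlier in this thread.

// Hedged fragment: the two class-scope lines that enable direct ONNX mapping.
#include <openvino/op/op.hpp>
#include <openvino/frontend/onnx/extension/op.hpp>  // assumed header for OPENVINO_FRAMEWORK_MAP(onnx)

class Col2Im : public ov::op::Op {
public:
    OPENVINO_OP("Col2Im");         // keep the OV op name identical to the ONNX operator name
    OPENVINO_FRAMEWORK_MAP(onnx);  // assumed simplest form: map the ONNX operator of the same name
    // ... constructors, validate_and_infer_types, clone_with_new_inputs, evaluate as sketched above
};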

If everything is fine with enabling it as an extension op, then we can continue talking about contributing the op directly to OV.

Got it! Here is the extension I implemented: ud.tar.gz. I have tried the single-batch, zero-padding case, and the model speedup does not seem very noticeable…