onnxruntime: Fatal error: _DCNv2 is not a registered function/op

Describe the bug I have an ONNX model [with a custom operator added], created by following https://github.com/onnx/onnx/issues/3544, and even the following code is working:

import onnx
onnx_model = onnx.load('/home/uib43225/DEFT/src/models/model_mot.onnx')
onnx.checker.check_model(onnx_model)
print(onnx.helper.printable_graph(onnx_model.graph))

But now, when I try to run inference with onnxruntime, I get the following error:

  File "onnx_run_time.py", line 12, in <module>
    rt_s = rt.InferenceSession("/home/uidq6830/PycharmProjects/DEFT_inference/src/models/model_mot.onnx")
  File "/home/uidq6830/deftortpip/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 283, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/uidq6830/deftortpip/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 310, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from /home/uidq6830/PycharmProjects/DEFT_inference/src/models/model_mot.onnx failed:Fatal error: _DCNv2 is not a registered function/op

Urgency Please help me out with this. I am looking for strong support, as I have a very strict deadline to fix it.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 18.04 LTS
  • ONNX Runtime installed from (source or binary): 1.8.1
  • ONNX Runtime version: 1.10.1
  • Python version: 3.8.8
  • Visual Studio version (if applicable):
  • GCC/Compiler version (if compiling from source): 7.3.0
  • CUDA/cuDNN version: 10.2
  • GPU model and memory: Titan XP; 12192MiB

To Reproduce I have installed onnxruntime using pip ("pip install onnxruntime"), and I will have to add a new custom operator to fix it.

NOTE: I have already posted this issue at https://github.com/microsoft/onnxruntime/issues/8436 but have not received any response.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 44 (17 by maintainers)

Most upvoted comments

@BowenBao, @wenbingl, @ytaous, @edgchen1: Thank you so much to all; much appreciated. Yes, I have solved the issue: I am able to convert the model successfully from torch to ONNX, and I was also able to add the operator to onnxruntime for inference.

The solution may help others:

  1. Load the PyTorch-compatible DCNv2 from GitHub: https://github.com/lbin/DCNv2
  2. Follow the instructions to add an operator to PyTorch so the model can be converted to ONNX.
  3. Follow the same instructions to add the operator to onnxruntime for inference.
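As a rough illustration of step 2, the PyTorch side wraps the op in an `autograd.Function` whose `symbolic` method emits a custom-domain node during export. Everything below is a placeholder sketch: `DCNv2Export` and the identity `forward()` are made-up names standing in for the real compiled DCNv2 kernel from the repo above.

```python
# Hypothetical sketch of step 2: an autograd.Function with a `symbolic`
# method so torch.onnx.export emits a node in a custom domain.
# DCNv2Export and the identity forward() are placeholders; the real op
# calls the compiled DCNv2 kernel from the repository linked above.
import torch
from torch.autograd import Function

class DCNv2Export(Function):
    @staticmethod
    def forward(ctx, x):
        return x  # placeholder for the actual deformable-conv computation

    @staticmethod
    def symbolic(g, x):
        # The domain::op name here must match what is registered on the
        # onnxruntime side in step 3.
        return g.op("ai.onnx.contrib::_DCNv2", x)

y = DCNv2Export.apply(torch.ones(2, 3))
print(y.shape)  # torch.Size([2, 3])
```

The key design point is that `forward` runs in eager mode while `symbolic` only runs during `torch.onnx.export`, so the exported graph contains the custom node regardless of what `forward` computes.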

To everyone else: please ask for help if you need to fix this. I struggled a lot, so I am here to help you too. Thank you.

Hey @prabhuiitdhn, I see you have put so much effort into the DCN_v2 export. I am also having a hard time exporting this.

I am using the following code for the DCN_V2 class: https://github.com/jinfagang/DCNv2_latest/blob/master/dcn_v2_onnx.py

where I have changed the symbolic function like this:

import json
from torch.autograd import Function

class _DCNv2(Function):

    @staticmethod
    def symbolic(g, input, offset_mask, weight, bias, stride, padding, dilation, deformable_groups):
        return g.op("ai.onnx.contrib::_DCNv2", input, offset_mask, weight, bias, name_s="DCNv2", info_s=json.dumps({
            "dilation": dilation,
            "padding": padding,
            "stride": stride,
            "deformable_groups": deformable_groups
        }))
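The `info_s` attribute above packs the convolution hyperparameters into a JSON string, which the runtime-side kernel has to decode again. A minimal round-trip sketch (`pack_info`/`unpack_info` are illustrative helper names, not part of any library):

```python
# Sketch of how the hyperparameters packed into the `info` string
# attribute by the symbolic function above can be recovered inside the
# custom-op kernel. Pure-Python round trip; helper names are made up.
import json

def pack_info(stride, padding, dilation, deformable_groups):
    # What symbolic() stores in the node's `info` attribute.
    return json.dumps({
        "dilation": dilation,
        "padding": padding,
        "stride": stride,
        "deformable_groups": deformable_groups,
    })

def unpack_info(info_s):
    # What the runtime kernel would do before computing the deformable conv.
    cfg = json.loads(info_s)
    return (cfg["stride"], cfg["padding"], cfg["dilation"],
            cfg["deformable_groups"])

print(unpack_info(pack_info(1, 1, 1, 1)))  # (1, 1, 1, 1)
```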

To export, I am using:

torch.onnx.export(net, torch.randn(1, 3, W, H), "custom_model.onnx", opset_version=13,
                  custom_opsets={"ai.onnx.contrib": 13})

I could export this, but when I try to run it with onnxruntime I face the same error: _DCNv2 is not a registered function/op

I am using this piece of code:

import onnxruntime as _ort
from onnxruntime_extensions import (
    onnx_op, PyCustomOpDef,
    get_library_path as _get_library_path)
# from onnxruntime.tools import pytorch_export_contrib_ops
# pytorch_export_contrib_ops.register()


@onnx_op(op_type='_DCNv2', domain='ai.onnx.contrib',
         inputs=[PyCustomOpDef.dt_float, PyCustomOpDef.dt_float, PyCustomOpDef.dt_float, PyCustomOpDef.dt_float,
                 PyCustomOpDef.dt_float], outputs=[PyCustomOpDef.dt_float])
def _DCNv2(x, y, z, p, q):
    return q

so = _ort.SessionOptions()
so.register_custom_ops_library(_get_library_path())

sess = _ort.InferenceSession("model.onnx", None)

Can you help me resolve this?

I did figure it out. I needed to pass the session options while creating the ORT session, i.e. change sess = _ort.InferenceSession("model.onnx", None) to sess = _ort.InferenceSession("model.onnx", so).

@prabhuiitdhn Hi, could you share the def _DCNv2() code and inference script if possible? Thanks!