onnx-tensorrt: Stand-alone pad operation fails with: Assertion failed: inputs.at(1).is_weights()

Versions

  • OS: Ubuntu 18.04
  • torch: 1.4.0
  • onnx: 1.6.0
  • tensorrt: 7.0.0
  • cuda: 10.0
  • python: 2.7

Issue

My current workflow is PyTorch -> ONNX -> TensorRT, and I encounter an issue with the nn.ConstantPad2d operation that results in the following error:

While parsing node number 23 [Pad -> "24"]:
--- Begin node ---
input: "input"
input: "22"
input: "23"
output: "24"
op_type: "Pad"
attribute {
  name: "mode"
  s: "constant"
  type: STRING
}

--- End node ---
ERROR: /mypath/onnx-tensorrt/builtin_op_importers.cpp:2106 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights()
[03/12/2020-09:06:29] [E] Failed to parse onnx file
[03/12/2020-09:06:29] [E] Parsing model failed
[03/12/2020-09:06:29] [E] Engine creation failed
[03/12/2020-09:06:29] [E] Engine set up failed

Minimal reproducing example

github_repro_example.py
-----------
import onnx
import argparse
import torch
import torch.nn as nn

class MinimalModel(nn.Module):
    def __init__(self):
        super(MinimalModel, self).__init__()
        self.constant_zero_pad = nn.ConstantPad2d((1, 0, 0, 0), 0)

    def forward(self, input_tensor):
        return self.constant_zero_pad(input_tensor)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Minimal ConstantPad2d repro')
    parser.add_argument('output_onnx')
    args = parser.parse_args()

    minimal_model = MinimalModel()
    minimal_model = nn.DataParallel(minimal_model)
    minimal_model.cuda()

    # Random deep feature
    input_tensor = torch.rand((1, 32, 128, 128))
    # Check model can do a forward pass
    minimal_model(input_tensor)
    # Export to onnx
    torch.onnx.export(
        minimal_model.module,
        (input_tensor,),
        args.output_onnx,
        export_params=True, verbose=True, training=False, opset_version=11
    )

    original_model = onnx.load(args.output_onnx)
    onnx.checker.check_model(original_model)

Run with:

python2 github_repro_example.py ./test.onnx

Run it through TensorRT:

trtexec --explicitBatch --onnx=./test.onnx --verbose

Which will result in the above error.
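
A quick way to see what the parser is complaining about is to inspect the exported graph and check whether the Pad node's extra inputs are constants (initializers or Constant-node outputs) or the result of upstream computation. A minimal sketch, assuming the test.onnx exported above:

import onnx

model = onnx.load('./test.onnx')
# Names the parser can treat as weights: initializers and Constant outputs
weight_like = {init.name for init in model.graph.initializer}
weight_like |= {node.output[0] for node in model.graph.node if node.op_type == 'Constant'}
for node in model.graph.node:
    if node.op_type == 'Pad':
        for name in node.input[1:]:  # pads, and optionally constant_value
            status = 'constant' if name in weight_like else 'computed at runtime'
            print('Pad input %r is %s' % (name, status))

In the failing export, the pads input is produced by a chain of nodes rather than a single constant, which is why inputs.at(1).is_weights() fails.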

Related issues

https://github.com/onnx/onnx-tensorrt/issues/378

About this issue

  • State: closed
  • Created 4 years ago
  • Reactions: 5
  • Comments: 16

Most upvoted comments

I tried all the tricks here but still get this error message. Very simple code (modified from an ONNX test) to reproduce:

import onnx
from onnx import TensorProto, helper

node = helper.make_node('Pad', inputs=['x', 'pads'], outputs=['y'], mode='constant')

x = helper.make_tensor_value_info('x', TensorProto.FLOAT, [1, 3, 4, 5])
pads = helper.make_tensor_value_info('pads', TensorProto.INT64, [8])
y = helper.make_tensor_value_info('y', TensorProto.FLOAT, [1, 3, 7, 12])
# pads is a graph input here, i.e. only known at runtime
graph_def = helper.make_graph([node], "pad-model", [x, pads], [y])

model_def = helper.make_model(graph_def, producer_name='pad-model')
model_def.opset_import[0].version = 11

onnx.save(model_def, 'pad_model.onnx')

Check that the model runs well

import numpy as np
import onnxruntime as rt
# pad_impl is the reference Pad implementation from the ONNX test suite
# (onnx/backend/test/case/node/pad.py); it is only a sanity reference here
from onnx.backend.test.case.node.pad import pad_impl

sess = rt.InferenceSession('pad_model.onnx')
np_x = np.random.randn(1, 3, 4, 5).astype(np.float32)
np_pads = np.array([0, 0, 1, 3, 0, 0, 2, 4]).astype(np.int64)
np_y = pad_impl(np_x, np_pads, 'constant', 1.2)
y = sess.run(['y'], {'x': np_x, 'pads': np_pads})

Running the model through trtexec then fails with builtin_op_importers.cpp:2220 In function importPad: [8] Assertion failed: inputs.at(1).is_weights():

trtexec --onnx=./onnx_models/pad_model.onnx --saveEngine=nms.trt
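
For what it's worth, the assertion fires because importPad requires the pads input to be constant at build time (is_weights()). A minimal sketch of the same model with pads stored as an initializer instead of a graph input, which sidesteps the runtime-input problem (pad_model_const.onnx is just an illustrative name):

import onnx
from onnx import TensorProto, helper

node = helper.make_node('Pad', inputs=['x', 'pads'], outputs=['y'], mode='constant')

x = helper.make_tensor_value_info('x', TensorProto.FLOAT, [1, 3, 4, 5])
y = helper.make_tensor_value_info('y', TensorProto.FLOAT, [1, 3, 7, 12])
# pads becomes constant weights via an initializer, not a runtime input
pads_init = helper.make_tensor('pads', TensorProto.INT64, [8], [0, 0, 1, 3, 0, 0, 2, 4])

graph_def = helper.make_graph([node], 'pad-model', [x], [y], initializer=[pads_init])
model_def = helper.make_model(graph_def, producer_name='pad-model')
model_def.opset_import[0].version = 11
onnx.save(model_def, 'pad_model_const.onnx')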

Oh, this is a duplicate of https://github.com/NVIDIA/TensorRT/issues/439#issuecomment-604252223, where I already tried playing around with folding the constants. It seems like there’s an issue with the second input to the Pad op, which should get constant-folded, but that isn’t happening in the torch export. I believe onnx-simplifier will do that for you, but I’m not sure whether it will work on the real/larger model.
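
A minimal sketch of that constant-folding workaround with onnx-simplifier (assuming pip install onnx-simplifier, applied to the test.onnx exported by the repro script):

import onnx
from onnxsim import simplify

model = onnx.load('test.onnx')
# onnx-simplifier constant-folds the subgraph that produces the pads input
model_simp, check = simplify(model)
assert check, 'simplified model failed the correctness check'
onnx.save(model_simp, 'test_simplified.onnx')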

Note the difference in the graphs here: https://github.com/pytorch/pytorch/issues/35516. You can try pinging that bug report to get some attention on it.

Please see the discussion in the other issue, and close this issue as a duplicate.

@cvhuang I also hit the same error, inputs.at(1).is_weights(), and fixed it by setting --opset 9; you are right.

Could you please share your code snippet?

import torch
import torchvision
import onnxsim

model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=False)
model.eval()
x = [torch.rand(3, 300, 300)]
torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version=9)
model_simp, check = onnxsim.simplify('mask_rcnn.onnx')

opset 9 does not work for me:

Traceback (most recent call last):
  File "retinaexport.py", line 9, in <module>
    torch.onnx.export(model, x, "mask_rcnn.onnx", opset_version = 9)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 230, in export
    custom_opsets, enable_onnx_checker, use_external_data_format)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 91, in export
    use_external_data_format=use_external_data_format)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 639, in _export
    dynamic_axes=dynamic_axes)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 421, in _model_to_graph
    dynamic_axes=dynamic_axes, input_names=input_names)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 203, in _optimize_graph
    graph = torch._C._jit_pass_onnx(graph, operator_export_type)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/__init__.py", line 263, in _run_symbolic_function
    return utils._run_symbolic_function(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/utils.py", line 934, in _run_symbolic_function
    return symbolic_fn(g, *inputs, **attrs)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_opset9.py", line 912, in constant_pad_nd
    padding = _convert_padding_node(padding)
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_opset9.py", line 902, in _convert_padding_node
    return sym_help._onnx_opset_unsupported_detailed('Pad', 9, 11, 'The sizes of the padding must be constant')
  File "/usr/local/lib/python3.6/dist-packages/torch/onnx/symbolic_helper.py", line 196, in _onnx_opset_unsupported_detailed
    'opset {}. {}. Please try opset version {}.'.format(op_name, current_opset, reason, supported_opset))
RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11.

Applying onnx-simplifier is a challenge in itself, see https://github.com/daquexian/onnx-simplifier/issues/91