TensorRT: Stand-alone pad operation fails with: Assertion failed: inputs.at(1).is_weights()
Description
My current workflow is PyTorch -> ONNX -> TensorRT, and I encounter an issue with the nn.ConstantPad2d operation that results in the following error:
While parsing node number 23 [Pad -> "24"]:
--- Begin node ---
input: "input"
input: "22"
input: "23"
output: "24"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
--- End node ---
ERROR: /mypath/onnx-tensorrt/builtin_op_importers.cpp:2106 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights()
[03/12/2020-09:06:29] [E] Failed to parse onnx file
[03/12/2020-09:06:29] [E] Parsing model failed
[03/12/2020-09:06:29] [E] Engine creation failed
[03/12/2020-09:06:29] [E] Engine set up failed
Environment
OS: Ubuntu 18.04
torch: 1.4.0
onnx: 1.6.0
tensorrt: 7.0.0
cuda: 10.0
python: 2.7
Steps To Reproduce
# github_repro_example.py
# -----------
import argparse

import onnx
import torch
import torch.nn as nn


class MinimalModel(nn.Module):
    def __init__(self):
        super(MinimalModel, self).__init__()
        # Pad a single column of zeros on the left edge only
        self.constant_zero_pad = nn.ConstantPad2d((1, 0, 0, 0), 0)

    def forward(self, input_tensor):
        return self.constant_zero_pad(input_tensor)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='PSMNet')
    parser.add_argument('output_onnx')
    args = parser.parse_args()

    minimal_model = MinimalModel()
    minimal_model = nn.DataParallel(minimal_model)
    minimal_model.cuda()

    # Random deep feature; placed on the GPU so the export below sees
    # inputs on the same device as the unwrapped model
    input_tensor = torch.rand((1, 32, 128, 128)).cuda()

    # Check the model can do a forward pass
    minimal_model(input_tensor)

    # Export to ONNX, unwrapping DataParallel to get the plain module
    torch.onnx.export(
        minimal_model.module,
        (input_tensor,),
        args.output_onnx,
        export_params=True, verbose=True, training=False, opset_version=11
    )

    # Sanity-check the exported model
    original_model = onnx.load(args.output_onnx)
    onnx.checker.check_model(original_model)
Run with:
python2 github_repro_example.py ./test.onnx
Then run it through TensorRT:
trtexec --explicitBatch --onnx=./test.onnx --verbose
which will fail with the error above.
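For context, the assertion fires because the parser requires the Pad node's second input (pads) to name a stored initializer (weights), while the opset-11 export produces it as the output of upstream nodes instead. A minimal sketch (not part of the original report) to check this on the exported model:

import onnx

model = onnx.load("./test.onnx")
initializer_names = {init.name for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type == "Pad":
        pads_input = node.input[1]
        # The parser's is_weights() check passes only when `pads` names
        # a stored initializer, not the output of another node
        print("Pad node output:", node.output[0])
        print("  pads comes from:", pads_input)
        print("  is initializer:", pads_input in initializer_names)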
Related issues
- https://github.com/onnx/onnx-tensorrt/issues/378
- https://github.com/onnx/onnx-tensorrt/issues/411
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 25
Half a year later, the issue still persists.

Same issue, has anyone solved this?

Sorry @romain87400, this command can help fold constants to resolve some
Assertion failed: inputs.at(1).is_weights()
failures, but we still only support constant 0 padding.

@ttyio I tried
polygraphy surgeon sanitize model.onnx --fold-constants --output model_folded.onnx
and I am back at the first error, [8] Assertion failed: mode == "constant" && value == 0. When I use onnxsim to fix [8] Assertion failed: inputs.at(1).is_weights(), I end up back at [8] Assertion failed: mode == "constant" && value == 0. Do you have a solution for this problem? I work on an Nvidia Xavier NX, JetPack 4.4, with TensorRT 7.1.3.
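One way to check which case the folded model is in: in opset 11 the Pad op takes an optional third input holding the constant fill value, and TensorRT 7.x accepts only constant mode with a fill value of 0. A minimal sketch (assuming the model_folded.onnx produced by the polygraphy command above; not from the original thread) to print that value:

import onnx
from onnx import numpy_helper

model = onnx.load("model_folded.onnx")
initializers = {init.name: init for init in model.graph.initializer}

for node in model.graph.node:
    if node.op_type != "Pad":
        continue
    # The optional third input of opset-11 Pad is the constant fill value
    if len(node.input) > 2 and node.input[2] in initializers:
        value = numpy_helper.to_array(initializers[node.input[2]])
        print("Pad fill value:", value)  # must be 0 for TensorRT 7.x
    else:
        print("Pad fill value omitted, defaults to 0")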
@opeide @dedoogong @wkl2013DeepVision could you try the steps in the FAQ? We have a document here: https://github.com/onnx/onnx-tensorrt/blob/master/docs/faq.md#common-assertion-errors
Thanks.
Hi @rmccorm4, after applying your instructions mentioned in issues #386 and #439 I got a new error. Any idea how to fix it?
Perhaps the constant-folding functionality of the current torch2onnx export doesn't support this particular structure yet. Pre-opset-11, the pads input of the Pad node was an attribute instead of an input; can you try exporting to opset 10 and inspecting the resulting graph?

For padding, the ONNX-TRT parser expects the padded values to be initializers (i.e. constants) in the ONNX graph. I checked @rmccorm4's zip package, and the nodes contributing to the pad dimensions were constant-folded into an initializer by the onnx-simplifier.
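To illustrate the opset-10 suggestion, here is a standalone sketch mirroring the repro script (the output filename is arbitrary). At opset 10 the exporter writes the ONNX Pad op with pads and value as node attributes, which the parser reads directly as constants:

import torch
import torch.nn as nn

# Same padding module as in the repro script
model = nn.ConstantPad2d((1, 0, 0, 0), 0)
dummy = torch.rand((1, 32, 128, 128))

# opset 10 emits Pad with `pads`/`value` as attributes rather than inputs,
# sidestepping the is_weights() check on the second input
torch.onnx.export(model, (dummy,), "./test_opset10.onnx",
                  export_params=True, verbose=True, opset_version=10)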
Hi @copaah,

As for this error: PyTorch generated kind of a funky ONNX graph for this simple model. I don't know if this is an issue to be fixed on their part or on the ONNX parser's part; @kevinch-nv might be able to answer that.

As a workaround, you can try running onnx-simplifier on the ONNX model and then parsing that, which worked for me.
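For reference, the workaround can be applied with onnx-simplifier's Python API; a sketch assuming pip install onnx-simplifier, with an arbitrary output filename:

import onnx
from onnxsim import simplify

model = onnx.load("./test.onnx")
# Fold the subgraph feeding Pad's `pads` input into a stored initializer,
# which is what the parser's is_weights() check requires
model_simplified, check = simplify(model)
assert check, "simplified model failed validation"
onnx.save(model_simplified, "./test_simplified.onnx")

The command-line form, python3 -m onnxsim ./test.onnx ./test_simplified.onnx, does the same thing.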