onnx-tensorrt: Stand-alone pad operation fails with: Assertion failed: inputs.at(1).is_weights()
Versions
OS: Ubuntu 18.04
torch: 1.4.0
onnx: 1.6.0
tensorrt: 7.0.0
cuda: 10.0
python: 2.7
Issue
My current workflow is PyTorch -> ONNX -> TensorRT, and I hit an issue with the nn.ConstantPad2d operation that results in the following error:
While parsing node number 23 [Pad -> "24"]:
--- Begin node ---
input: "input"
input: "22"
input: "23"
output: "24"
op_type: "Pad"
attribute {
name: "mode"
s: "constant"
type: STRING
}
--- End node ---
ERROR: /mypath/onnx-tensorrt/builtin_op_importers.cpp:2106 In function importPad:
[8] Assertion failed: inputs.at(1).is_weights()
[03/12/2020-09:06:29] [E] Failed to parse onnx file
[03/12/2020-09:06:29] [E] Parsing model failed
[03/12/2020-09:06:29] [E] Engine creation failed
[03/12/2020-09:06:29] [E] Engine set up failed
Minimal reproducing example
github_repro_example.py
-----------
import onnx
import argparse
import torch
import torch.nn as nn


class MinimalModel(nn.Module):
    def __init__(self):
        super(MinimalModel, self).__init__()
        self.constant_zero_pad = nn.ConstantPad2d((1, 0, 0, 0), 0)

    def forward(self, input_tensor):
        return self.constant_zero_pad(input_tensor)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='PSMNet')
    parser.add_argument('output_onnx')
    args = parser.parse_args()

    minimal_model = MinimalModel()
    minimal_model = nn.DataParallel(minimal_model)
    minimal_model.cuda()

    # Random deep feature
    input_tensor = torch.rand((1, 32, 128, 128))
    # Check model can do a forward pass
    minimal_model(input_tensor)

    # Export to onnx
    torch.onnx.export(
        minimal_model.module,
        (input_tensor,),
        args.output_onnx,
        export_params=True, verbose=True, training=False, opset_version=11
    )

    original_model = onnx.load(args.output_onnx)
    onnx.checker.check_model(original_model)
Run with:
python2 github_repro_example.py ./test.onnx
Run it through TensorRT:
trtexec --explicitBatch --onnx=./test.onnx --verbose
This results in the error above.
Comments
I tried all the tricks here but still get this error message. Very simple code (modified from a test) reproduces it: the model passes a forward-pass check, but parsing still fails with builtin_op_importers.cpp:2220 In function importPad: [8] Assertion failed: inputs.at(1).is_weights()
Oh, this is a duplicate of https://github.com/NVIDIA/TensorRT/issues/439#issuecomment-604252223, where I already tried playing around with folding the constants. There seems to be an issue with the second input to the Pad op: it should get constant-folded, but that does not happen during the torch export. I believe onnx-simplifier will do that for you, but I'm not sure whether it will work on the real/larger model.
Note the difference in the graphs here: https://github.com/pytorch/pytorch/issues/35516. You can try pinging that bug to get some attention on it.
Please see the discussion in the other issue, and close this issue as a duplicate.
Could you please share your code snippet?
Opset 9 does not work for me.
Applying onnx-simplifier is a challenge in itself; see https://github.com/daquexian/onnx-simplifier/issues/91