TensorRT: ONNX/TensorRT conversion failure for Mask-RCNN model

Description

I am getting shape-related failures when I try to convert a Detectron2 Mask R-CNN model to ONNX (and then to TensorRT), despite following the guide here.

Environment

TensorRT Version: from source
NVIDIA GPU: 1 Quadro RTX 6000
NVIDIA Driver Version: 450.51.05
CUDA Version: 11.6
Python Version (if applicable): 3.9.12
PyTorch Version (if applicable): 1.12.1+cu116
Baremetal or Container (if so, version): Baremetal
CPU Architecture: x86_64
OS (e.g., Linux): Ubuntu 18.04

Relevant Files

Steps To Reproduce

Followed the steps outlined in this README.md.
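
For context, the README's preceding export step (the one that produces model.onnx passed to create_onnx.py as --exported_onnx) looks roughly like the following. This is a sketch, not the exact command used here: the sample image and paths are placeholders, and flag names may differ slightly across Detectron2 versions.

python detectron2/tools/deploy/export_model.py \
    --sample-image sample_1344x1344.jpg \
    --config-file detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
    --export-method tracing \
    --format onnx \
    --output ./ \
    MODEL.WEIGHTS detectron/model_final_f10217.pkl \
    MODEL.DEVICE cuda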

Console Log:

(base) adityamishrav5@sixian-ThinkStation-P520:~/Desktop/ir_camera$ python content/TensorRT/samples/python/detectron2/create_onnx.py \
>     --exported_onnx model.onnx \
>     --onnx converted.onnx \
>     --det2_config detectron2/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \
>     --det2_weights detectron/model_final_f10217.pkl \
>     --sample_image new_2.jpg
WARNING:root:Pytorch pre-release version 1.13.0a0+gitd2b8b8f - assuming intent to test it
WARNING:root:Pytorch pre-release version 1.13.0a0+gitd2b8b8f - assuming intent to test it
INFO:ModelHelper:ONNX graph loaded successfully
INFO:ModelHelper:Number of FPN output channels is 256
INFO:ModelHelper:Number of classes is 80
INFO:ModelHelper:First NMS max proposals is 1000
INFO:ModelHelper:First NMS iou threshold is 0.7
INFO:ModelHelper:First NMS score threshold is 0.01
INFO:ModelHelper:First ROIAlign type is ROIAlignV2
INFO:ModelHelper:First ROIAlign pooled size is 7
INFO:ModelHelper:First ROIAlign sampling ratio is 0
INFO:ModelHelper:Second NMS max proposals is 100
INFO:ModelHelper:Second NMS iou threshold is 0.5
INFO:ModelHelper:Second NMS score threshold is 0.05
INFO:ModelHelper:Second ROIAlign type is ROIAlignV2
INFO:ModelHelper:Second ROIAlign pooled size is 14
INFO:ModelHelper:Second ROIAlign sampling ratio is 0
INFO:ModelHelper:Individual mask output resolution is 28x28
INFO:ModelHelper:ONNX graph input shape: [1, 3, 1344, 1344] [NCHW format set]
INFO:ModelHelper:Found Sub node
INFO:ModelHelper:Found Div node
INFO:ModelHelper:Found Conv node
/home/adityamishrav5/Desktop/ir_camera/pytorch/torch/functional.py:482: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /home/adityamishrav5/Desktop/ir_camera/pytorch/aten/src/ATen/native/TensorShape.cpp:3071.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
Traceback (most recent call last):
  File "/home/adityamishrav5/Desktop/ir_camera/content/TensorRT/samples/python/detectron2/create_onnx.py", line 658, in <module>
    main(args)
  File "/home/adityamishrav5/Desktop/ir_camera/content/TensorRT/samples/python/detectron2/create_onnx.py", line 639, in main
    det2_gs.process_graph(anchors, args.first_nms_threshold, args.second_nms_threshold)
  File "/home/adityamishrav5/Desktop/ir_camera/content/TensorRT/samples/python/detectron2/create_onnx.py", line 625, in process_graph
    p2, p3, p4, p5 = backbone()
  File "/home/adityamishrav5/Desktop/ir_camera/content/TensorRT/samples/python/detectron2/create_onnx.py", line 437, in backbone
    first_RN_H = first_resnear_input.outputs[0].shape[2]*2.0
AttributeError: 'NoneType' object has no attribute 'outputs'
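
The AttributeError shows that first_resnear_input is None when backbone() tries to read its outputs, i.e. create_onnx.py could not find a node pattern it expects in the FPN backbone of the exported graph. One way to investigate is to inspect the exported ONNX for the nearest-neighbor Resize nodes used by the FPN top-down path. A minimal diagnostic sketch, assuming onnx and onnx_graphsurgeon are installed and model.onnx is the file passed as --exported_onnx:

import onnx
import onnx_graphsurgeon as gs

# Load the graph that create_onnx.py receives via --exported_onnx.
graph = gs.import_onnx(onnx.load("model.onnx"))

# The FPN top-down path upsamples with Resize nodes; if the expected
# pattern is missing, the script's node search can return None.
resize_nodes = [node for node in graph.nodes if node.op == "Resize"]
print(f"Found {len(resize_nodes)} Resize nodes")
for node in resize_nodes:
    # Print the resize mode and output shapes (shapes may be None if not inferred).
    print(node.name, node.attrs.get("mode"), [out.shape for out in node.outputs])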

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 28 (8 by maintainers)

Most upvoted comments

@patil-506 Btw, I’ve never guaranteed that the model will work on a Jetson. I think it will, I just haven’t tested it myself. First, you should run create_onnx.py on your PC, not on the Jetson, mostly because it will be extremely hard to satisfy the library requirements on an ARM device. You should start using the Jetson at the step where you build the TRT engine. Second, you are using an old TRT. I specifically mention that TRT must be >= 8.4.1. I recommend reading this issue; it is very similar in some sense.
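
For reference, once converted.onnx has been produced on the PC, the engine can typically be built directly on the Jetson with trtexec. A minimal sketch, assuming TensorRT >= 8.4.1 is installed there; file names are placeholders and the precision flag is optional:

# Check the installed TensorRT version first.
python3 -c "import tensorrt; print(tensorrt.__version__)"

# Build the engine from the converted graph on the Jetson.
trtexec --onnx=converted.onnx --saveEngine=model_fp16.engine --fp16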