TensorRT: ERROR: builtin_op_importers.cpp:2179 In function importPad: [8] Assertion failed: inputs.at(1).is_weights()

Description

My current workflow is pb -> onnx -> tensorrt. Thanks to @jignparm (see https://github.com/onnx/tensorflow-onnx/issues/994), I finally converted the original pb to an ONNX model. But I hit an importPad error when converting the ONNX model to a TensorRT engine. According to jignparm, it seems to be a TensorRT issue.

[V] [TRT] ModelImporter.cpp:125: preprocessor/PadV2 [Pad] inputs: [preprocessor/resize/ResizeBilinear:0 -> (-1, -1, -1, -1)], [preprocessor/PadV2__135:0 -> (8)], [preprocessor/PadV2/constant_values:0 -> ()], ERROR: builtin_op_importers.cpp:2179 In function importPad: [8] Assertion failed: inputs.at(1).is_weights()

Environment

  • TensorRT Version: 7.1.3.4 GA
  • GPU Type: GTX 1080Ti
  • Nvidia Driver Version: 440.33.01
  • CUDA Version: 10.2
  • CUDNN Version: 8
  • Operating System + Version: Ubuntu 16.04
  • Python Version (if applicable): 3.7
  • TensorFlow Version (if applicable): 1.15
  • Onnx Version (if applicable): 1.6.0
  • TensorFlow-onnx Version (if applicable): 1.6.2

Relevant Files

The ONNX file can be downloaded at https://yadi.sk/d/NW5no-n0HtNECw. The original pb file can be downloaded at https://yadi.sk/d/PN3tDlm6GzwSpw

Steps To Reproduce

  1. Edit the pb and set the initializers to a type TRT can handle (float64 -> float32):
import tensorflow as tf

# Deserialize the frozen graph.
with tf.gfile.GFile('models/rjcd_13_0.1421_0.1290.pb', 'rb') as f:
    frozen_graph = tf.GraphDef()
    frozen_graph.ParseFromString(f.read())

# Copy every node, rewriting the float64 dtype attributes to float32
# on the handful of nodes that carry them.
f32 = tf.AttrValue(type=tf.float32.as_datatype_enum)
new_model = tf.GraphDef()
for n in frozen_graph.node:
    nn = new_model.node.add()
    nn.CopyFrom(n)
    if n.name in ('resize_information_computer/truediv/Cast',
                  'resize_information_computer/truediv/Cast_1'):
        nn.attr['DstT'].CopyFrom(f32)
    elif n.name == 'resize_information_computer/truediv':
        nn.attr['T'].CopyFrom(f32)
    elif n.name == 'resize_information_computer/ToFloat':
        nn.attr['SrcT'].CopyFrom(f32)

with tf.gfile.GFile('models/rjcd_13_0.1421_0.1290_new.pb', mode='wb') as f:
    f.write(new_model.SerializeToString())
  2. Convert the pb to ONNX:
python -m tf2onnx.convert --graphdef rjcd_13_0.1421_0.1290_new.pb --output rjcd_13_0.1421_0.1290.onnx --inputs image_input:0,max_detections:0,iou_threshold:0,score_threshold:0 --outputs output/Squeeze:0 --opset 11 --fold_const
  3. Convert the ONNX model to a TensorRT engine:
trtexec --onnx=rjcd_13_0.1421_0.1290.onnx --explicitBatch

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 31

Most upvoted comments

The internet is flooded with requests for a solution :] . This would help convert Mask R-CNN, Faster R-CNN, RetinaNet and many other networks.

Facing the same issue. @mk-nvidia any progress?

Why is this issue closed? STILL WRONG

Solved it using onnx-simplifier
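For context on that workaround: onnx-simplifier constant-folds the subgraph that computes the pads tensor, so the Pad node's second input becomes an initializer and passes the parser's `is_weights()` check. A hedged sketch of the usual invocation (the `_sim` output filename is illustrative):

pip install onnx-simplifier
python -m onnxsim rjcd_13_0.1421_0.1290.onnx rjcd_13_0.1421_0.1290_sim.onnx

Then rerun trtexec against the simplified model.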

Thanks for reporting, we’ll take a look.