TensorRT: virtual nvinfer1::ILayer* nvinfer1::Network::getLayer(int) const: Assertion `layerIndex >= 0' failed.

After solving the earlier error ([TensorRT] ERROR: Network must have at least one output), another error occurred.

The code is:

# The Onnx path is used for Onnx models.
def build_engine_onnx(model_file):
    print('debug1')
    with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
        print('debug2')
        builder.max_workspace_size = common.GiB(1)
        print('debug3')
        # Load the Onnx model and parse it in order to populate the TensorRT network.
        with open(model_file, 'rb') as model:
            print('debug4')
            parser.parse(model.read())
            print('debug5')
        last_layer = network.get_layer(network.num_layers - 1)
        # Check if the last layer recognizes its output
        if not last_layer.get_output(0):
            # If not, then mark the output using the TensorRT API
            network.mark_output(last_layer.get_output(0))
        return builder.build_cuda_engine(network)

The output is:

debug1
debug2
debug3
debug4
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 765571023
debug5
python: ../builder/Network.cpp:863: virtual nvinfer1::ILayer* nvinfer1::Network::getLayer(int) const: Assertion `layerIndex >= 0' failed.
Aborted (core dumped)
(tensorrt) nvidia@Dell:~/Desktop/onnx_trt$ 

My PyTorch version is 1.3.0 and my TensorRT version is 6.0.1.5.

What could lead to this problem? Thanks.

About this issue

  • State: closed
  • Created 5 years ago
  • Comments: 15

Most upvoted comments

Thanks @rmccorm4,

the code snippet really helped to debug the issue. The following was the error while parsing the ONNX model:

In node -1 (importModel): INVALID_VALUE: Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."

I was able to resolve this by doing:

explicit_batch = 1 << (int)(tensorrt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
builder.create_network(explicit_batch)
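For context, folding that flag into the original build function might look like the sketch below. This assumes the TensorRT 6/7 Python API (build_cuda_engine and max_workspace_size are deprecated or removed in newer releases):

```python
def build_engine_onnx(model_file):
    """Sketch: build an engine from an ONNX model using an explicit-batch
    network (TensorRT 6/7 Python API)."""
    import tensorrt as trt  # deferred import keeps the sketch self-contained

    logger = trt.Logger(trt.Logger.WARNING)
    # EXPLICIT_BATCH is required by this version of the ONNX parser
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(logger) as builder, \
         builder.create_network(explicit_batch) as network, \
         trt.OnnxParser(network, logger) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GiB
        with open(model_file, "rb") as model:
            if not parser.parse(model.read()):
                # Report parser errors instead of crashing later in get_layer()
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)
```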

Thanks!

The onnx-simplifier is helpful! Thank you!

Hi @mrutyu1987,

Sounds like the ONNX parser failed, so the network ends up with no layers; network.num_layers - 1 then evaluates to -1, and getLayer(-1) triggers that assertion.

I would check the errors reported by the ONNX parser, if any.

You can check these with the Python and C++ APIs like here: https://github.com/rmccorm4/tensorrt-utils/blob/3267d196bd3dc0ddd1f1b9c2364560627f018d43/classification/imagenet/onnx_to_tensorrt.py#L187-L191
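That pattern can be sketched as a small helper (a sketch, not the linked code verbatim; it only assumes the parser exposes parse, num_errors, and get_error(i), as the TensorRT Python OnnxParser does):

```python
def parse_onnx_or_report(parser, model_path):
    """Parse an ONNX file with a TensorRT OnnxParser-like object.

    Returns True on success; on failure, prints every error the parser
    recorded (parser.num_errors / parser.get_error(i)) and returns False.
    """
    with open(model_path, "rb") as f:
        ok = parser.parse(f.read())
    if ok:
        return True
    for i in range(parser.num_errors):
        print(parser.get_error(i))
    return False
```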


I believe trtexec will also output these errors.

Can you share the following outputs?

trtexec --explicitBatch --onnx=<model.onnx>
trtexec --explicitBatch --onnx=<simplified_model.onnx>

If you’re using TRT 7, run those as is; if you’re running an earlier version, remove the --explicitBatch flag.