onnx-tensorrt: [optimizer.cpp::computeCosts::1981] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Reshape_419 + Transpose_420...Gather_2423]}

[optimizer.cpp::computeCosts::1981] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Reshape_419 + Transpose_420...Gather_2423]}

I have an ONNX model that can be successfully converted to TRT with TensorRT 7.3, but the conversion fails when I upgrade to TensorRT 8 and use onnx-tensorrt from the master branch.

Any idea where this error comes from?

All the ops in my model are:

Exploring on onnx model: detr_sim.onnx_changed.onnx
ONNX model sum on: detr_sim.onnx_changed.onnx


-------------------------------------------
ir version: 7
opset_import: 12 
producer_name: 
doc_string: 
all ops used: Split,Squeeze,Pad,Unsqueeze,Concat,Conv,Mul,Add,Relu,MaxPool,Reshape,Transpose,MatMul,Div,Softmax,ReduceMean,Sub,Pow,Sqrt,Sigmoid,Gather

These ops are all already supported.
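
For reference, a minimal sketch of how an op summary like the one above can be reproduced with the onnx Python package (the file name is taken from the log above; the exploration tool actually used is not shown, so this only approximates its output):

    import onnx

    # Path taken from the log above; substitute your own model file.
    model = onnx.load("detr_sim.onnx_changed.onnx")

    print("ir version:", model.ir_version)
    print("opset_import:", [imp.version for imp in model.opset_import])
    # Collect the distinct operator types used in the graph.
    print("all ops used:", ",".join(sorted({node.op_type for node in model.graph.node})))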

Full output:

----------------------------------------------------------------
Input filename:   detr_sim.onnx_changed.onnx
ONNX IR version:  0.0.7
Opset version:    12
Producer name:    
Producer version: 
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
[2021-10-11 11:40:22 WARNING] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Building TensorRT engine, FP16 available:1
    Max batch size:     32
    Max workspace size: 2288.71 MiB
[2021-10-11 11:40:40 WARNING] Skipping tactic 0 due to Myelin error: autotuning: CUDA error 3 allocating 0-byte buffer: 
[2021-10-11 11:40:41   ERROR] 10: [optimizer.cpp::computeCosts::1981] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Reshape_419 + Transpose_420...Gather_2423]}.)
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to create object
[1]    264517 abort (core dumped)  onnx2trt detr_sim.onnx_changed.onnx -o detr.trt -w 2399889023

Most upvoted comments

I met the same error with TRT 8.2.3.0:

[01/28/2022-07:54:29] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[113 + (Unnamed Layer* 72) [Shuffle]...(Unnamed Layer* 171) [Shuffle]]}.)

With TRT 7.2.3 everything works fine.

In both cases I use trt.OnnxParser.
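
For context, a minimal sketch of that trt.OnnxParser flow using the TensorRT 8.x Python bindings (the model path is a placeholder; error printing is added so parse failures are visible):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    # The ONNX parser requires an explicit-batch network.
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:  # placeholder path
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))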

I had a similar issue in the tensorrt:22.02-py3 container:

[optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node

I solved it by increasing the workspace size.
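
For anyone landing here, a rough sketch of what increasing the workspace size looks like in the Python builder config (the exact attribute depends on the TensorRT version, and the 4 GiB value is arbitrary):

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    config = builder.create_builder_config()

    # TensorRT 8.4 and newer: workspace is configured through the memory-pool API.
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)  # 4 GiB

    # TensorRT 8.0-8.3: the older attribute is used instead.
    # config.max_workspace_size = 4 << 30

If you build with trtexec instead, the --workspace flag (in MiB) serves the same purpose on TRT 8.x, as far as I know.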

Hello @kevinch-nv, I met the error with "TensorRT 8.2.5.1 + CUDA 11.3 + cuDNN 8.2, with a GTX 3090", but it does not appear with "TensorRT 8.2.5.1 + CUDA 11.3 + cuDNN 8.2, with a GTX 1050 Ti". So I guess the cause is the difference between the GPUs. I also tried the newest TensorRT 8.4 (TensorRT 8.4.0 GA + CUDA 11.6 + cuDNN 8.4), and the same error still exists.

[07/01/2022-10:46:09] [W] [TRT] Skipping tactic 0x0000000000000000 due to Myelin error: autotuning: CUDA error 3 allocating 0-byte buffer:
[07/01/2022-10:46:09] [E] Error[10]: [optimizer.cpp::computeCosts::3628] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[(Unnamed Layer* 126) [Constant] + (Unnamed Layer* 127) [Shuffle]...Unsqueeze_205]}.)
[07/01/2022-10:46:09] [E] Error[2]: [builder.cpp::buildSerializedNetwork::636] Error Code 2: Internal Error (Assertion engine != nullptr failed. )

Building an engine from file gpt2-pretrained.onnx; this may take a while...
[02/19/2022-21:14:43] [TRT] [W] Skipping tactic 0 due to insuficient memory on requested size of 1170309120 detected for tactic 0.
[02/19/2022-21:14:43] [TRT] [E] 10: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[transformer.wte.weight...MatMul_2899]}.)
[02/19/2022-21:14:43] [TRT] [E] 2: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )

I faced the same error with TRT 8.2.2.1 and solved it by increasing the workspace size.

I have the same error with TensorRT 8.2.3.0 + CUDA 11.2 on a P40, and the whole process succeeds with TensorRT 8.6.1.6.

I met the same error with TRT 8.2.2. Did anyone fix it?