jetson-inference: Custom ONNX model load problem

Hello dusty! I am very much enjoying your tutorials and am grateful for them.

I’ve looked through the existing issues but couldn’t find a relevant one.

I am currently trying to load a .onnx model (not one from your tutorials) with detectNet in my own C++ code (jetson-inference is included correctly). Using the C++ detectNet class, I create the net with net = detectNet::Create(NULL, "path/to/model.onnx", 0.0f, "path/to/labels.txt"); and I intend to use the following Create function from the source code:

	/**
	 * Load a custom network instance
	 * @param prototxt_path File path to the deployable network prototxt
	 * @param model_path File path to the caffemodel
	 * @param mean_pixel Input transform subtraction value (use 0.0 if the network already does this)
	 * @param class_labels File path to list of class name labels
	 * @param threshold default minimum threshold for detection
	 * @param input Name of the input layer blob.
	 * @param coverage Name of the output coverage classifier layer blob, which contains the confidence values for each bbox.
	 * @param bboxes Name of the output bounding box layer blob, which contains a grid of rectangles in the image.
	 * @param maxBatchSize The maximum batch size that the network will support and be optimized for.
	 */
	static detectNet* Create( const char* prototxt_path, const char* model_path, float mean_pixel=0.0f, 
						 const char* class_labels=NULL, float threshold=DETECTNET_DEFAULT_THRESHOLD, 
						 const char* input = DETECTNET_DEFAULT_INPUT, 
						 const char* coverage = DETECTNET_DEFAULT_COVERAGE, 
						 const char* bboxes = DETECTNET_DEFAULT_BBOX,
						 uint32_t maxBatchSize=DEFAULT_MAX_BATCH_SIZE, 
						 precisionType precision=TYPE_FASTEST,
				   		 deviceType device=DEVICE_GPU, bool allowGPUFallback=true );
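
If I spell out the arguments that my call above leaves at their defaults, I believe it ends up being effectively the following (this is just my reading of the header defaults, and the comments are my own):

    // my call from above, with the defaulted parameters written out explicitly
    // (DETECTNET_DEFAULT_INPUT appears to be "data", which matches the binding name in the error below)
    detectNet* net = detectNet::Create(
        NULL,                          // prototxt_path (none, since this is an .onnx model)
        "path/to/model.onnx",          // model_path
        0.0f,                          // mean_pixel
        "path/to/labels.txt",          // class_labels
        DETECTNET_DEFAULT_THRESHOLD,   // threshold
        DETECTNET_DEFAULT_INPUT,       // input    -> "data", the layer TensorRT can't find
        DETECTNET_DEFAULT_COVERAGE,    // coverage
        DETECTNET_DEFAULT_BBOX );      // bboxes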

The error occurs when the internal code tries to look up the input_blob:

[TRT]    INVALID_ARGUMENT: Cannot find binding of given name: data
[TRT]    failed to find requested input layer data in network
[TRT]    device GPU, failed to create resources for CUDA engine

I noticed that the default input_blob argument ("data") doesn’t work for an external .onnx model.

Should I provide the correct input_blob argument to load the model? And how can I find out what that name is? Most of the conversion examples (any model to .onnx) don’t mention this information, so I could use some help with this in jetson-inference.
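
If passing explicit names is the way to go, I imagine the call would look roughly like the sketch below. The names input_0, scores and boxes are just the ones used by the ONNX SSD-Mobilenet models from your pytorch-ssd tutorial, not something I have confirmed for my own model:

    // hypothetical sketch - the blob names must match whatever my exported .onnx graph actually contains
    detectNet* net = detectNet::Create(
        NULL,                          // no prototxt for an .onnx model
        "path/to/model.onnx",          // model_path
        0.0f,                          // mean_pixel
        "path/to/labels.txt",          // class_labels
        DETECTNET_DEFAULT_THRESHOLD,   // threshold
        "input_0",                     // input blob name
        "scores",                      // output coverage/confidence blob name
        "boxes" );                     // output bounding-box blob name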

Looking forward to your ideas! Thanks 😃

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 18 (2 by maintainers)

Most upvoted comments

I’m having the same problem with a model I converted from PyTorch YOLOv5. I get:

[TRT] INVALID_ARGUMENT: Cannot find binding of given name: data
[TRT] failed to find requested input layer data in network
[TRT] device GPU, failed to create resources for CUDA engine
[TRT] failed to create TensorRT engine for models/test/best.onnx, device GPU
[TRT] detectNet -- failed to initialize.
detectnet: failed to load detectNet model

I tried a few other options I had seen with other models, like --input-blob=input_0, but without knowing where to look up these names I wasn’t sure what to use.

I am using the aarch64 Jetson Xavier NX.
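
One thing I’ve been experimenting with (not sure it’s the intended way) is loading the .onnx directly with TensorRT’s ONNX parser and printing whatever input/output tensor names it reports, roughly like the sketch below (assumes TensorRT 7+ and the nvonnxparser headers, which jetson-inference already builds against):

    // rough sketch: list the input/output tensor names that TensorRT's ONNX parser sees,
    // so the right --input-blob / output blob names can be passed to detectNet
    #include <NvInfer.h>
    #include <NvOnnxParser.h>
    #include <iostream>

    class PrintLogger : public nvinfer1::ILogger
    {
        void log( Severity severity, const char* msg ) noexcept override
        {
            if( severity <= Severity::kWARNING )
                std::cout << msg << std::endl;
        }
    } gLogger;

    int main( int argc, char** argv )
    {
        const char* onnxPath = (argc > 1) ? argv[1] : "model.onnx";

        auto builder = nvinfer1::createInferBuilder(gLogger);
        const auto flags = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
        auto network = builder->createNetworkV2(flags);
        auto parser  = nvonnxparser::createParser(*network, gLogger);

        if( !parser->parseFromFile(onnxPath, static_cast<int>(nvinfer1::ILogger::Severity::kWARNING)) )
        {
            std::cerr << "failed to parse " << onnxPath << std::endl;
            return 1;
        }

        // whatever gets printed here is presumably what --input-blob and the output blob names should be
        for( int i = 0; i < network->getNbInputs(); i++ )
            std::cout << "input  " << i << ": " << network->getInput(i)->getName() << std::endl;

        for( int i = 0; i < network->getNbOutputs(); i++ )
            std::cout << "output " << i << ": " << network->getOutput(i)->getName() << std::endl;

        return 0;
    }

I’m assuming the names that prints are what the input/output blob arguments need to be set to, but I’d appreciate confirmation.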

To support different object detection models in jetson-inference, you would need to add/modify the pre/post-processing code found here:

This should be made to match the pre/post-processing that gets performed on the original model. It also seems like you might need to add support for a 3rd output layer - the previous detection models in jetson-inference used 2 output layers.

Since you are using PyTorch, you might also want to try the torch2trt project - https://github.com/nvidia-ai-iot/torch2trt