opencv: Inference engine fails to infer with a different input size

I’m trying to run inference on images bigger (704×704) than those the model was trained on (256×256), which should not be a problem since the model is fully convolutional. While it works fine with DNN_BACKEND_OPENCV, it fails with DNN_BACKEND_INFERENCE_ENGINE:

OpenCV: terminate handler is called! The last OpenCV error is: OpenCV(4.3.0-dev) Error: Unspecified error (Failed to initialize Inference Engine backend (device = CPU): Incorrect dimensions for broadcasting for Mul_14) in cv::dnn::InfEngineBackendNet::initPlugin, file D:\Dev\opencv\modules\dnn\src\op_inf_engine.cpp, line 881

The model is timm-efficientnet-b4 (https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/efficientnet.py).

This is with OpenVINO 2019 R3. I tried converting my ONNX with 2020.2 and recompiling OpenCV against 2020.2, but it doesn’t help. However, it works if I load the ONNX instead of the bin/xml. Is there any chance to fix it with 2019 R3?

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 24 (24 by maintainers)

Most upvoted comments

@dkurt I get the same speed with 2020.4, no improvement unfortunately:

[ INFO ] InferenceEngine:
         API version............. 2.1.2020.4.0-359-21e092122f4-releases/2020/4
[ INFO ] Device info
         CPU
         MKLDNNPlugin............ version 2.1
         Build................... 2020.4.0-359-21e092122f4-releases/2020/4
Count:      658 iterations
Duration:   60044.60 ms
Latency:    88.22 ms
Throughput: 11.33 FPS