openvino: [Bug] Could not run model optimizer with ONNX model
System information (version)
- OpenVINO => 2022.1 (docker container dlstreamer/dlstreamer)
- Operating System / Platform => Ubuntu 22.04 LTS
- Problem classification: Model Conversion
- Framework: TensorFlow
- Model name: Custom RNN model
Detailed description
Hello. I’m running dlstreamer/dlstreamer on a fitlet2 device (https://fit-iot.com/web/products/fitlet2/) under Ubuntu 22.04. My goal is to take my custom model, exported from TensorFlow (Keras backend) to ONNX format, and integrate it into the DL Streamer framework so that I can compile custom C++ code that uses GStreamer plugins and runs inference on this model. The model is exported directly from TensorFlow 1.15.5 running on an NVIDIA GPU, after training for about 62 epochs. Conversion fails while parsing a certain convolution layer: the error says the window size after dilation is larger than the data shape after padding. Could you provide some help? Should I keep the model and change the export settings, or should I use another export technique, such as a frozen model? Thank you in advance.
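For reference, the attributes of the failing Conv node can be inspected with the onnx Python package before re-exporting; a minimal sketch (the node name res3a_branch2a is taken from the error log below):

```python
import onnx

# Load the exported model and dump the attributes (dilations, pads,
# strides, kernel_shape) of the Conv node that fails validation.
model = onnx.load("engine_file.onnx")
for node in model.graph.node:
    if node.op_type == "Conv" and node.name == "res3a_branch2a":
        for attr in node.attribute:
            print(attr.name, onnx.helper.get_attribute_value(attr))
```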
mo -w engine_file.onnx --input_shape [1,3,48,96] --input image_input --output tf_op_layer_ArgMax
Model Optimizer arguments:
Common parameters:
- Path to the Input Model: /home/engine_file.onnx
- Path for generated IR: /home/.
- IR output name: engine_file
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: image_input
- Output layers: tf_op_layer_ArgMax
- Input shapes: [1,3,48,96]
- Source layout: Not specified
- Target layout: Not specified
- Layout: Not specified
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- User transformations: Not specified
- Reverse input channels: False
- Enable IR generation for fixed input shape: False
- Use the transformations config file: None
Advanced parameters:
- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
- Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
OpenVINO runtime found in: /usr/local/lib/python3.8/dist-packages/openvino
OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ] While validating ONNX node '<Node(Conv): res3a_branch2a>':
Check 'window_dilated_dim <= data_padded_dilated_dim' failed at core/shape_inference/include/convolution_shape_inference.hpp:209:
While validating node 'v1::Convolution Convolution_460 (re_lu_4/Relu:0[0]:f32{1,64,1,1}, res3a_branch2a_W_new[0]:f32{128,64,3,3}) -> (dynamic...)' with friendly_name 'Convolution_460':
Window after dilation has dimension (dim: 3) larger than the data shape after padding (dim: 2) at axis 0.
[ ERROR ] Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/mo/main.py", line 533, in main
ret_code = driver(argv)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/mo/main.py", line 489, in driver
graph, ngraph_function = prepare_ir(argv)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/mo/main.py", line 394, in prepare_ir
ngraph_function = moc_pipeline(argv, moc_front_end)
File "/usr/local/lib/python3.8/dist-packages/openvino/tools/mo/moc_frontend/pipeline.py", line 147, in moc_pipeline
ngraph_function = moc_front_end.convert(input_model)
RuntimeError: While validating ONNX node '<Node(Conv): res3a_branch2a>':
Check 'window_dilated_dim <= data_padded_dilated_dim' failed at core/shape_inference/include/convolution_shape_inference.hpp:209:
While validating node 'v1::Convolution Convolution_460 (re_lu_4/Relu:0[0]:f32{1,64,1,1}, res3a_branch2a_W_new[0]:f32{128,64,3,3}) -> (dynamic...)' with friendly_name 'Convolution_460':
Window after dilation has dimension (dim: 3) larger than the data shape after padding (dim: 2) at axis 0.
[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------
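The numbers in the message pin down the failure: the input to Convolution_460 has spatial shape 1×1 (f32{1,64,1,1}) while the kernel is 3×3, so the dilated window cannot fit inside the padded input. A sketch of the failing check (dilation and padding values are assumptions chosen to match the reported dims):

```python
# Reproduces the 'window_dilated_dim <= data_padded_dilated_dim' check.
# Kernel size 3 comes from res3a_branch2a_W_new f32{128,64,3,3}; the input
# spatial dim 1 comes from f32{1,64,1,1}. Dilation 1 and total padding 1
# are assumptions consistent with the reported dims.
kernel, dilation = 3, 1
window_dilated = (kernel - 1) * dilation + 1   # -> 3 ("dim: 3" in the error)
data_dim, total_padding = 1, 1
data_padded = data_dim + total_padding         # -> 2 ("dim: 2" in the error)
# This assertion fails, exactly as the converter reports.
assert window_dilated <= data_padded
```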
Steps to reproduce
Here is the link to the model: Model
Issue submission checklist
- I report the issue, it’s not a question
- I checked the problem with documentation, FAQ, open issues, Stack Overflow, etc and have not found solution
- There is reproducer code and related data files: images, videos, models, etc.
About this issue
- State: closed
- Created 2 years ago
- Comments: 30 (15 by maintainers)
@mbencer Yes, now it works! Thank you a lot for following me through these months. I can confirm it works both with Model Optimizer and benchmark_app. In this case the activation is Sigmoid, and the output layer is a softmax in place of [tf_op_layer_Max, tf_op_layer_ArgMax], but I’ll figure out by myself how to adapt the model to the OpenVINO environment. Thank you so much.
Hi @OscarPedaVendere, I’ve reproduced the conversion on my side and I think I have a solution. When I explicitly define the target opset version as 12, like this:
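(a sketch of the conversion call, assuming the model is converted with keras2onnx.convert_keras, which accepts a target_opset argument; keras_model and the output filename are illustrative)

```python
import keras2onnx

# Convert the in-memory Keras model, forcing ONNX opset 12, then save it.
onnx_model = keras2onnx.convert_keras(keras_model, keras_model.name,
                                      target_opset=12)
keras2onnx.save_model(onnx_model, "model.onnx")
```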
everything works. I’ve tested it with pip install tensorflow==1.15.5 keras==2.2.4 keras2onnx==1.7.0 onnx, using Python 3.7. In that opset version, axes can be passed to ReduceSum as an attribute, and the LSTM is created with Sigmoid instead of the unsupported HardSigmoid.
Confirmed with direct inference via benchmark_app (./benchmark_app -m model.onnx --shape [1,3,48,96]) and with Model Optimizer (mo -w model.onnx --input_shape [1,3,48,96] --input image_input --output tf_op_layer_ArgMax). Please let me know if this solution works for you.
@mbencer Thank you for your replies.
This model is part of a larger library that would not make sense to export as a whole. I’ve created a zip and just ran a check that everything needed is there. Extracting and checking the whole library is not feasible for me at the moment, but I guess this should be all right anyway. Here’s the zip.
openvino_bugfix.zip
The experiment_spec is a class that you can initialize with the collections.namedtuple() function after reading the contents of the specs/arabic_spec.txt file. That should be it; let me know if you get errors while loading the spec.
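A sketch of loading the spec that way, assuming the file holds one colon-separated key/value pair per line (the exact format of arabic_spec.txt is not shown here, so adjust the parsing accordingly):

```python
from collections import namedtuple

# Assumed format: one "key: value" pair per line; keys must be valid
# Python identifiers for namedtuple to accept them as field names.
pairs = {}
with open("specs/arabic_spec.txt") as f:
    for line in f:
        if ":" in line:
            key, value = line.split(":", 1)
            pairs[key.strip()] = value.strip()

ExperimentSpec = namedtuple("ExperimentSpec", pairs.keys())
experiment_spec = ExperimentSpec(**pairs)
```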
Hi @tomdol, I have created a JIRA ticket for this case.
Ref: 94180