openvino: [Bug]: std::runtime_error when trying to read_model in C++
System information (version)
- OpenVINO Source => Runtime
- OpenVINO Version => 2023.0.1
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2019
- Problem classification => Reading model
- Device used => CPU
- Model name => YOLO-NAS-s
Detailed description
When I try to load the model in C++, whether from the ONNX file or from the IR (XML + BIN) pair, I get a runtime_error.
Microsoft C++ exception: std::runtime_error at memory location 0x000000D6156FE068.
The model loads fine when I use Python.
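For contrast, a minimal sketch of the Python path that works for me (the file name is a placeholder):

```python
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("yolo_nas_s.onnx")  # placeholder path; same file loads without error here
print(model.get_friendly_name())
```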
I am following the official documentation for loading the model:
#include <openvino/openvino.hpp>

ov::Core core;
std::shared_ptr<ov::Model> model = core.read_model(model_path); // exception thrown here
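For debugging, one option is to catch the exception and print its message, since `ov::Exception` derives from `std::runtime_error` and `what()` carries the actual failure reason. A minimal, self-contained sketch (the path is a placeholder):

```cpp
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    try {
        ov::Core core;
        auto model = core.read_model("yolo_nas_s.onnx");  // placeholder path
        std::cout << "Loaded: " << model->get_friendly_name() << "\n";
    } catch (const std::exception& ex) {
        // what() reveals the real cause behind the generic runtime_error
        std::cerr << "read_model failed: " << ex.what() << "\n";
        return 1;
    }
    return 0;
}
```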
The ONNX model file I am using:
Steps to reproduce
Issue submission checklist
- I report the issue; it's not a question
- I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution
- There is reproducer code and related data files: images, videos, models, etc.
About this issue
- State: open
- Created a year ago
- Comments: 41 (10 by maintainers)
@avitial You're right. It does work. I was debugging in Visual Studio with exception breakpoints turned on, so it would stop whenever an exception was thrown. But if I disable the breakpoint and choose to ignore the exception, it still works. Tested on 2023.0.1.
These are the exceptions I get:
@avitial I get a `std::runtime_error` exception in the `core.read_model` part when I try to do that in versions 2022.3, 2023.0, and 2023.0.1. This only happens in C++. It's not related to performance; the model doesn't load at all, it just throws that exception when I try to load it. The same code, however, has no issues running in version 2022.1.
Okay, I used the `serialize` function to save to XML and BIN. It still throws the same error.

I converted by going from PyTorch > TorchScript > IR through `convert_model`, using the latest Python package 2022.3.1, and tested in C++ using the latest 2022.3.1 DLL files downloaded from the archive. I couldn't go from PyTorch > IR directly because the scripting function used by `convert_model` throws an error; I had to trace the model instead.

@Y-T-G The difference is which original file you feed to `convert_model`. If it is the .onnx file, then the result will be the same as described here. What I am actually asking is to pass the original torch.nn module to `convert_model` and then save the OV model as IR (.bin + .xml). With a PyTorch model as input, the PyTorch frontend will be used instead of the ONNX frontend, and the resulting IR might be slightly different.

Just wanted to make sure whether the issue is specific to torch.onnx.export or whether something is wrong with the conversion to IR. Also, if 2022.1 works fine, it would be great to feed the 2022.1 IR to the 2023.0.1 runtime, if possible, to narrow down the faulty area.
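For reference, a minimal sketch of that suggested experiment. The stand-in module, input shape, and output file names are assumptions for illustration (assumes a 2023.x `openvino` package, where `convert_model` accepts a torch.nn.Module directly):

```python
import torch
from openvino.tools.mo import convert_model
from openvino.runtime import serialize

# Stand-in module for illustration; substitute the real YOLO-NAS-s torch.nn.Module.
torch_model = torch.nn.Conv2d(3, 16, 3)

# Feeding the torch.nn.Module directly makes convert_model use the
# PyTorch frontend rather than the ONNX frontend.
ov_model = convert_model(torch_model, example_input=torch.rand(1, 3, 640, 640))

# Save the OV model as IR (.xml + .bin) for the C++ runtime to read.
serialize(ov_model, "yolo_nas_s.xml")
```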