openvino: [Bug] GPU extension
System information (version)
- OpenVINO Source => Runtime
- OpenVINO Version => Version 2023.0
- Operating System / Platform => Windows 64 Bit
- Compiler => Visual Studio 2019 / CMake
- Problem classification: model load with GPU extension
- Device used: GPU
- Framework: onnx
Detailed description
With OpenVINO 2022.2, I implemented a custom GPU layer; the code looks like this:
core.set_property("GPU", { { CONFIG_KEY(CONFIG_FILE), xmlPath } });
core.add_extension(vinoParam.customLibPath);
It works correctly.
But after updating OpenVINO to 2023.0, read_model fails because it can no longer recognize the custom layer. Do I need to make any changes or upgrades to the GPU custom layer after upgrading to 2023.0?
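A minimal sketch of the full flow described above (the extension library, config XML, and model paths are placeholders; the plain "CONFIG_FILE" string is used here in place of the CONFIG_KEY(CONFIG_FILE) macro, which expands to that key):

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Register the shared library that implements the custom op (placeholder path).
    core.add_extension("custom_op_extension.dll");

    // Point the GPU plugin at the custom-kernel config XML (placeholder path).
    // "CONFIG_FILE" is the key that CONFIG_KEY(CONFIG_FILE) expands to.
    core.set_property("GPU", {{"CONFIG_FILE", "custom_layer_example.xml"}});

    // With 2022.2 this worked; with 2023.0 read_model() no longer recognizes the custom layer.
    auto model = core.read_model("model.onnx");
    auto compiled_model = core.compile_model(model, "GPU");
    return 0;
}
```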
Issue submission checklist
- I report the issue; it's not a question
- I checked the problem with documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution
- There is reproducer code and related data files: images, videos, models, etc.
About this issue
- Original URL
- State: closed
- Created a year ago
- Comments: 33 (15 by maintainers)
@wang7393 I've tried the following code based on @mbencer's test onnx_op_extension_mixed_legacy_and_new_api.
custom_layer_example.xml:
custom_relu.cl:
And both ways of setting the custom config for GPU (via Core::set_property() and via the CompiledModel constructor) worked fine and printed global IDs.
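For reference, a minimal sketch of those two variants (extension library, model, and config file names are placeholders; the plain "CONFIG_FILE" key is used in place of the legacy macro):

```cpp
#include <openvino/openvino.hpp>
#include <string>

int main() {
    ov::Core core;
    core.add_extension("custom_op_extension.dll");              // placeholder extension library
    auto model = core.read_model("model_with_custom_op.onnx");  // placeholder model

    const std::string config_path = "custom_layer_example.xml"; // GPU custom-kernel config

    // Variant 1: set the config globally on the GPU plugin, then compile.
    core.set_property("GPU", ov::AnyMap{{"CONFIG_FILE", config_path}});
    auto compiled_global = core.compile_model(model, "GPU");

    // Variant 2: pass the config as a compilation property,
    // which is forwarded to the CompiledModel constructor.
    auto compiled_local = core.compile_model(model, "GPU", ov::AnyMap{{"CONFIG_FILE", config_path}});
    return 0;
}
```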
I'd suggest checking the IR after read_model(), as I don't see any issues with the GPU extensibility mechanism so far. You can try to save the model via ov::serialize(model, "ir.xml", "ir.bin"); and check whether your custom op has been generated correctly by the FE extension; a minimal sketch follows.
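(Model and output file names below are just examples.)

```cpp
#include <openvino/openvino.hpp>
#include <openvino/core/graph_util.hpp>  // for ov::serialize

int main() {
    ov::Core core;
    core.add_extension("custom_op_extension.dll");  // placeholder extension library
    auto model = core.read_model("model.onnx");     // placeholder model

    // Dump the in-memory IR so the generated custom op can be inspected in ir.xml.
    ov::serialize(model, "ir.xml", "ir.bin");
    return 0;
}
```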
Also, if you're able to build OV from sources, I'd suggest building it with the -DENABLE_DEBUG_CAPS=ON CMake option and then running your app with the OV_GPU_VERBOSE=2 environment variable. In the verbose logs you're supposed to see something like this: