PINTO_model_zoo: Movenet: error on loading model with Openvino
1. OS Ubuntu 18.04
2. OS Architecture x86_64
3. Version of OpenVINO 2021.3.394
9. Movenet from your model zoo
Ha ha it’s me again 😉 I saw you have already converted Movenet! Naturally I wanted to give it a try. I get this error message when loading the ‘lightning’ (or ‘thunder’) model:
openvino@ubuntu:/workdir$ python3 MovenetOpenvino.py -m lightning
Video FPS: 30
Loading Inference Engine
Device info:
CPU
MKLDNNPlugin version ......... 2.1
Build ........... 2021.3.0-2787-60059f2c755-releases/2021/3
Pose Detection model - Reading network files:
/workdir/models/movenet_lightning_FP32.xml
/workdir/models/movenet_lightning_FP32.bin
Traceback (most recent call last):
File "MovenetOpenvino.py", line 569, in <module>
output=args.output)
File "MovenetOpenvino.py", line 99, in __init__
self.load_model(xml, device)
File "MovenetOpenvino.py", line 131, in load_model
self.pd_net = self.ie.read_network(model=xml_path, weights=bin_path)
File "ie_api.pyx", line 293, in openvino.inference_engine.ie_api.IECore.read_network
File "ie_api.pyx", line 315, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: Check 'element::Type::merge(inputs_et, inputs_et, get_input_element_type(i))' failed at core/src/op/concat.cpp:62:
While validating node 'v0::Concat Concat_1866 (stack_2_StatefulPartitionedCall/stack_2_1/Unsqueeze/Output_0/Data__const[0]:i32{1,1}, stack_2_StatefulPartitionedCall/stack_2_1/Unsqueeze503[0]:i64{1,1}, stack_2_StatefulPartitionedCall/stack_2_1/Unsqueeze505[0]:i64{1,1}) -> ()' with friendly_name 'Concat_1866':
Argument element types are inconsistent.
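For readers hitting the same error: the failure is a type mismatch inside a Concat node — one input is i32 while the others are i64. One workaround discussed later in this thread is hand-editing the IR .xml. A minimal sketch of that kind of edit follows (the IR fragment and names here are made up for illustration, not copied from the actual MoveNet IR):

```python
import xml.etree.ElementTree as ET

# Toy OpenVINO IR fragment with a Const output port declared as I32.
# (Layer and port names are hypothetical.)
ir = """<net><layers>
  <layer id="0" name="Unsqueeze/Output_0/Data__const" type="Const">
    <output><port id="0" precision="I32"><dim>1</dim><dim>1</dim></port></output>
  </layer>
</layers></net>"""

root = ET.fromstring(ir)

# Retag every I32 port as I64 so all inputs feeding the Concat would agree.
for port in root.iter("port"):
    if port.get("precision") == "I32":
        port.set("precision", "I64")

print(root.find(".//port").get("precision"))
```

Caveat: for a real Const layer the matching data in the .bin would also need widening from 4-byte to 8-byte integers, which is why regenerating the IR with a fixed conversion (as is eventually done in this thread) is the cleaner fix.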
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 88 (37 by maintainers)
I see. I understood everything.
Done.
Hi @gespona, @PINTO0309 has done the conversion of the models but there is still a small problem with the output values. PINTO is working on it. Just be patient.
Your trick is even smarter than what I initially thought. So we were lucky there was already a transpose layer with the same transpose order in the model. For thunder, I didn’t even need to change the layer ids (same as for lightning).
Ah it was not obvious 😃
Good night Katsuya !
Ah ah now I need to compare the 2 xml files to understand your modification. And I will try myself on thunder 😃
Sorry for the delay. It works ! You are a genius ! (I already know you are, it is just a confirmation)
I would choose Pattern 1. From my understanding, the depthai framework is able to automatically translate [C, H, W] (= output of the ImageManip node) to [1, C, H, W] (= input of the NeuralNet node).
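For illustration only (a numpy sketch, not DepthAI API code): the shape translation described above amounts to adding a leading batch axis.

```python
import numpy as np

# A planar [C, H, W] frame such as an ImageManip node would emit
# (the 192x192 size is an assumption for the lightning model).
chw = np.zeros((3, 192, 192), dtype=np.uint8)

# The [1, C, H, W] layout a neural-network input expects:
nchw = np.expand_dims(chw, axis=0)
print(nchw.shape)  # (1, 3, 192, 192)
```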
I forced UINT8 and now it’s much better. Starting to get the landmarks. Will share some results later. Thanks.
@gespona If you don’t have the error anymore, you are probably close to making it work. But I had a second thought during lunch 😃 If my understanding is correct, what type of input (UINT8 or FP16) the neural net is expecting depends on how the blob was compiled. @PINTO0309 has put the command used in a post above:
Because we don’t explicitly specify the type, I guess the compile tool uses the type used in the IR (FP16). We should force UINT8 by adding
-ip U8
in the command above. This will make our life easier (no need for setFp16()) and when we transmit images from the host to the OAK, we will send half as much data over USB. I cannot test it now because I have to leave for a few hours, but I will test later on. Anyway, are you on the Luxonis Discord? We should continue the discussion there so as not to pollute this thread. @PINTO0309 FYI, I have created a repo for the OpenVINO version (WIP): https://github.com/geaxgx/openvino_movenet Thank you.
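The “half as much data” claim is easy to check: a U8 tensor is one byte per element versus two for FP16. A quick sketch (the 1x3x192x192 input shape is an assumption for the lightning model):

```python
import numpy as np

# The same input tensor in both candidate element types.
u8 = np.zeros((1, 3, 192, 192), dtype=np.uint8)
fp16 = np.zeros((1, 3, 192, 192), dtype=np.float16)

# U8 halves the payload transferred from host to device.
print(u8.nbytes, fp16.nbytes)  # 110592 221184
```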
The image is padded with black stripes on top and bottom to make it square.
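A minimal sketch of that padding step, assuming a landscape camera frame (the helper name and frame size are mine, not from the repo):

```python
import numpy as np

def pad_to_square(img):
    """Pad an H x W x 3 frame with black rows on top and bottom so it
    becomes W x W. Assumes a landscape frame (W >= H)."""
    h, w = img.shape[:2]
    pad = w - h
    top, bottom = pad // 2, pad - pad // 2
    return np.pad(img, ((top, bottom), (0, 0), (0, 0)), constant_values=0)

frame = np.ones((720, 1280, 3), dtype=np.uint8)
square = pad_to_square(frame)
print(square.shape)  # (1280, 1280, 3)
```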
This image is not a pose that a normal human being can take, so it is no wonder that it is wrong. haha.
Yes. That’s right. There are advantages and disadvantages, but it is less work. As long as the file name is the same, Google Drive will not change the URL.
Don’t worry about it. OpenVINO has always had a lot of bugs. 😄 The re-commit is finished.
Thank you very much! And sorry again for using an old version of OpenVINO.
😿 OK, I’ll re-commit the model with only the align_corner modified.
Lightning with OpenVINO on CPU:
Thunder with OpenVINO on Myriad:
The skeleton is shifted.
Thunder with OpenVINO on CPU:
@gespona Earlier in this issue, I mentioned that I changed the type of the input from INT32 to FLOAT32 to quantize the model. Use Netron (https://netron.app/), a web site that lets you visualize the structure of your model, to see what structure the input expects. I am not very familiar with what the error message means, but the caveat is that I am converting the Float16 (FP16) model to Blob. I made a simple guess because the offset value in the error message is double the expected value: a float16 precision model is tagged with model_float32. I have customized the model to be optimized for Float32, Float16, and INT8, so please ignore the description on the model card. INT32 is an unwieldy and unfriendly type for many users of the model.
The conversion command I used is below. I specified FP16 for the conversion, but will the result be the same if I reconvert using FP32? If there is a problem, it is a problem beyond my control.
@geaxgx I downloaded the latest Google Drive file again this morning, which I thought I had uploaded last night, and tested it again in a separate working folder to make sure there were no mistakes.
test.png
test_onnx.py
ONNX and OpenVINO IR and Myriad Inference Engine Blobs have been updated. I leave the rest of the verification to @geaxgx and @gespona.
When converting from tflite to ONNX, the values seem to shift slightly. I’m going to bed. Good night.
Ha ha enjoy the cigarette ! Thanks again. I don’t know how we would do if you did not exist 😃
Sorry if I sounded impatient … Ofc we’re not in hurry. Thanks a lot for all this amazing work 😃
OpenVINO FP16
OpenVINO FP32
This is the result of inference with a tflite model where I just changed the input type to Float32 before converting to ONNX.
I need to do some research to see if the problem came up when I converted to ONNX.
I tried to reconvert lightning using a special trick.
You are the 100th issue contributor to be celebrated. 😅
I was aware of the problem you pointed out while I was converting the model while eating lunch. In fact, I have also identified a way to solve that part of the problem. However, after this problem is solved, another major problem arises that cannot be helped.
TensorFlow’s FloorDiv operation cannot be handled correctly by OpenVINO. This is a known issue that only I seem to see as a problem.
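For context on why FloorDiv is easy to get wrong in a converter: it rounds toward negative infinity (like Python’s // operator), which differs from C-style truncating division for negative operands. A quick illustration:

```python
import math

# Floor division (TensorFlow FloorDiv semantics) rounds toward -inf.
print(7 // 2)             # 3
print(-7 // 2)            # -4 (truncating division would give -3)

# Equivalent formulation via math.floor on true division:
print(math.floor(-7 / 2))  # -4
```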