PINTO_model_zoo: Model conversion error
After running download.sh, I am trying to convert face_detection_front.pb to a quantized tflite model. I have tried all of the scripts inside the 30_BlazeFace/01_float32 directory, but they all fail with the following error:
ValueError: This converter can only convert a single ConcreteFunction. Converting multiple functions is under development.
I am using TensorFlow 2.2.0 on macOS. I also tried a Linux machine with 2.1.0 and 2.2.0.
NOTE: I am trying to rebuild the quantized model to run on a microcontroller. The quantized BlazeFace model provided via download.sh in your repo fails on the microcontroller with: "Didn't find op for builtin opcode 'CONV_2D' version '3'" or "Didn't find op for builtin opcode 'QUANTIZE' version '2'".
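For reference, a minimal sketch of one way to sidestep the multiple-ConcreteFunction code path: convert the frozen graph with the TF1-compatible converter (which works on the GraphDef directly rather than on ConcreteFunctions) and apply full-integer post-training quantization. The input/output tensor names, the 1x128x128x3 input shape, the random calibration data, and the output filename are all assumptions for illustration; check the real node names in face_detection_front.pb (e.g. with Netron) before using this.

```python
import numpy as np
import tensorflow as tf

# Tensor names and input shape below are assumptions; verify them against
# face_detection_front.pb (e.g. with Netron) before running.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='face_detection_front.pb',
    input_arrays=['input'],
    output_arrays=['regressors', 'classificators'],
    input_shapes={'input': [1, 128, 128, 3]},
)

def representative_dataset():
    # Placeholder calibration data; real face images should be fed here instead.
    for _ in range(100):
        yield [np.random.rand(1, 128, 128, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

tflite_model = converter.convert()
with open('face_detection_front_integer_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```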
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 33 (16 by maintainers)
As per the documentation, pruning may not reduce the RAM requirement of the model; it only reduces the model's on-disk size (through compression). It would be interesting to see whether the size can be reduced further after you change your scripts to TF v2. Thanks for your time!
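To make that point concrete, a rough sketch of magnitude pruning with tensorflow_model_optimization, using a toy Keras model (not BlazeFace): the weight tensors keep their original shapes, so inference RAM is unchanged, and only the compressed file shrinks because most values become zero.

```python
import gzip
import tempfile
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy model purely for illustration; not the BlazeFace architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation='relu', input_shape=(128, 128, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2),
])

# Wrap the layers so that 80% of the weights are driven to zero during training.
pruned = tfmot.sparsity.keras.prune_low_magnitude(
    model,
    pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(0.8, begin_step=0),
)
# ... compile and train `pruned` with the UpdatePruningStep callback here ...

# Remove the pruning wrappers; the weight tensors keep their original shapes,
# so RAM usage at inference time does not change.
final = tfmot.sparsity.keras.strip_pruning(pruned)

# Only the compressed file gets smaller, because most weights are now zeros.
_, path = tempfile.mkstemp('.h5')
final.save(path, include_optimizer=False)
with open(path, 'rb') as f:
    print('gzipped size (bytes):', len(gzip.compress(f.read())))
```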
Also, I am using a TF Lite Micro library generated from the TF 2.2.0 branch. Maybe it is a TF Lite Micro issue.
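One way to narrow down whether the failure is on the conversion side or the TF Lite Micro side is to load the same quantized model with the regular TF Lite interpreter in Python; if it allocates and runs there, the opcode-version errors point at the micro library build. A minimal sketch, with the model filename as a placeholder:

```python
import numpy as np
import tensorflow as tf

# Placeholder filename; use the .tflite file produced by your conversion script.
interpreter = tf.lite.Interpreter(model_path='face_detection_front_integer_quant.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
print(output_details)

# Run one dummy inference to confirm the full TF Lite runtime accepts the ops.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']).shape)
```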