edgetpu: Compiler fails to compile int8 quantized TFLite Model
Description
I want to convert an int8-quantized tflite model:

```python
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset_gen():
    # Get sample input data as a numpy array in a method of your choosing.
    for data in dataset:
        # Not sure why, but for a single input yield [data]; for multiple inputs yield data.
        yield [data]

converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.experimental_new_converter = True
tflite_model = converter.convert()
```
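For context on what the representative dataset does: the converter runs it through the float model to observe each tensor's value range, then derives an affine scale and zero point per tensor. Here is a rough numpy sketch of that mapping, for illustration only; the names (`calibration_data`, `affine_quant_params`) are mine, and TFLite's real calibration is more sophisticated than a plain min/max:

```python
import numpy as np

def affine_quant_params(x, qmin=-128, qmax=127):
    """Derive a per-tensor scale/zero-point from observed min/max (int8 sketch)."""
    rmin, rmax = min(float(x.min()), 0.0), max(float(x.max()), 0.0)  # range must include 0
    scale = (rmax - rmin) / (qmax - qmin)
    zero_point = int(round(qmin - rmin / scale))
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    q = np.round(x / scale) + zero_point
    return np.clip(q, qmin, qmax).astype(np.int8)

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Stand-in for the representative dataset (made-up values).
calibration_data = np.random.uniform(-1.0, 2.0, size=(100, 8)).astype(np.float32)
scale, zp = affine_quant_params(calibration_data)
q = quantize(calibration_data, scale, zp)
roundtrip = dequantize(q, scale, zp)
print("max abs round-trip error:", np.abs(roundtrip - calibration_data).max())
```

The round-trip error stays within half a quantization step, which is the best an affine int8 mapping can do over the observed range.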
I can load the quantized tflite model without issues and run inference with it. But when I try to compile it, the edgetpu compiler fails with many of these errors:

```
Edge TPU Compiler version 16.0.384591198
Searching for valid delegate with step 1
Try to compile segment with 136 ops
Started a compilation timeout timer of 180 seconds.
ERROR: :344 no_integer_overflow_from_quantization was not true.
ERROR: Node number 31 (CONV_2D) failed to prepare.
Compilation failed: Model failed in Tflite interpreter. Please ensure model can be loaded/run in Tflite interpreter.
Compilation child process completed within timeout period.
Compilation failed!
```
I assume this means that somewhere in the quantized model there are integers outside the int8 range. But that shouldn't happen with a quantized tflite model, right? model_quant.tflite.zip
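I'm not certain exactly which condition `no_integer_overflow_from_quantization` enforces, but quantized CONV_2D accumulates in int32 and then rescales to int8 by an effective multiplier `M = input_scale * filter_scale / output_scale`, stored as a Q31 fixed-point value plus a power-of-two shift; a pathological combination of calibrated scales can make that representation overflow. The sketch below mimics that style of decomposition (it is my own illustration, not TFLite's actual code, and the helper name `quantize_multiplier` is mine):

```python
import math

def quantize_multiplier(real_multiplier):
    """Decompose a positive real multiplier into a Q31 fixed-point value and a
    power-of-two shift, in the style of fixed-point requantization (sketch)."""
    if real_multiplier == 0.0:
        return 0, 0
    q, shift = math.frexp(real_multiplier)   # q in [0.5, 1), m = q * 2**shift
    q_fixed = int(round(q * (1 << 31)))      # Q31 representation
    if q_fixed == (1 << 31):                 # rounding pushed q up to 1.0
        q_fixed //= 2
        shift += 1
    return q_fixed, shift

# Effective conv rescale from int32 accumulator to int8 output (made-up scales).
input_scale, filter_scale, output_scale = 0.05, 0.02, 0.1
m = input_scale * filter_scale / output_scale
q_fixed, shift = quantize_multiplier(m)
print("multiplier:", m, "-> Q31:", q_fixed, "shift:", shift)
```

If the scales your calibration produced make this decomposition fall outside what the kernel accepts, the prepare step fails with exactly the kind of error above.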
And thanks for adding more meaningful error messages with the new compiler update!
Issue Type
Bug, Support
Operating System
Linux
Coral Device
No response
Other Devices
No response
Programming Language
Python 3.7
Relevant Log Output
No response
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 1
- Comments: 17 (7 by maintainers)
I have a similar problem with compilation. Not sure if it should be part of this thread or a separate one, but it seems to be an issue with weight conversion to int8.
My model compiles when I create it, as long as I don't load my TensorFlow weights:
`edgetpu_compiler -a -s -t 1000 {save_path}` returns:
When pre-trained weights are not loaded, the log file shows the following layers:
Software versions: TensorFlow 2.8.0, Python 3.9.12
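One possible explanation for compiling with fresh weights but not with loaded ones: per-tensor int8 quantization is sensitive to outliers. A single large pretrained weight stretches the tensor's range, so the scale becomes coarse and the remaining weights collapse onto a handful of quantization levels. This is only a hypothesis; the sketch below (my own, symmetric per-tensor scheme) just illustrates the effect:

```python
import numpy as np

def int8_levels_used(w):
    """Symmetric per-tensor int8 quantization; count distinct levels actually used."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return len(np.unique(q))

rng = np.random.default_rng(0)
fresh = rng.normal(0.0, 0.05, size=10_000)   # typical small random init
loaded = fresh.copy()
loaded[0] = 30.0                             # one outlier pretrained weight
print("levels (fresh):", int8_levels_used(fresh))
print("levels (with outlier):", int8_levels_used(loaded))
```

With the outlier, nearly all weights round to the same few levels around zero, which is the kind of degenerate quantization that can trip up downstream tooling. Inspecting the min/max of your loaded weight tensors would confirm or rule this out.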
Hi @j-o-d-o, can you please try commenting out this line:
`converter.experimental_new_converter = True`
If the problem still persists, can you please share your saved model?