tensorflow: TFLite toco failed to convert quantized model (mobilenet_v1_1.0_224) to tflite format
Describe the Problem
First, I downloaded the mobilenet_v1_1.0_224 model from http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224.tgz.
Then I used the command below to produce a quantized model (mobilenet_v1_1.0_224_frozen_quantized_graph.pb) successfully:
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=/tmp/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_frozen.pb \
--inputs="input" \
--outputs="MobilenetV1/Predictions/Reshape_1" \
--out_graph=/tmp/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_frozen_quantized_graph.pb \
--transforms='add_default_attributes strip_unused_nodes(type=float, shape="1,224,224,3")
remove_nodes(op=Identity, op=CheckNumerics) fold_constants(ignore_errors=true)
fold_batch_norms fold_old_batch_norms quantize_weights quantize_nodes
strip_unused_nodes sort_by_execution_order'
However, when I used the TFLite toco command to convert the .pb to .lite format, the error below was output.
TFLite toco command:
bazel run --config=opt \
//tensorflow/contrib/lite/toco:toco -- \
--input_file=/tmp/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_frozen_quantized_graph.pb \
--output_file=/tmp/mobilenet_v1_1.0_224/mobilenet_v1_1.0_224_frozen_quantized_graph.lite \
--input_format=TENSORFLOW_GRAPHDEF \
--output_format=TFLITE \
--input_shapes=1,224,224,3 \
--mean_values=128 \
--std_values=128 \
--input_arrays="input" \
--output_arrays="MobilenetV1/Predictions/Reshape_1" \
--inference_type=QUANTIZED_UINT8 \
--default_ranges_min=0 \
--default_ranges_max=6
ERROR OUTPUT:
2018-05-21 17:32:50.603908: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42] Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
ERROR LOG:
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizedConv2D
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: RequantizationRange
……
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizedReshape
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizeV2
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: QuantizedReshape
: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1326] Converting unsupported operation: Dequantize
2018-05-21 17:32:50.581333: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 352 operators, 853 arrays (0 quantized)
2018-05-21 17:32:50.601042: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 352 operators, 853 arrays (0 quantized)
2018-05-21 17:32:50.603908: F tensorflow/contrib/lite/toco/graph_transformations/resolve_batch_normalization.cc:42] Check failed: IsConstantParameterArray(*model, bn_op->inputs[1]) && IsConstantParameterArray(*model, bn_op->inputs[2]) && IsConstantParameterArray(*model, bn_op->inputs[3]) Batch normalization resolution requires that mean, multiplier and offset arrays be constant.
About this issue
- State: closed
- Created 6 years ago
- Reactions: 5
- Comments: 17 (6 by maintainers)
I was able to convert the FaceNet .pb to a .tflite model; the following are the instructions to do so.
We will quantize the pre-trained FaceNet model with a 512 embedding size. This model is about 95 MB in size before quantization.
Create a file inference_graph.py with the following code:
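(The original snippet is missing from the comment. Below is a minimal sketch of what such a script typically looks like for the davidsandberg/facenet codebase; the module path, the 160x160 input size, and the tensor names are assumptions, not the commenter's original code.)

# Hypothetical inference_graph.py sketch: rebuild FaceNet in inference mode
# (phase_train=False) so batch norm folds cleanly when the graph is frozen.
import tensorflow as tf
from models import inception_resnet_v1  # from the facenet repo's src/ tree

def main():
    with tf.Graph().as_default():
        # Placeholder matching FaceNet's expected 160x160 RGB input.
        images = tf.placeholder(tf.float32, [None, 160, 160, 3], name='input')
        # Build the network in inference mode with a 512-d embedding.
        prelogits, _ = inception_resnet_v1.inference(
            images, keep_probability=1.0, phase_train=False,
            bottleneck_layer_size=512)
        # L2-normalize to produce the final embeddings node for freezing.
        tf.nn.l2_normalize(prelogits, 1, 1e-10, name='embeddings')
        with tf.Session() as sess:
            saver = tf.train.Saver()
            # Restore the pre-trained checkpoint unzipped to model_pre_trained/.
            saver.restore(sess, tf.train.latest_checkpoint('model_pre_trained/'))
            # Write an inference-only checkpoint for freeze_graph.py to consume.
            saver.save(sess, 'model_inference/model')

if __name__ == '__main__':
    main()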
Run this file on the pre-trained model; it will generate a model for inference. Download the pre-trained model and unzip it to the model_pre_trained/ directory. Make sure you have Python ≥ 3.4.
FaceNet provides a freeze_graph.py file, which we will use to freeze the inference model. Once the frozen model is generated, it is time to convert it to .tflite.
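(The conversion snippet itself did not survive; a minimal sketch using the TF 1.x tf.lite Python API, assuming the frozen graph keeps FaceNet's usual input and embeddings tensor names:)

# Hypothetical sketch: frozen FaceNet graph -> .tflite with weight quantization.
# File paths and tensor names are assumptions. TF 1.x API; newer versions
# replace post_training_quantize with converter.optimizations.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='facenet_frozen.pb',
    input_arrays=['input'],
    output_arrays=['embeddings'],
    input_shapes={'input': [1, 160, 160, 3]})
converter.post_training_quantize = True  # store weights as 8-bit
tflite_model = converter.convert()
with open('facenet_quantized.tflite', 'wb') as f:
    f.write(tflite_model)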
Let us check the quantized model size:
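(The size figures are missing from the comment; a quick check, with placeholder file names:)

# Compare on-disk sizes before and after quantization (paths are placeholders).
import os
for path in ('facenet_frozen.pb', 'facenet_quantized.tflite'):
    print(path, round(os.path.getsize(path) / 1e6, 1), 'MB')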
Interpreter code:
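(The commenter's snippet is gone; a minimal sketch of the standard tf.lite Interpreter loop, reusing the placeholder file name from above:)

# Run the converted model with the TFLite Python interpreter.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='facenet_quantized.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image; a real run would use a pre-whitened face crop.
dummy = np.random.random_sample(input_details[0]['shape']).astype(np.float32)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
embeddings = interpreter.get_tensor(output_details[0]['index'])
print(embeddings.shape)  # expected (1, 512) for a 512-d embedding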
Interpreter output:
Hope this helps!
TFLite conversion can be done using a SavedModel; I have given the link to the model below. This is as per the documentation here.
I have a model here, which is a SavedModel exported using the following code:
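(The export code is missing; a minimal TF 1.x sketch with tf.saved_model.simple_save, using placeholder tensors rather than the commenter's actual model:)

# Hypothetical sketch: export a session's graph as a SavedModel (TF 1.x).
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    x = tf.placeholder(tf.float32, [None, 224, 224, 3], name='input')
    y = tf.layers.dense(tf.layers.flatten(x), 10, name='output')
    sess.run(tf.global_variables_initializer())
    tf.saved_model.simple_save(sess, 'export_dir',
                               inputs={'input': x}, outputs={'output': y})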
But while converting I am getting the same error too, even though in this case I am not freezing the model myself, nor quantizing it. Below is the code to convert the SavedModel to TFLite:
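(The snippet is gone from the comment; the documented TF 1.x route is roughly, with the export directory name carried over from the sketch above:)

# Convert a SavedModel directory straight to .tflite (TF 1.x tf.lite API).
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('export_dir')
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)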
Logs:
You might want to read the TensorFlow guide on quantization and learn to use the fake-quantization technique instead of using transform_graph to do direct quantization. The log above hints at why: the quantize_nodes transform emits QuantizeV2 / Dequantize / QuantizedConv2D ops, which toco does not support, whereas the QUANTIZED_UINT8 path expects a graph annotated with fake-quant min/max ranges.
It seems that link is gone. Adding another link here.
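(For reference, a minimal sketch of the fake-quantization, i.e. quantization-aware training, rewrite with tf.contrib.quantize in TF 1.x; build_model() is a placeholder for whatever builds the network, not code from this thread:)

# Rewrite the graphs with fake-quant ops so min/max ranges are recorded.
import tensorflow as tf

train_graph = tf.Graph()
with train_graph.as_default():
    logits = build_model()  # placeholder for the real model-building code
    # Insert fake-quant ops into the training graph; ranges are learned.
    tf.contrib.quantize.create_training_graph(input_graph=train_graph,
                                              quant_delay=2000000)
    # ... train as usual ...

eval_graph = tf.Graph()
with eval_graph.as_default():
    logits = build_model()
    # Rewrite the eval graph; freeze this one and feed it to toco with
    # --inference_type=QUANTIZED_UINT8 (no default_ranges workaround needed).
    tf.contrib.quantize.create_eval_graph(input_graph=eval_graph)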