tensorflow: ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr. when converting and quantizing a TF model
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on a mobile device:
- TensorFlow installed from (source or binary): source
- TensorFlow version (use command below):
2.4.1 (checked with: import tensorflow.compat.v1 as tf; tf.__version__)
- Python version: Python 3.6.9
- Bazel version (if compiling from source):
- GCC/Compiler version (if compiling from source):
- CUDA/cuDNN version:
- GPU model and memory:
- Exact command to reproduce:
Describe the problem
I am trying to convert a feature extraction model used in Deep SORT tracking to an int8-quantized TFLite model. I am following the post-training quantization guide, but the conversion fails with an error. No matter what the representative_data_gen() function contains, the error is the same as in the traceback below:
Source code / logs
Here is the code that converts the frozen graph to TFLite:
import tensorflow.compat.v1 as tf
import numpy as np

# MNIST images serve only as placeholder calibration data here; the error
# occurs regardless of what the representative dataset contains.
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = train_images.astype(np.float32) / 255.0

def representative_data_gen():
    for input_value in tf.data.Dataset.from_tensor_slices(train_images).batch(1).take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "mars-small128.pb",
    input_arrays=["Cast"],
    output_arrays=["features"],
    input_shapes={"Cast": [1, 128, 64, 3]},
)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("converted_model.tflite", "wb") as f:
    f.write(tflite_model)
Traceback (most recent call last):
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 58, in __init__
    _calibration_wrapper.CalibrationWrapper(model_content))
TypeError: pybind11::init(): factory function returned nullptr
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "detect.py", line 219, in <module>
    main()
  File "detect.py", line 109, in main
    encoder = generate_detections.create_box_encoder("mars-small128.pb", batch_size=32)
  File "/home/dev/projects/coral/examples-camera/opencv/generate_detections.py", line 191, in create_box_encoder
    image_encoder = ImageEncoder(model_filename, input_name, output_name)
  File "/home/dev/projects/coral/examples-camera/opencv/generate_detections.py", line 150, in __init__
    tflite_model = converter.convert()
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 1947, in convert
    return super(TFLiteConverter, self).convert()
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 1313, in convert
    result = self._calibrate_quantize_model(result, **flags)
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 449, in _calibrate_quantize_model
    calibrate_quantize = _calibrator.Calibrator(result)
  File "/home/dev/.local/lib/python3.6/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 60, in __init__
    raise ValueError("Failed to parse the model: %s." % e)
ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr.
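For reference, one way to sanity-check the tensor names passed to input_arrays/output_arrays before converting is to walk the frozen GraphDef directly. This is a minimal sketch using the same TF 1.x compat API as above; the node filter is illustrative, not taken from the original report:

import tensorflow.compat.v1 as tf

# Load the frozen graph and list candidate input/output nodes.
graph_def = tf.GraphDef()
with tf.gfile.GFile("mars-small128.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder" or node.name in ("Cast", "features"):
        print(node.name, node.op)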
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Comments: 22 (11 by maintainers)
@jianlijianli, @JiashuGuo, I recently tried TFLite's quantization methods. I was able to follow the post-training float16 quantization and post-training dynamic range quantization methods, convert my model, and run inference. However, when I tried post-training integer quantization, it gave: ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr.
How do I confirm that my model has more than one subgraph? Also, is it normal that only post-training integer quantization fails with this error, even if my model has more than one subgraph? Thanks.
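Of the three modes, only full integer quantization runs the C++ calibrator, which re-parses the converted flatbuffer, so it is plausible that only that path surfaces a parse failure. Below is a minimal sketch of one way to count subgraphs, assuming the third-party tflite flatbuffer bindings (pip install tflite) and a .tflite file produced by one of the conversions that did succeed; this snippet is not from the original thread:

import tflite  # flatbuffer bindings for the TFLite schema (pip install tflite)

with open("converted_model.tflite", "rb") as f:
    buf = f.read()

# Parse the flatbuffer and report how many subgraphs the model contains.
model = tflite.Model.GetRootAsModel(buf, 0)
print("number of subgraphs:", model.SubgraphsLength())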
Building the logging interpreter failed with the following error. A fix would be desirable to correctly pass the error message through to the Python world.
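Until such a fix lands, one way to get a more descriptive error may be to convert without quantization first and try loading the result directly. This is a hedged debugging sketch, not a confirmed fix from the thread:

import tensorflow.compat.v1 as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "mars-small128.pb",
    input_arrays=["Cast"],
    output_arrays=["features"],
    input_shapes={"Cast": [1, 128, 64, 3]},
)
float_model = converter.convert()  # no optimizations, so the calibrator never runs

# If the flatbuffer itself is malformed, the interpreter usually reports a
# more descriptive error than the calibrator's bare nullptr message.
interpreter = tf.lite.Interpreter(model_content=float_model)
interpreter.allocate_tensors()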