tensorflow: [RNN]Failed to do full integer quantization and got error: Failed to parse the model: pybind11::init(): factory function returned nullptr.
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.15.3
- TensorFlow installed from (source or binary): binary
- TensorFlow version (or github SHA if from source): tf-nightly
Command used to run the converter, or code if you're using the Python API (if possible, please share a link to a Colab/Jupyter/any notebook):
import tensorflow as tf
import numpy as np

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.LSTM(256, input_shape=(60, 388), activation='tanh',
                               return_sequences=True))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Dense(388, activation='softmax'))

def representative_data_gen():
    for input_value in test_ds.take(100):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.representative_dataset = representative_data_gen
tflite_model_quant = converter.convert()
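For full integer quantization, each item yielded by the representative dataset must be a list of float32 arrays matching the model's input shape. The reporter's test_ds is not included in the snippet, so as a minimal sketch (with random data standing in for it) a generator compatible with the model above could look like:

```python
import numpy as np

# Hypothetical stand-in for the reporter's `test_ds`: 100 random
# samples shaped like the model's input, with a batch dimension of 1.
def representative_data_gen():
    for _ in range(100):
        # Each yielded list holds one input tensor of shape (1, 60, 388),
        # in float32 as the TFLite calibrator expects.
        yield [np.random.rand(1, 60, 388).astype(np.float32)]
```

This only illustrates the expected shape and dtype; for meaningful calibration ranges, the generator should draw from real training or test data.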
The output from the converter invocation
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 51, in __init__
_calibration_wrapper.CalibrationWrapper(model_content))
TypeError: pybind11::init(): factory function returned nullptr
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/lisichao/PycharmProjects/KerasSavedModel/mnistExample.py", line 155, in <module>
tflite_model_quant = converter.convert()
File "/usr/local/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 611, in convert
constants.FLOAT, True)
File "/usr/local/lib/python3.7/site-packages/tensorflow/lite/python/lite.py", line 316, in _calibrate_quantize_model
calibrate_quantize = _calibrator.Calibrator(result)
File "/usr/local/lib/python3.7/site-packages/tensorflow/lite/python/optimize/calibrator.py", line 53, in __init__
raise ValueError("Failed to parse the model: %s." % e)
ValueError: Failed to parse the model: pybind11::init(): factory function returned nullptr.
Failure details
When I convert the TensorFlow model without optimization, it works. When I do post-training integer quantization without a representative dataset, it works. When I provide a representative dataset, conversion fails with the error above. The error can be reproduced from the following file: ReproduceTheError.zip
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Reactions: 5
- Comments: 28 (4 by maintainers)
I had the same error. I was able to get around it by setting unroll=True for the LSTM. It is strange that, according to this talk, RNN/LSTM is natively supported in TF 2.x without any change to the model.
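A sketch of that workaround, assuming the same architecture as in the report (note this is a commenter's workaround, not a confirmed fix: unroll=True expands the LSTM into 60 static per-timestep ops instead of a dynamic loop, which increases graph size):

```python
import numpy as np
import tensorflow as tf

# Same model as in the report, but with unroll=True on the LSTM so the
# converter sees a static, fully unrolled graph rather than a while-loop.
model = tf.keras.models.Sequential([
    tf.keras.layers.LSTM(256, input_shape=(60, 388), activation='tanh',
                         return_sequences=True, unroll=True),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(388, activation='softmax'),
])

# Sanity check: the unrolled model still maps (1, 60, 388) -> (1, 60, 388).
out = model(np.random.rand(1, 60, 388).astype(np.float32))
```

After this change, the same converter setup with a representative dataset reportedly no longer triggers the calibrator crash.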
Same error with 2.4.0-dev20201021 on CenterNet + MobileNet v2
I’m also getting this error with an object_detection model. Since it is a community-driven project, I don’t know whether this is a TF bug or something in the model that TF doesn’t support yet.