tensorflow: [RNN] LSTM and Bidirectional layers can't be converted to a TFLite model
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): binary (pip)
- TensorFlow version (or github SHA if from source): 2.2.0
Command used to run the converter or code if you're using the Python API: if absolutely needed I will upload some code to a Colab; please request it.
import numpy as np
import tensorflow as tf

# representative dataset for post-training quantization calibration
input_ds = tf.data.Dataset.from_tensor_slices(
    tf.convert_to_tensor(X_train, dtype=np.float32))

def representative_data_gen():
    for input_value in input_ds.take(1000).batch(1):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model(m["k_model"])
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.representative_dataset = representative_data_gen

print("quantization model conversion started")
%time m["k_model_tflite"] = converter.convert()  # %time is an IPython magic
print("quantization model conversion completed")

tflite_model_file = 'current_converted_model.tflite'
with open(tflite_model_file, 'wb') as f:
    f.write(m["k_model_tflite"])
print("quantization model saved on file")
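For reference, the representative-dataset callable must yield one calibration sample per step, each carrying a leading batch dimension of 1. The batching behaviour of `input_ds.take(1000).batch(1)` above can be sketched in plain NumPy (a minimal stand-in for tf.data; the shape (250, 1) matches the model's input, and `make_representative_gen` is a hypothetical helper, not part of the TFLite API):

```python
import numpy as np

def make_representative_gen(x_train, num_samples=1000):
    """Yield one float32 sample at a time with a leading batch dim of 1,
    mirroring input_ds.take(num_samples).batch(1) from the snippet above."""
    def gen():
        for sample in x_train[:num_samples]:
            yield [np.expand_dims(sample.astype(np.float32), axis=0)]
    return gen

# usage: 10 fake windows of shape (250, 1), calibrate on 5 of them
X_train = np.random.rand(10, 250, 1)
gen = make_representative_gen(X_train, num_samples=5)
batches = list(gen())
print(len(batches), batches[0][0].shape)  # 5 (1, 250, 1)
```

Each yielded list holds one array per model input; here there is a single input, so the list has one element.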
output:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
RuntimeError: Only models with a single subgraph are supported, model had 9 subgraphs
The above exception was the direct cause of the following exception:
SystemError Traceback (most recent call last)
c:\users\eric\virtualenvs\venvgpu\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py in __init__(self, model_content)
50 self._calibrator = (_calibration_wrapper.CalibrationWrapper
---> 51 .CreateWrapperCPPFromBuffer(model_content))
52 except Exception as e:
SystemError: <built-in function CalibrationWrapper_CreateWrapperCPPFromBuffer> returned a result with an error set
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<timed exec> in <module>
c:\users\eric\virtualenvs\venvgpu\lib\site-packages\tensorflow\lite\python\lite.py in convert(self)
520 if self._is_calibration_quantize():
521 result = self._calibrate_quantize_model(
--> 522 result, constants.FLOAT, constants.FLOAT)
523
524 return result
c:\users\eric\virtualenvs\venvgpu\lib\site-packages\tensorflow\lite\python\lite.py in _calibrate_quantize_model(self, result, inference_input_type, inference_output_type)
259 inference_output_type):
260 allow_float = not self._is_int8_target_required()
--> 261 calibrate_quantize = _calibrator.Calibrator(result)
262 if self._experimental_calibrate_only:
263 return calibrate_quantize.calibrate(self.representative_dataset.input_gen)
c:\users\eric\virtualenvs\venvgpu\lib\site-packages\tensorflow\lite\python\optimize\calibrator.py in __init__(self, model_content)
51 .CreateWrapperCPPFromBuffer(model_content))
52 except Exception as e:
---> 53 raise ValueError("Failed to parse the model: %s." % e)
54 if not self._calibrator:
55 raise ValueError("Failed to parse the model.")
ValueError: Failed to parse the model: <built-in function CalibrationWrapper_CreateWrapperCPPFromBuffer> returned a result with an error set.
The model is quite large and not included; please indicate if it is needed.
Failure details: when the Bidirectional and LSTM layers are removed, the conversion works without error. When the Bidirectional and LSTM layers are present, the above error is raised.
According to #36219, it seems this type of error can be raised when using TFLite for Microcontrollers, which is not the intention here. Is there something to do to ensure that the microcontroller version is not the one being used? For context, the target is the Coral Dev Board (Edge TPU on a Linux platform).
Any other info / logs: summary of the model I tried to convert.
Layer (type) Output Shape Param # Connected to
==================================================================================================
inputs (InputLayer) [(None, 250, 1)] 0
__________________________________________________________________________________________________
bidirectional (Bidirectional) (None, 250, 100) 20800 inputs[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 25000) 0 bidirectional[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 250) 0 inputs[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 25250) 0 flatten[0][0]
flatten_1[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 2048) 51714048 concatenate[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 1024) 2098176 dense[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 512) 524800 dense_1[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 2048) 51714048 concatenate[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 256) 131328 dense_2[0][0]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 1024) 2098176 dense_7[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 128) 32896 dense_3[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 512) 524800 dense_8[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 50) 6450 dense_4[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 256) 131328 dense_9[0][0]
__________________________________________________________________________________________________
reshape (Reshape) (None, 50, 1) 0 dense_5[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 128) 32896 dense_10[0][0]
__________________________________________________________________________________________________
time_distributed (TimeDistribut (None, 50, 1) 2 reshape[0][0]
__________________________________________________________________________________________________
dense_12 (Dense) (None, 50) 6450 dense_11[0][0]
__________________________________________________________________________________________________
om (Reshape) (None, 50, 1) 0 time_distributed[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 50, 1) 0 dense_12[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 50, 2) 0 om[0][0]
reshape_1[0][0]
__________________________________________________________________________________________________
flatten_2 (Flatten) (None, 100) 0 concatenate_1[0][0]
__________________________________________________________________________________________________
dense_13 (Dense) (None, 100) 10100 flatten_2[0][0]
__________________________________________________________________________________________________
dense_14 (Dense) (None, 100) 10100 dense_13[0][0]
__________________________________________________________________________________________________
dense_15 (Dense) (None, 50) 5050 dense_14[0][0]
__________________________________________________________________________________________________
dense_16 (Dense) (None, 50) 2550 dense_15[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 50, 1) 0 dense_16[0][0]
__________________________________________________________________________________________________
time_distributed_1 (TimeDistrib (None, 50, 1) 2 reshape_2[0][0]
__________________________________________________________________________________________________
of (Reshape) (None, 50, 1) 0 time_distributed_1[0][0]
==================================================================================================
Total params: 109,064,000
Trainable params: 109,064,000
Non-trainable params: 0
About this issue
- Original URL
- State: closed
- Created 4 years ago
- Comments: 35 (16 by maintainers)
@ericqu, that message was wrongly triggered; please ignore it.