tensorflow: TensorFlow Lite fails to convert LSTM after upgrading from 2.6.2 to 2.7.0.

System information

  • Have I written custom code: yes
  • OS Platform and Distribution: Linux Ubuntu 16.04 (TensorFlow official docker images)
  • TensorFlow installed from binary
  • TensorFlow version: 2.7.0
  • Python version: 3.8.10
  • Exact command to reproduce:

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM

tf.version.VERSION
model_in = Input(shape=(800,))
model = Model(model_in, LSTM(8)(Embedding(300, 8)(model_in)))
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
```

Describe the problem

TensorFlow fails to convert the above model to TensorFlow Lite, unlike every release up to and including 2.6.2, where the conversion succeeded.

Source code / logs

The example below shows the conversion working on version 2.6.0 and failing on 2.7.0. (It also worked fine on 2.6.2.)

```
docker run -it --rm --name tf36 tensorflow/tensorflow:2.6.0 python
Unable to find image 'tensorflow/tensorflow:2.6.0' locally
2.6.0: Pulling from tensorflow/tensorflow
feac53061382: Already exists
beba0652e867: Already exists
c5060c8118ce: Already exists
bfc0178fb9ad: Already exists
18fb3f957dc0: Already exists
cd5d06d0987e: Already exists
7ed4f7cde30b: Already exists
6bda0595411c: Already exists
Digest: sha256:773d5ce09e4ce003db02740c6a372a8a9f43be2bac23544d8f452bfec5347c53
Status: Downloaded newer image for tensorflow/tensorflow:2.6.0
Python 3.6.9 (default, Jan 26 2021, 15:33:00)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> from tensorflow.keras.models import Model
>>> from tensorflow.keras.layers import Input, Embedding, LSTM
>>> tf.version.VERSION
'2.6.0'
>>> model_in = Input(shape=(800,))
>>> model = Model(model_in, LSTM(8)(Embedding(300, 8,)(model_in)))
2021-11-17 18:10:51.093603: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>> converter = tf.lite.TFLiteConverter.from_keras_model(model); tflite_model = converter.convert()
WARNING:tensorflow:Compiled the loaded model, but the compiled metrics have yet to be built. model.compile_metrics will be empty until you train or evaluate the model.
2021-11-17 18:10:59.494222: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Found untraced functions such as lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_layer_call_and_return_conditional_losses while saving (showing 5 of 5). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /tmp/tmpg9eczqrl/assets
INFO:tensorflow:Assets written to: /tmp/tmpg9eczqrl/assets
2021-11-17 18:11:04.174113: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:351] Ignored output_format.
2021-11-17 18:11:04.174165: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:354] Ignored drop_control_dependency.
2021-11-17 18:11:04.175758: I tensorflow/cc/saved_model/reader.cc:38] Reading SavedModel from: /tmp/tmpg9eczqrl
2021-11-17 18:11:04.203946: I tensorflow/cc/saved_model/reader.cc:90] Reading meta graph with tags { serve }
2021-11-17 18:11:04.203996: I tensorflow/cc/saved_model/reader.cc:132] Reading SavedModel debug info (if present) from: /tmp/tmpg9eczqrl
2021-11-17 18:11:04.287237: I tensorflow/cc/saved_model/loader.cc:211] Restoring SavedModel bundle.
2021-11-17 18:11:04.364667: I tensorflow/cc/saved_model/loader.cc:195] Running initialization op on SavedModel bundle at path: /tmp/tmpg9eczqrl
2021-11-17 18:11:04.412505: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 236769 microseconds.
2021-11-17 18:11:04.589284: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable.
```

```
docker run -it --rm --name tf37 tensorflow/tensorflow:2.7.0 python
Python 3.8.10 (default, Sep 28 2021, 16:10:42)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> from tensorflow.keras.models import Model
>>> from tensorflow.keras.layers import Input, Embedding, LSTM
>>> tf.version.VERSION
'2.7.0'
>>> model_in = Input(shape=(800,))
>>> model = Model(model_in, LSTM(8)(Embedding(300, 8,)(model_in)))
2021-11-17 18:14:09.592797: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
>>> converter = tf.lite.TFLiteConverter.from_keras_model(model); tflite_model = converter.convert()
2021-11-17 18:14:18.728660: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
WARNING:absl:Found untraced functions such as lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_layer_call_fn, lstm_cell_layer_call_and_return_conditional_losses, lstm_cell_layer_call_and_return_conditional_losses while saving (showing 5 of 5). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /tmp/tmpixyhgbqk/assets
INFO:tensorflow:Assets written to: /tmp/tmpixyhgbqk/assets
2021-11-17 18:14:24.806528: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:363] Ignored output_format.
2021-11-17 18:14:24.806615: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:366] Ignored drop_control_dependency.
2021-11-17 18:14:24.808294: I tensorflow/cc/saved_model/reader.cc:43] Reading SavedModel from: /tmp/tmpixyhgbqk
2021-11-17 18:14:24.823489: I tensorflow/cc/saved_model/reader.cc:107] Reading meta graph with tags { serve }
2021-11-17 18:14:24.823535: I tensorflow/cc/saved_model/reader.cc:148] Reading SavedModel debug info (if present) from: /tmp/tmpixyhgbqk
2021-11-17 18:14:24.898508: I tensorflow/cc/saved_model/loader.cc:210] Restoring SavedModel bundle.
2021-11-17 18:14:25.002491: I tensorflow/cc/saved_model/loader.cc:194] Running initialization op on SavedModel bundle at path: /tmp/tmpixyhgbqk
2021-11-17 18:14:25.091938: I tensorflow/cc/saved_model/loader.cc:283] SavedModel load for tags { serve }; Status: success: OK. Took 283650 microseconds.
2021-11-17 18:14:25.358785: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:237] disabling MLIR crash reproducer, set env var MLIR_CRASH_REPRODUCER_DIRECTORY to enable.
loc(callsite("TensorArrayV2_1@__inference_standard_lstm_654" ... at callsite("model/lstm/PartitionedCall@__inference__wrapped_model_927" ... at callsite("StatefulPartitionedCall@__inference_signature_wrapper_2825" ... at "StatefulPartitionedCall")))): error: 'tf.TensorListReserve' op requires element_shape to be static during TF Lite transformation pass
loc(... same call-site trace ...): error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
error: Lowering tensor list ops is failed. Please consider using Select TF ops and disabling _experimental_lower_tensor_list_ops flag in the TFLite converter object. For example, converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
 converter._experimental_lower_tensor_list_ops = False
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 775, in wrapper
    return self._convert_and_export_metrics(convert_func, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 761, in _convert_and_export_metrics
    result = convert_func(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1170, in convert
    saved_model_convert_result = self._convert_as_saved_model()
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 1152, in _convert_as_saved_model
    return super(TFLiteKerasModelConverterV2,
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/lite.py", line 945, in convert
    result = _toco_convert_impl(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 223, in wrapper
    raise converter_error from None  # Re-throws the exception.
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert_phase.py", line 216, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 821, in toco_convert_impl
    data = toco_convert_protos(
  File "/usr/local/lib/python3.8/dist-packages/tensorflow/lite/python/convert.py", line 315, in toco_convert_protos
    raise converter_error
tensorflow.lite.python.convert_phase.ConverterError: /usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/save.py:1315:0: error: 'tf.TensorListReserve' op requires element_shape to be static during TF Lite transformation pass
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
/usr/local/lib/python3.8/dist-packages/tensorflow/python/saved_model/save.py:1315:0: error: failed to legalize operation 'tf.TensorListReserve' that was explicitly marked illegal
<unknown>:0: note: loc("StatefulPartitionedCall"): called from
<unknown>:0: error: Lowering tensor list ops is failed. Please consider using Select TF ops and disabling _experimental_lower_tensor_list_ops flag in the TFLite converter object. For example, converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
 converter._experimental_lower_tensor_list_ops = False
```
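The error message itself points at one workaround: keep the unsupported tensor-list ops as selected TF ops instead of lowering them to TFLite builtins. A minimal sketch of that route, applied to the repro model above:

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM

model_in = Input(shape=(800,))
model = Model(model_in, LSTM(8)(Embedding(300, 8)(model_in)))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Keep unsupported tensor-list ops as (selected) TF ops instead of
# lowering them to TFLite builtins, as the error message suggests.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # ordinary TFLite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops where needed
]
converter._experimental_lower_tensor_list_ops = False
tflite_model = converter.convert()
```

The trade-off is that the resulting .tflite file embeds Select TF (Flex) ops, so it only runs on interpreters built with the Flex delegate and is larger than a builtins-only model.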

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Reactions: 1
  • Comments: 39 (10 by maintainers)

Most upvoted comments

I think this issue should be reopened. I’m still seeing the failure in version 2.11.0-dev20220916.

It seems like `converter._experimental_default_to_single_batch_in_tensor_list_ops = True` removes the error, but since this flag is not documented, I would not count this as a fix.
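For context, this is how the flag slots into the usual conversion flow (a minimal sketch, rebuilding the toy model from the original report):

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM

# Rebuild the toy model from the original report.
model_in = Input(shape=(800,))
model = Model(model_in, LSTM(8)(Embedding(300, 8)(model_in)))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Undocumented, experimental: assume batch size 1 when lowering
# tensor-list ops, which makes their element_shape static.
converter._experimental_default_to_single_batch_in_tensor_list_ops = True
tflite_model = converter.convert()
```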

Some things that would be useful for devs to know:

  • Are there any downsides to using that flag?
  • Why isn’t this the default behavior of TF? It seems like “vanilla” LSTMs are supposed to be supported out of the box.
  • As @andreped mentioned, can we apply this fix when using the CLI?

Hi @mohantym, sorry for the late reply. I think whether SELECT_TF_OPS resolves the issue depends on the use case. Some compiler backends might only support the builtin TFLite ops, whereas SELECT_TF_OPS enables operations beyond those. In my opinion, batch_size=1 is a more general solution because it only requires the builtin TFLite ops.

Reference: https://www.tensorflow.org/lite/guide/ops_compatibility

I know that TFLite sometimes supports dynamic batch sizes, but this does not currently seem to work for LSTM. Perhaps TF could raise a more informative error when converting LSTMs with dynamic batch sizes? A sketch of the fixed-batch approach follows.
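A minimal sketch of the batch_size=1 route, assuming the toy model from the original report:

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM

# Pin the batch dimension instead of leaving it dynamic (None).
# With a fully static input shape, the LSTM's internal
# TensorListReserve gets a static element_shape and lowers cleanly.
model_in = Input(shape=(800,), batch_size=1)
model = Model(model_in, LSTM(8)(Embedding(300, 8)(model_in)))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()  # builtin ops only, no Flex needed
```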

@gaonmaor You can set the parameter `_experimental_default_to_single_batch_in_tensor_list_ops` on tensorflow==2.7.0: `converter._experimental_default_to_single_batch_in_tensor_list_ops = True`

Just tried @mohantym's gist, and it works, even in TF 2.9.1. What seems to be the fix is this: `converter._experimental_default_to_single_batch_in_tensor_list_ops = True`

So yes, this issue can likely be closed.

However, I don't see a simple way to do this through the CLI (`tflite_convert -h`).

It would be great if a solution existed for the CLI as well. Any ideas?
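One possible stopgap, since `tflite_convert` does not expose this experimental attribute as a flag: wrap the conversion in a short Python script and invoke that from the shell instead of the CLI. A hypothetical helper (name and paths are illustrative, not part of TensorFlow):

```python
#!/usr/bin/env python3
# convert_with_flag.py -- hypothetical stand-in for tflite_convert.
# Usage: python convert_with_flag.py <saved_model_dir> <output.tflite>
import sys
import tensorflow as tf

saved_model_dir, output_path = sys.argv[1], sys.argv[2]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
# The experimental flag discussed in this thread.
converter._experimental_default_to_single_batch_in_tensor_list_ops = True

with open(output_path, "wb") as f:
    f.write(converter.convert())
```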

Any update? I just observed the same failure using tensorflow==2.9.1 and Python 3.7.9 on Windows 10. I will try downgrading to 2.6.2 to see whether it works there.


EDIT: Successfully converted an RNN model to TFLite using 2.6.2. I had to save it in H5 format rather than SavedModel, but at least the conversion succeeded.
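For reference, a minimal sketch of that H5 round-trip under 2.6.2 (the commenter's actual model is not shown, so the toy LSTM from the issue stands in, and the file paths are illustrative):

```python
import tensorflow as tf
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM

# Stand-in RNN model for the commenter's unshown model.
model_in = Input(shape=(800,))
model = Model(model_in, LSTM(8)(Embedding(300, 8)(model_in)))

model.save("model.h5")  # legacy HDF5 format, not SavedModel

# Reload and convert from the in-memory Keras model.
reloaded = tf.keras.models.load_model("model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(reloaded)
tflite_model = converter.convert()
```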

@jvishnuvardhan I was able to replicate the issue on Colab using TF v2.7.0; please find the gist here for reference. Thanks!

Hi @gaonmaor @andreped! I was able to convert the above toy model to TFLite using the fusion lab instructions and @JinXiaozhao's suggested flag (without adding the select-ops syntax). Gist attached for reference. Can we consider this resolved now? Thank you!