tensorflow: Model converts to TFLite but invocation fails

System information

  • OS Platform: Windows, Linux
  • TensorFlow version: 2.4.1

Standalone code to reproduce the issue https://colab.research.google.com/drive/1f22ow0a4p1WQLdlm7AIZ0V3DHbxI8EjG

Or, if you prefer, here is the same code as in the Google Colab to reproduce the problem:

from tensorflow.keras.layers import (concatenate, Input, LSTM, Bidirectional, Embedding,
                                     Dense, TimeDistributed, SpatialDropout1D)
from tensorflow.keras.models import Model
import tensorflow as tf
print(tf.__version__)

# Create the TensorFlow model: word embeddings, a character-level LSTM encoder,
# and two extra feature inputs, concatenated and fed through a BiLSTM tagger
word_in = Input(shape=(300,), name="input_wor")
emb_wor = Embedding(input_dim=1834, output_dim=16, input_length=300, mask_zero=True, name="emb_wor")(word_in)
char_in = Input(shape=(300, 20), name="input_char")
emb_char = TimeDistributed(Embedding(input_dim=132, output_dim=32, input_length=20, mask_zero=True, name="emb_char"))(char_in)
char_enc = TimeDistributed(LSTM(units=32, return_sequences=False, recurrent_dropout=0.15, name="char_enc"))(emb_char)
input_pos = Input(shape=(300, 4), name="input_pos")
input_par = Input(shape=(300, 3), name="input_par")

# Concatenate the four feature streams along the last axis
x = concatenate([emb_wor, char_enc, input_pos, input_par])
x = SpatialDropout1D(0.1)(x)
main_lstm = Bidirectional(LSTM(units=64, return_sequences=True, dropout=0., recurrent_dropout=0.1, name="main_lstm"))(x)
outputs = TimeDistributed(Dense(4, activation="softmax", name="out"))(main_lstm)

inputs = [word_in, char_in, input_pos, input_par]
model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()


# Convert the model to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
  tf.lite.OpsSet.TFLITE_BUILTINS, # enable TensorFlow Lite ops.
  tf.lite.OpsSet.SELECT_TF_OPS # enable TensorFlow ops.
]
converter.experimental_new_converter = True
tflite_model = converter.convert()
with open("model.tflite", 'wb') as f:
  f.write(tflite_model)


# # Install tflite_runtime
# !pip3 install --extra-index-url https://google-coral.github.io/py-repo/ tflite_runtime

use_tflite_runtime = False  # If True, restart the runtime first before running this code
import numpy as np

if use_tflite_runtime:
  import tflite_runtime.interpreter as tflite
  interpreter = tflite.Interpreter(model_path="model.tflite")
else:
  import tensorflow as tf
  interpreter = tf.lite.Interpreter(model_path="model.tflite")

interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

print("input_details", input_details)
print("output_details", output_details)

# Set random values (note: set_tensor expects the tensor index from
# input_details, not the position in the list)
for i, detail in enumerate(input_details):
    x = np.random.random(detail["shape"])
    interpreter.set_tensor(detail["index"], x.astype(detail["dtype"]))
    print(i, detail["name"], detail["shape"], detail["dtype"], "/", x.shape)

# Invoke
interpreter.invoke()
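
If invocation succeeded, the outputs would then be read back through the same detail dicts (a minimal sketch using the output_details fetched above):

# Read the output tensor back after a successful invoke()
y = interpreter.get_tensor(output_details[0]["index"])
print(output_details[0]["name"], y.shape)  # expected (1, 300, 4) for this model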

Text output from tflite_convert

2021-04-12 13:24:02.507539: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
2021-04-12 13:24:07.691525: I tensorflow/core/grappler/devices.cc:69] Number of eligible GPUs (core count >= 8, compute capability >= 0.0): 0
2021-04-12 13:24:07.691860: I tensorflow/core/grappler/clusters/single_machine.cc:357] Starting new session
2021-04-12 13:24:07.730306: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:1144] Optimization results for grappler item: graph_to_optimize
  function_optimizer: Graph size after: 447 nodes (0), 564 edges (0), time = 6.973ms.
  function_optimizer: Graph size after: 447 nodes (0), 564 edges (0), time = 6.713ms.
Optimization results for grappler item: model_bidirectional_forward_main_lstm_while_body_19625
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: model_bidirectional_backward_main_lstm_while_cond_19891
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: model_time_distributed_1_char_enc_while_body_19340
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: model_bidirectional_forward_main_lstm_while_cond_19624
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: model_time_distributed_1_char_enc_while_cond_19339
  function_optimizer: function_optimizer did nothing. time = 0ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.
Optimization results for grappler item: model_bidirectional_backward_main_lstm_while_body_19892
  function_optimizer: function_optimizer did nothing. time = 0.001ms.
  function_optimizer: function_optimizer did nothing. time = 0ms.

2021-04-12 13:24:08.058077: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:345] Ignored output_format.
2021-04-12 13:24:08.058217: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:348] Ignored drop_control_dependency.
2021-04-12 13:24:08.086587: I tensorflow/compiler/mlir/tensorflow/utils/dump_mlir_util.cc:210] disabling MLIR crash reproducer, set env var `MLIR_CRASH_REPRODUCER_DIRECTORY` to enable.
2021-04-12 13:24:08.172281: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1782] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following flex op(s):
Flex ops: FlexAll
Details:
	tf.All {device = "", keep_dims = false}

When I run the invocation using tflite_runtime, I get this error:

RuntimeError: Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.Node number 16 (FlexAll) failed to prepare.
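
The tflite_runtime error is presumably expected: as far as I know, the standalone tflite_runtime wheel does not bundle the Flex delegate, so a model converted with SELECT_TF_OPS has to be run through the full TensorFlow package, whose interpreter links Flex automatically. A minimal sketch:

# Assumed workaround for the Flex error only: the full TensorFlow pip package
# links the Flex delegate into tf.lite.Interpreter; the standalone
# tflite_runtime wheel does not.
import tensorflow as tf
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()  # FlexAll now resolves, but see the next error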

When I run the invocation using TensorFlow, I get the following error:

RuntimeError: external/org_tensorflow/tensorflow/lite/kernels/concatenation.cc:76 t->dims->data[d] != t0->dims->data[d] (300 != 1)Node number 37 (CONCATENATION) failed to prepare.
Node number 49 (WHILE) failed to invoke.

Either way, the invocation fails, and it seems to have something to do with the concatenate layer. I would highly appreciate an answer or, ideally, a solution. As you can see, the model does convert, but the invocation doesn't run. I tested it on both Windows and Linux: same problem, same error.

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 26 (15 by maintainers)

Most upvoted comments

@TimbusCalin, try this:

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS]
converter._experimental_lower_tensor_list_ops = False
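
Put together, the conversion with the suggested flags would look like this (a sketch; _experimental_lower_tensor_list_ops is a private attribute and may change between releases):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # TensorFlow Lite builtin ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TensorFlow (Flex) ops
]
# Keep TensorList ops as Flex ops instead of lowering them; per this comment,
# that works around the WHILE/CONCATENATION shape error (private flag)
converter._experimental_lower_tensor_list_ops = False
tflite_model = converter.convert()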

https://github.com/tensorflow/tensorflow/commit/904b3926ed1c6c70380d5313d282d248a776baa1 is the fix. I expect that tomorrow's tf-nightly build will include it.