tensorflow: RuntimeError: Inputs and outputs not all float|uint8|int16 types.Node number 2 (ADD) failed to invoke.

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (or github SHA if from source): 1.15

Command used to run the converter, or code if you’re using the Python API. If possible, please share a link to a Colab/Jupyter/any notebook.

import numpy as np
import tensorflow as tf
from PIL import Image

interpreter = tf.lite.Interpreter(model_path="out.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)
# [{'name': 'image', 'index': 21904, 'shape': array([  3, 270, 480], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
print(output_details)
# [{'name': 'action', 'index': 7204, 'shape': array([], dtype=int32), 'dtype': <class 'numpy.float32'>, 'quantization': (0.0, 0)}]
input_shape = input_details[0]['shape']
input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

The output from the converter invocation

RuntimeError: Inputs and outputs not all float|uint8|int16 types.Node number 2 (ADD) failed to invoke.

Also, please include a link to the saved model or GraphDef

https://we.tl/t-lWH3XmYihS <-- .pb
https://we.tl/t-Bkid4ThzN1 <-- .tflite

Failure details: The conversion succeeds and I can run inference on the .pb, but I get the following error when running inference on the .tflite file.

RuntimeError: Inputs and outputs not all float|uint8|int16 types.Node number 2 (ADD) failed to invoke.

Any other info / logs

The model was trained in PyTorch (MobileNetV2 architecture), exported to ONNX, and then converted to TensorFlow.
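For context, a rough sketch of that export pipeline (the torchvision model, file names, and the 1x3x270x480 dummy input are illustrative assumptions, not the exact commands used for this model; newer onnx-tf versions export a SavedModel directory instead of a single .pb):

import torch
import torchvision
import onnx
from onnx_tf.backend import prepare

# Illustrative stand-in for the trained network.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 270, 480)
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["image"], output_names=["action"])

onnx_model = onnx.load("model.onnx")
tf_rep = prepare(onnx_model)     # onnx-tf backend
tf_rep.export_graph("out.pb")    # frozen graph then fed to the TFLite converter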

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 37 (4 by maintainers)

Most upvoted comments

This is the code that works for me on TensorFlow 2.1.0 (CPU version), with onnx-tf built from the master branch.

import tensorflow as tf

# wrap_frozen_graph and input_tensor are defined elsewhere (see the sketch below).
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(open(tf_model_path, 'rb').read())
concrete_func = wrap_frozen_graph(graph_def, inputs=input_tensor, outputs=["scores:0", "boxes:0"])
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_func])
converter.experimental_new_converter = True
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8, tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
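Note that wrap_frozen_graph is not a public TensorFlow API; the snippet above assumes a helper along these lines (adapted from the frozen-graph loading pattern in the blog post linked further down):

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Re-import the frozen GraphDef into a wrapped tf.function and prune it
    # down to the requested input/output tensors.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
    import_graph = wrapped_import.graph
    return wrapped_import.prune(
        tf.nest.map_structure(import_graph.as_graph_element, inputs),
        tf.nest.map_structure(import_graph.as_graph_element, outputs))

For the model in this issue, the tensor names would presumably be something like inputs="image:0" and outputs=["action:0"], matching the input/output details printed above.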

I am using the MLIR converter and still see this issue.

@jaggernaut007 Try this notebook, or the file provided by @JimKimHome. The new MLIR-based model converter should avoid this bug.

I’m still trying to get wrap_frozen_graph() working for my case.

@codeislife99 Thanks for the tip above! Using wrap_frozen_graph worked for me too. I use the same TensorFlow version as yours, converted the .pb model to a .tflite model, and loaded it on Android; the interpreter can now run inference. At first the wrap_frozen_graph function was not recognized; then I found it at ‘https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/’ and got it working. The full version of the conversion code is uploaded: pb_to_tflite.txt
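For completeness, the converted flatbuffer still has to be written to disk before it can be bundled with an Android app; a minimal sketch (the file name is a placeholder):

# Serialize the converted model so it can be bundled with the app
# (e.g. under the Android project's assets/ directory).
with open("out.tflite", "wb") as f:
    f.write(tflite_model)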