tensorflow: (CONCATENATION) failed to prepare

1. System information

  • OS Platform and Distribution: macOS Monterey 12.4
  • TensorFlow installation (pip package or built from source): pip package
  • TensorFlow library (version, if pip package or github SHA, if built from source): 2.9.1
  • Python: 3.7.10

2. Code

Converted checkpoint to reproduce the issue: https://drive.google.com/file/d/1iWPcJ3wC2xV-xz3lIiGQrgteGXZOLt5P/view?usp=sharing

import numpy as np
import tensorflow as tf

# tflite_path points to the converted checkpoint linked above
interpreter = tf.lite.Interpreter(model_path=tflite_path)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
input_data = np.ones([1, 3, 416, 416], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
output_data = interpreter.get_tensor(output_details[0]['index'])
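The conversion log below shows fully dynamic input shapes (tensor<?x?x?x?xf32>) and data_format = "NHWC". A workaround worth trying is to pin the dynamic dimensions to a concrete NHWC shape with resize_tensor_input before allocation, so shape propagation sees fixed sizes. A minimal self-contained sketch (it builds a tiny stand-in Keras model purely so the snippet runs; with the linked checkpoint you would instead pass model_path=tflite_path):

```python
import numpy as np
import tensorflow as tf

# Stand-in model with dynamic spatial dimensions, only so this snippet is
# self-contained; the real model comes from the converted checkpoint above.
inp = tf.keras.Input(shape=(None, None, 3))
out = tf.keras.layers.Conv2D(4, 3, padding="same")(inp)
tflite_model = tf.lite.TFLiteConverter.from_keras_model(
    tf.keras.Model(inp, out)).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
input_details = interpreter.get_input_details()

# Pin the dynamic dims to a concrete shape *before* allocating tensors.
# Note the NHWC layout: the conversion log reports data_format = "NHWC",
# so the input would be [1, 416, 416, 3] rather than [1, 3, 416, 416].
interpreter.resize_tensor_input(input_details[0]['index'], [1, 416, 416, 3])
interpreter.allocate_tensors()

interpreter.set_tensor(input_details[0]['index'],
                       np.ones([1, 416, 416, 3], dtype=np.float32))
interpreter.invoke()
```

Whether this avoids the CONCATENATION prepare failure on the real model is untested here; it is a standard mitigation for dynamic-shape TFLite models.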

3. Failure after conversion

The conversion itself succeeds, but the generated model cannot run: invoke() fails to prepare a CONCATENATION node (full error in section 5 below).

5. (optional) Any other info / logs

Warning emitted when the model is converted:

2022-07-18 10:49:42.054397: W tensorflow/compiler/mlir/lite/flatbuffer_export.cc:1901] TFLite interpreter needs to link Flex delegate in order to run the model since it contains the following Select TFop(s):
Flex ops: FlexConv2D, FlexRange
Details:
  tf.Conv2D(tensor<?x?x?x?xf32>, tensor<3x3x12x24xf32>) -> (tensor<?x?x?x24xf32>) : {data_format = "NHWC", device = "", dilations = [1, 1, 1, 1], explicit_paddings = [], padding = "VALID", strides = [1, 1, 1, 1], use_cudnn_on_gpu = true}
  tf.Range(tensor<i64>, tensor<i64>, tensor<i64>) -> (tensor<?xi64>) : {device = ""}
See instructions: https://www.tensorflow.org/lite/guide/ops_select
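This warning appears when the converter is allowed to fall back to Select TF (Flex) ops. For reference, a sketch of the converter configuration that permits that fallback (presumably what was used here; the tiny Keras model below is a placeholder, since the actual model comes from the linked checkpoint):

```python
import tensorflow as tf

# Placeholder model so the sketch is runnable; the real conversion would
# start from the YOLO-style checkpoint linked in section 2.
inp = tf.keras.Input(shape=(416, 416, 3))
model = tf.keras.Model(inp, tf.keras.layers.Conv2D(24, 3)(inp))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,   # prefer builtin TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,     # fall back to TF kernels (Flex ops)
]
tflite_model = converter.convert()
```

Any op that cannot be lowered to a builtin kernel (here FlexConv2D and FlexRange, due to the dynamic input shapes) is then delegated to the Flex delegate at runtime, as the log above describes.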

Model fails on invoke with:

INFO: Created TensorFlow Lite delegate for select TF ops.
2022-07-18 10:28:53.074011: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
INFO: TfLiteFlexDelegate delegate: 13 nodes delegated out of 797 nodes with 3 partitions.
RuntimeError: tensorflow/lite/kernels/concatenation.cc:158 t->dims->data[d] != t0->dims->data[d] (9 != 13) Node number 250 (CONCATENATION) failed to prepare.
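The failing check requires that every dimension except the concatenation axis match across all inputs to the CONCATENATION node; "(9 != 13)" means two inputs to node 250 disagree on a non-axis dimension (e.g. 9x9 vs 13x13 feature maps, plausible for a YOLO-style detection head with dynamic shapes). An illustrative re-implementation of that check (not the actual kernel code):

```python
def check_concat_shapes(shapes, axis):
    """Illustrative re-implementation of the shape check in
    tensorflow/lite/kernels/concatenation.cc: every dimension except the
    concatenation axis must match across all inputs."""
    t0 = shapes[0]
    for t in shapes[1:]:
        for d, (want, got) in enumerate(zip(t0, t)):
            if d != axis and got != want:
                raise RuntimeError(
                    f"{got} != {want} Node (CONCATENATION) failed to prepare")
    return True

# Matching non-axis dims: concatenating along the channel axis is fine.
check_concat_shapes([(1, 13, 13, 24), (1, 13, 13, 32)], axis=3)

# The reported "(9 != 13)" corresponds to inputs whose non-axis
# dimensions disagree, e.g. 9x9 vs 13x13 feature maps:
# check_concat_shapes([(1, 13, 13, 24), (1, 9, 9, 24)], axis=3)  # raises
```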

About this issue

  • State: closed
  • Created 2 years ago
  • Comments: 27 (13 by maintainers)

Most upvoted comments

@LukeBoyer could you take a look at this issue? Thanks!