tensorflow: Fails to convert SplitV with quantization

1. System information

  • OS: Ubuntu 20.04
  • TensorFlow installation: conda
  • TensorFlow library version: 2.8.1

2. Code

When converting a model from TensorFlow to TFLite, I run into an error involving SplitV (see below) when setting up the pipeline with int8 quantization, but not when using regular conversion (without quantization).

I basically follow these instructions:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset  # calibration data generator
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8  # or tf.uint8
converter.inference_output_type = tf.int8  # or tf.uint8
tflite_quant_model = converter.convert()
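
The snippet assumes a representative_dataset generator is already defined. For completeness, a minimal sketch of such a generator (the input shape and sample count are placeholders for whatever the model actually expects):

import numpy as np

def representative_dataset():
    # Yield a handful of samples matching the model's input signature;
    # the converter uses them to calibrate the quantization ranges.
    for _ in range(100):
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]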

I get the following error:

There are unresolved custom ops: [SplitV]
Encountered unresolved custom op: SplitV.

It works fine if I add SELECT_TF_OPS, but I want to keep the interpreter as small and fast as possible. So my assumption is that the SplitV operator in TFLite is not available with int8 quantization? How can I tell beforehand which operators are available with int8 quantization? A probe I tried is sketched below.
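
One way to probe this without touching the full model is to convert a tiny one-op model that exercises SplitV under the same quantization settings. A sketch (the shapes, split sizes, and save path are arbitrary):

import numpy as np
import tensorflow as tf

class Probe(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 6], tf.float32)])
    def __call__(self, x):
        # Unequal split sizes force a SplitV op (equal splits lower to Split).
        a, b = tf.split(x, [2, 4], axis=1)
        return {"a": a, "b": b}

probe = Probe()
tf.saved_model.save(probe, "/tmp/splitv_probe",
                    signatures=probe.__call__.get_concrete_function())

def representative_dataset():
    for _ in range(10):
        yield [np.random.rand(1, 6).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/splitv_probe")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

try:
    converter.convert()
    print("SplitV converted under full-int8 quantization")
except Exception as e:  # the converter raises ConverterError on unsupported ops
    print("Conversion failed:", e)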

About this issue

  • State: closed
  • Created a year ago
  • Comments: 16 (5 by maintainers)

Most upvoted comments

Hi @pjpratik & @DiXcipuli,

Although we have support for the SplitV op in TFLite, the datatype coverage for it is not complete: TFLite requires the size_splits and split_dim inputs to be int32, but the source model has them as int64.

Source: https://github.com/tensorflow/tensorflow/blob/14f08e56d46936de39a9a0e09225eebd9ad836d9/tensorflow/compiler/mlir/lite/ir/tfl_ops.td#L3728
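
If you control the model source, one possible workaround is to cast whatever feeds size_splits (and split_dim, if it is a tensor) to int32 before calling tf.split. A sketch under the assumption that the split sizes come from shape math that produced int64 (the shape computation here is illustrative):

import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec([1, None], tf.float32)])
def split_fn(x):
    # Shape math can yield int64 tensors (e.g. tf.shape(..., out_type=tf.int64)
    # or values imported from another framework); TFLite's SplitV kernel
    # expects int32 for size_splits and split_dim, so cast explicitly.
    n = tf.shape(x, out_type=tf.int64)[1]
    half = n // 2
    size_splits = tf.cast(tf.stack([half, n - half]), tf.int32)
    return tf.split(x, size_splits, axis=1)

With the inputs in int32, the converter should be able to lower the op to the builtin SplitV kernel instead of leaving it as an unresolved custom op.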