edgetpu: ERROR: Didn't find op for builtin opcode 'TRANSPOSE_CONV' version '3'

Hi, I am trying to compile a model that uses tensorflow.keras.layers.Conv2DTranspose and get the following error:

$ edgetpu_compiler model_quant.tflite
Edge TPU Compiler version 2.1.302470888
ERROR: Didn't find op for builtin opcode 'TRANSPOSE_CONV' version '3'

ERROR: Registration failed.

Invalid model: model_quant.tflite
Model could not be parsed

I followed the "Retrain a classification model using post-training quantization" notebook and installed tf-nightly:

$ pip list | grep tf
tf-estimator-nightly           2.3.0.dev2020061401
tf-nightly                     2.3.0.dev20200614
tflite-runtime                 2.1.0.post1

First, create a basic (un-quantized) TensorFlow Lite model with TFLiteConverter:

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

Convert the model again with post-training quantization:

dataset_dir = "path/to/dataset"

# A generator that provides a representative dataset for calibration
def representative_data_gen():
    dataset_list = tf.data.Dataset.list_files(dataset_dir + '/*.jpg')
    # Iterate over the dataset directly; calling next(iter(...)) inside a
    # loop would re-read the same first file on every step
    for image_path in dataset_list.take(100):
        image = tf.io.read_file(image_path)
        image = tf.io.decode_jpeg(image, channels=3)
        image = tf.image.resize(image, [IMAGE_SIZE, IMAGE_SIZE])
        image = tf.cast(image / 255., tf.float32)
        image = tf.expand_dims(image, 0)
        yield [image]

# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.int8]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# These set the input and output tensors to uint8 (added in r2.3)
converter.inference_input_type = tf.uint8   # also tried with tf.int8
converter.inference_output_type = tf.uint8  # also tried with tf.int8
# And this sets the representative dataset so we can quantize the activations
converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)
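
To double-check that the converter actually produced uint8 I/O tensors, the types can be inspected with the TFLite interpreter (a quick sketch):

import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_quant.tflite')
interpreter.allocate_tensors()
# With inference_input_type/output_type set as above, both should be uint8
print('input dtype: ', interpreter.get_input_details()[0]['dtype'])
print('output dtype:', interpreter.get_output_details()[0]['dtype'])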

Finally, I tried to compile model_quant.tflite with the edgetpu_compiler (version 2.1.302470888), which produced the error shown above.

According to the documentation, TRANSPOSE_CONV should be one of the supported operations.
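
As a side note: newer TensorFlow releases (roughly 2.7 and up, so not the versions listed above) ship a model analyzer that lists every operator in the .tflite flatbuffer, which makes it easy to confirm that TRANSPOSE_CONV actually ended up in the model:

import tensorflow as tf

# Prints each operator and tensor in the .tflite flatbuffer
tf.lite.experimental.Analyzer.analyze(model_path='model_quant.tflite')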

Any idea how to fix this error? Thank you.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 18 (1 by maintainers)

Most upvoted comments

@Namburger thanks for looking into this and for the provided link. I’ve also tried using TensorFlow 2.2, which results in the following error:

$ edgetpu_compiler model_quant.tflite
Edge TPU Compiler version 2.1.302470888

Internal compiler error. Aborting!

I am not using the model from the retrain script but my own, which contains:

from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU

...

x = Conv2DTranspose(f, (3, 3), strides=2, padding="same")(x)
x = LeakyReLU(alpha=0.2)(x)

...

I guess the problem is that LeakyReLU is not supported? However, the error message points at TRANSPOSE_CONV instead (when using tf-nightly 2.3).

I’ll try to use ReLU instead of LeakyReLU and test with both tensorflow versions 2.2 and 2.3 (nightly).
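
Concretely, the swap would look like this (same layer parameters as above):

from tensorflow.keras.layers import Conv2DTranspose, ReLU

x = Conv2DTranspose(f, (3, 3), strides=2, padding="same")(x)
x = ReLU()(x)  # swapped in for LeakyReLU(alpha=0.2)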

@fjp I have an update: the sweet spot for our released compiler right now is TensorFlow 2.2, if you want to downgrade to that. FYI, you may need to uninstall tf-nightly completely before installing tf 2.2 😕

 » python3 -m pip install tensorflow==2.2
 » python3 -c 'print(__import__("tensorflow").__version__)'
2.2.0

Here is an example of building such a model with Conv2DTranspose (with tf 2.1.x the TFLite converter fails because the op wasn’t supported yet, and with tf 2.3.x the edgetpu_compiler fails):

import tensorflow as tf
import numpy as np

# Create the base model from the pre-trained MobileNet V2
base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')

base_model.trainable = False
model = tf.keras.Sequential([
  base_model,
  tf.keras.layers.Conv2DTranspose(filters=32, kernel_size=3, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.GlobalAveragePooling2D(),
  tf.keras.layers.Dense(units=5, activation='softmax')
])
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
print('Number of trainable weights = {}'.format(len(model.trainable_weights)))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# This enables quantization
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.int8]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# A representative dataset of random inputs, used to calibrate the quantization
def representative_data_gen():
    for _ in range(100):
        yield [np.random.random((1, 224, 224, 3)).astype(np.float32)]

converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)

Note that these options are only available in TensorFlow 2.3 and up:

converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Therefore, on tf 2.2 your I/O tensors will still be float.
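
So at inference time you feed and read float32 tensors, e.g. with the plain TFLite interpreter (a minimal sketch; for the Edge TPU you would load the compiled model together with the libedgetpu delegate):

import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='model_quant.tflite')
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# I/O stays float32 because inference_input_type/output_type weren't set
interpreter.set_tensor(inp['index'],
                       np.random.random((1, 224, 224, 3)).astype(np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out['index']).shape)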

and compile:

$ edgetpu_compiler -s model_quant.tflite
Edge TPU Compiler version 2.1.302470888

Model compiled successfully in 545 ms.

Input model: model_quant.tflite
Input size: 3.09MiB
Output model: model_quant_edgetpu.tflite
Output size: 3.11MiB
On-chip memory used for caching model parameters: 3.33MiB
On-chip memory remaining for caching model parameters: 4.39MiB
Off-chip memory used for streaming uncached model parameters: 0.00B
Number of Edge TPU subgraphs: 1
Total number of operations: 74
Operation log: model_quant_edgetpu.log

Model successfully compiled but not all operations are supported by the Edge TPU. A percentage of the model will instead run on the CPU, which is slower. If possible, consider updating your model to use only operations supported by the Edge TPU. For details, visit g.co/coral/model-reqs.
Number of operations that will run on Edge TPU: 72
Number of operations that will run on CPU: 2

Operator                       Count      Status

DEQUANTIZE                     1          Operation is working on an unsupported data type
MEAN                           1          Mapped to Edge TPU
FULLY_CONNECTED                1          Mapped to Edge TPU
SOFTMAX                        1          Mapped to Edge TPU
ADD                            10         Mapped to Edge TPU
PAD                            5          Mapped to Edge TPU
QUANTIZE                       1          Operation is otherwise supported, but not mapped due to some unspecified limitation
CONV_2D                        35         Mapped to Edge TPU
TRANSPOSE_CONV                 1          Mapped to Edge TPU
RELU                           1          Mapped to Edge TPU
DEPTHWISE_CONV_2D              17         Mapped to Edge TPU

Hope this helps 😃