tensorflow: Cannot quantize part of a model


System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 18.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.5.0-dev20201208
  • Python version: 3.7

Standalone code to reproduce the issue: I followed the tutorial here to see whether I can quantize only part of a model. Here is my code:

import pathlib

import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Annotate the 1st and 3rd Dense layers for quantization-aware training.
i = tf.keras.Input(shape=(20,))
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(i)
x = tf.keras.layers.Dense(10)(x)
x = tfmot.quantization.keras.quantize_annotate_layer(tf.keras.layers.Dense(10))(x)
o = tf.keras.layers.Flatten()(x)
annotated_model = tf.keras.Model(inputs=i, outputs=o)
quant_aware_model = tfmot.quantization.keras.quantize_apply(annotated_model)

# Convert to TFLite with default optimizations.
converter = tf.lite.TFLiteConverter.from_keras_model(quant_aware_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_tflite_model = converter.convert()
pathlib.Path('/tmp/tmp.tflite').write_bytes(quantized_tflite_model)

Basically, I want to quantize the 1st and 3rd Dense layers of the model. Here is how the resulting model looks: [screenshot: Screen Shot 2020-12-30 at 8 02 19 PM]. Apparently, the 2nd quantization annotation ends up in the wrong place…

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Reactions: 1
  • Comments: 23 (6 by maintainers)

Most upvoted comments

Any update on adding selective quantization of layers to TFLiteConverter? I’d like to see this added too.