tensorflow: RuntimeError: Quantization not yet supported for op: CUSTOM in Post Training Quantization

System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (or github SHA if from source): r1.15

The following error occurs when performing post-training full integer quantization of weights and activations. I am using the SSD MobileNet V2 model from the TensorFlow model zoo.

I am following the exact steps given on the TensorFlow website. I think the error is caused by the add_postprocessing_op parameter in export_tflite_ssd_graph.py, which adds a custom postprocessing op; no INT8 implementation is available for that op.

How can this issue be solved for post-training quantization? Is there any temporary workaround?

Any other info / logs

Code to reproduce:

import tensorflow as tf
# num_calibration_steps and the sample `input` array are placeholders to be
# defined by the user (e.g. a loop over calibration images).
def representative_dataset_gen():
  for _ in range(num_calibration_steps):
    # Get sample input data as a numpy array in a method of your choosing.
    yield [input]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
converter.representative_dataset = representative_dataset_gen
tflite_quant_model = converter.convert()

Traceback (most recent call last):
  File "weight_quantize.py", line 46, in <module>
    tflite_quant_model = converter.convert()
  File "/home/morrisc/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/lite.py", line 993, in convert
    inference_output_type)
  File "/home/morrisc/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/lite.py", line 239, in _calibrate_quantize_model
    inference_output_type, allow_float)
  File "/home/morrisc/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/optimize/calibrator.py", line 78, in calibrate_and_quantize
    np.dtype(output_type.as_numpy_dtype()).num, allow_float)
  File "/home/morrisc/.local/lib/python3.5/site-packages/tensorflow_core/lite/python/optimize/tensorflow_lite_wrap_calibration_wrapper.py", line 115, in QuantizeModel
    return _tensorflow_lite_wrap_calibration_wrapper.CalibrationWrapper_QuantizeModel(self, input_py_type, output_py_type, allow_float)
RuntimeError: Quantization not yet supported for op: CUSTOM

However, if I don't enforce full integer quantization for all ops and don't use integer input and output, the conversion works. But such a model won't run on the Edge TPU.
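For reference, a minimal sketch of that working fallback conversion (integer quantization with float fallback, not enforcing TFLITE_BUILTINS_INT8). saved_model_dir and representative_dataset_gen are the same placeholders as in the snippet above; this is an assumed reconstruction of the non-enforced setup, not the exact code used:

import tensorflow as tf

# Same setup as above, but without TFLITE_BUILTINS_INT8 or integer input/output.
# Ops that cannot be quantized (such as the custom postprocessing op) stay in
# float, so conversion succeeds, but the model is not fully integer and will
# not compile for the Edge TPU.
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.allow_custom_ops = True  # harmless if the converter already accepts the custom op
tflite_model = converter.convert()

with open('float_fallback_model.tflite', 'wb') as f:
    f.write(tflite_model)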

About this issue

  • Original URL
  • State: closed
  • Created 5 years ago
  • Comments: 20 (4 by maintainers)

Most upvoted comments

Try the combination of flags below. The lines commented out are flags that need to be removed.

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
#converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
#converter.inference_input_type = tf.uint8
#converter.inference_output_type = tf.uint8
converter.allow_custom_ops = True
converter.representative_dataset = representative_dataset_gen
converter.experimental_new_converter = False
tflite_quant_model = converter.convert()

Just enable the new quantizer, and try to use a SavedModel export without the preprocessing inside it.

import numpy as np
import tensorflow as tf

# Convert the model
converter = tf.lite.TFLiteConverter.from_saved_model('/path/saved_model')

# representative_image is a placeholder for a real calibration image (NumPy array).
def representative_data_gen():
    img = np.clip(representative_image, -1, 1).astype(np.float32)
    yield [img]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
# This ensures that if any ops can't be quantized, the converter throws an error
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

# Comment out the next four lines if you do not want to quantize the input.

# converter.quantized_input_stats = {'serving_default_input':[0,1]}
converter._experimental_new_quantizer = True
# converter.inference_input_type = tf.uint8
# converter.inference_output_type = tf.float32

converter.allow_custom_ops = True

converter.representative_dataset = representative_data_gen
tflite_model = converter.convert()

# Save the model.
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

I think my issue is resolved, thanks. Best, -Joseph

Hey, how did you resolve that?

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.allow_custom_ops = True
converter.experimental_new_converter = False
tflite_quant_model = converter.convert()

Finally, this gave me an output .tflite file; I'll update you on its correctness.
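A quick way to sanity-check the converted file is to load it with the TFLite interpreter and run one forward pass on dummy data. This is a hedged sketch: 'model.tflite' is a placeholder for whatever path you saved to, and it assumes the interpreter you use registers any custom ops present in the model.

import numpy as np
import tensorflow as tf

# Load the converted model and inspect its input/output signature.
interpreter = tf.lite.Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details[0]['dtype'], input_details[0]['shape'])

# Run one inference on zero-filled data of the expected shape and dtype.
dummy = np.zeros(input_details[0]['shape'], dtype=input_details[0]['dtype'])
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()
print([interpreter.get_tensor(d['index']).shape for d in output_details])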

Hi There,

We are checking to see if you still need help on this, as you are using an older version of TensorFlow, which is officially considered end of life. We recommend that you upgrade to the latest 2.x version and let us know if the issue still persists in newer versions. Please open a new issue for any help you need against 2.x, and we will get you the right help.

This issue will be closed automatically 7 days from now. If you still need help with this issue, please provide us with more information.

Sorry for the delayed response. Please remove this line:

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]

The custom op needs to be kept in float execution. That line forces full integer quantization of every op, including the custom op, which doesn't work.
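Putting that advice together, a minimal sketch of the resulting converter setup (assuming the same saved_model_dir and representative_dataset_gen placeholders used earlier in the thread):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
# Do not set target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8];
# that forces every op, including the custom postprocessing op, to be quantized.
converter.allow_custom_ops = True  # leave the custom op in float execution
tflite_quant_model = converter.convert()

Built-in ops are then quantized based on the representative dataset, while the custom postprocessing op stays in float.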