tensorflow: Best methods to debug and identify dynamic-sized tensors to work around: TensorFlow Lite Error: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors?

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 11.2.3
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: iPhone 12 pro
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): v1.12.1-53831-ga8b6d5ff93a 2.5.0-rc0
  • Python version: 3.8

Describe the current behavior I converted Google’s RepNet model to a TFLite model to run on iOS. The model appears to use too much RAM to run on the CPU, so I am trying to run it on the GPU. However, TensorFlow Lite Error: Attempting to use a delegate that only supports static-sized tensors with a graph that has dynamic-sized tensors. is thrown at runtime on iOS.

Describe the expected behavior I understand dynamic-sized tensors cannot be used with tflite (edit: the TFLite GPU delegate), but how can I debug to identify the use of dynamic-sized tensors?

Standalone code to reproduce the issue https://colab.research.google.com/github/google-research/google-research/blob/master/repnet/repnet_colab.ipynb

# converting to tflite with
import tensorflow as tf

# get_repnet_model and PATH_TO_CKPT come from the RepNet colab linked above
model = get_repnet_model(PATH_TO_CKPT)

tf.keras.models.save_model(model, "repnet_savedmodel")

# Convert the model using TFLiteConverter
converter = tf.lite.TFLiteConverter.from_saved_model("repnet_savedmodel")
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # enable TensorFlow Lite ops
    tf.lite.OpsSet.SELECT_TF_OPS,    # enable select TensorFlow ops
]
tflite_model = converter.convert()
with open("repnet.tflite", "wb") as f:
    f.write(tflite_model)
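One way to identify dynamic-sized tensors is to load the converted .tflite file with the Python tf.lite.Interpreter and scan the details returned by get_tensor_details() for a -1 in shape_signature. A minimal sketch, assuming the tensor-detail dict layout of the Python Interpreter API (the helper function itself is hypothetical, not part of TFLite):

```python
def find_dynamic_tensors(tensor_details):
    """Return (name, shape_signature) pairs for tensors with a -1 (dynamic) dim.

    tensor_details is the list of dicts returned by
    tf.lite.Interpreter.get_tensor_details(); only the 'name' and
    'shape_signature' keys (falling back to 'shape') are used here.
    """
    dynamic = []
    for d in tensor_details:
        sig = [int(x) for x in d.get("shape_signature", d["shape"])]
        if -1 in sig:
            dynamic.append((d["name"], sig))
    return dynamic

# Usage against the converted model (requires tensorflow):
# interpreter = tf.lite.Interpreter(model_path="repnet.tflite")
# for name, sig in find_dynamic_tensors(interpreter.get_tensor_details()):
#     print(name, sig)
```

Any tensor this reports is one the GPU delegate will reject.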

About this issue

  • Original URL
  • State: closed
  • Created 3 years ago
  • Comments: 16 (6 by maintainers)

Most upvoted comments

Actually, TFLite supports dynamic dimension tensors, but GPU acceleration is not supported for dynamic dimension tensors.

If CPU-based graph execution satisfies your use case, you can consider running the graph without enabling the GPU delegate.

If the graph does not have any If or While ops and all the tensors can have static shapes (you can determine them with the Netron visualizer), the graph can be accelerated with the GPU delegate.

Setting the dynamic-dimension input tensors to fixed values may make some or all of the tensors static-shaped.
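One way to pin the input shapes before conversion is to wrap the model call in a tf.function with a fully static input_signature and convert that concrete function. This is a sketch: a tiny stand-in Keras model is used here, and the fixed shape is an assumption; substitute your own model and its real input dimensions.

```python
import tensorflow as tf

# Stand-in Keras model with a dynamic batch dimension; for the issue's case
# this would be the model from get_repnet_model(PATH_TO_CKPT) instead.
model = tf.keras.Sequential([tf.keras.layers.Dense(4)])
model.build(input_shape=(None, 8))

# Pin every dimension so the converted graph only has static-sized tensors.
@tf.function(input_signature=[tf.TensorSpec([1, 8], tf.float32)])
def fixed_call(x):
    return model(x)

converter = tf.lite.TFLiteConverter.from_concrete_functions(
    [fixed_call.get_concrete_function()])
tflite_model = converter.convert()
```

With every dimension pinned, the resulting .tflite graph should contain only static-sized tensors, which is what the GPU delegate requires.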

The V1 converter API has an easy way to override the input shapes. Please refer to this and take a look at the input_shapes argument.
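With the V1 API that might look like the following sketch. The input name "frames" and the shape are assumptions for illustration; check your SavedModel's actual input signature (e.g. with the saved_model_cli tool) and use those names and dimensions instead.

```python
import tensorflow as tf

# "frames" is a hypothetical input name and the shape is illustrative;
# replace both with the SavedModel's real input signature.
converter = tf.compat.v1.lite.TFLiteConverter.from_saved_model(
    "repnet_savedmodel",
    input_shapes={"frames": [1, 64, 112, 112, 3]})
tflite_model = converter.convert()
```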