tensorflow: TFLite GPU Delegate has problem with MobileNetV2
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
- OS Platform and Distribution: Linux Ubuntu 16.04
- Mobile device: Samsung Galaxy S9 (Android 8.0), Nexus 10 (Android 5.1)
- TensorFlow installed from (source or binary): Binary
- TensorFlow version (use command below): b'v1.12.0-rc2-3-ga6d8ffae09' 1.12.0 (0.0.0-gpu-experimental for mobile device)
- Python version: 3.6
Describe the current behavior
The GPU delegate has a problem with MobileNetV2. When I select GPU from the device list in the tflite demo project (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/java/demo), the app crashes. The only thing I've changed in this project was swapping the MobileNet V1 float model for MobileNet V2. The MobileNetV2 model is taken from https://tfhub.dev/google/imagenet/mobilenet_v2_050_160/classification/2, retrained with the https://github.com/tensorflow/hub/blob/master/examples/image_retraining/retrain.py script, and converted to tflite format using the following command:
tflite_convert \
--output_file=graph.tflite \
--graph_def_file=retrained_graph.pb \
--input_arrays=Placeholder \
--output_arrays=final_result \
--input_shapes=1,160,160,3
All the necessary changes (such as changing the graph name, input size, etc.) have been made in the ImageClassifierFloatMobileNet class.
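For reference, one of those changes is resizing the demo's input ByteBuffer to match the new model's input shape (160x160 for this MobileNetV2 variant instead of V1's 224x224). A minimal sketch of the size calculation; the constant names mirror the demo's ImageClassifier code but the exact values here are assumptions for this model:

```java
// Sketch of the input-buffer sizing the demo's ImageClassifier performs.
// Constant names are illustrative, following the demo's conventions.
public class InputBufferSize {
    static final int DIM_BATCH_SIZE = 1;
    static final int DIM_IMG_SIZE_X = 160;  // MobileNetV2 0.50/160 input, not V1's 224
    static final int DIM_IMG_SIZE_Y = 160;
    static final int DIM_PIXEL_SIZE = 3;    // RGB channels
    static final int BYTES_PER_CHANNEL = 4; // float32 model

    static int bufferBytes() {
        return BYTES_PER_CHANNEL * DIM_BATCH_SIZE
                * DIM_IMG_SIZE_X * DIM_IMG_SIZE_Y * DIM_PIXEL_SIZE;
    }

    public static void main(String[] args) {
        // Allocate the direct native-order buffer the interpreter reads from.
        java.nio.ByteBuffer imgData = java.nio.ByteBuffer
                .allocateDirect(bufferBytes())
                .order(java.nio.ByteOrder.nativeOrder());
        System.out.println(imgData.capacity()); // 307200
    }
}
```

If this size does not match the model's input tensor, the interpreter throws an IllegalArgumentException before the delegate is even involved, so it is worth double-checking when swapping models.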
Logs:
2019-01-23 13:51:12.091 22222-22294/android.example.com.tflitecamerademo E/AndroidRuntime: FATAL EXCEPTION: CameraBackground
Process: android.example.com.tflitecamerademo, PID: 22222
java.lang.IllegalArgumentException: Internal error: Failed to apply delegate: GpuDelegate Prepare: Dimension is empty.Node number 68 (GpuDelegate) failed to prepare.
at org.tensorflow.lite.NativeInterpreterWrapper.applyDelegate(Native Method)
at org.tensorflow.lite.NativeInterpreterWrapper.init(NativeInterpreterWrapper.java:83)
at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:60)
at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)
at com.example.android.tflitecamerademo.ImageClassifier.recreateInterpreter(ImageClassifier.java:168)
at com.example.android.tflitecamerademo.ImageClassifier.useGpu(ImageClassifier.java:176)
at com.example.android.tflitecamerademo.Camera2BasicFragment.lambda$updateActiveModel$0$Camera2BasicFragment(Camera2BasicFragment.java:379)
at com.example.android.tflitecamerademo.Camera2BasicFragment$$Lambda$0.run(Unknown Source:8)
at android.os.Handler.handleCallback(Handler.java:789)
at android.os.Handler.dispatchMessage(Handler.java:98)
at android.os.Looper.loop(Looper.java:164)
at android.os.HandlerThread.run(HandlerThread.java:65)
2019-01-23 13:51:12.102 4411-8924/? E/CameraDeviceClient: Disconnect from CameraDeviceClient
About this issue
- State: closed
- Created 5 years ago
- Comments: 16 (10 by maintainers)
@aselle Thanks! 👍
@ramtin2080
The GPU delegate currently does not support dynamic tensor sizes. Is it possible for you to explicitly specify the tensor dimensions of module_apply_default/hub_input/Mul/y and module_apply_default/hub_input/Sub/y in your TF graph, so that all tensor dimensions are known in advance?
@ramtin2080
I just noticed:
Let me try to take a look at these constants. Maybe we can apply that in the Java layer directly. Stay tuned.
@ramtin2080
Alright. I looked into the constants, and it looks like it's computing 2.0 * X - 1.0, which remaps values from [0.0, 1.0] to [-1.0, 1.0]. So let's do that.
Again, that's the code in ImageClassifierFloatMobileNet.java.
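If that 2.0 * X - 1.0 remapping has to be reproduced on the app side, it can be applied while filling the input buffer. A sketch assuming pixels arrive as packed 8-bit ARGB ints, as in the demo's addPixelValue; the helper class below is hypothetical, not part of the demo:

```java
// Hypothetical helper reproducing the graph's 2.0 * X - 1.0 remapping:
// scale an 8-bit channel to [0, 1], then shift to [-1, 1].
public class PixelNormalizer {
    static float normalize(int channel) {       // channel in 0..255
        return (channel / 255.0f) * 2.0f - 1.0f;
    }

    // Expand one packed ARGB pixel into three normalized floats (R, G, B),
    // mirroring what an addPixelValue-style method would write into imgData.
    static float[] normalizePixel(int argb) {
        return new float[] {
            normalize((argb >> 16) & 0xFF),
            normalize((argb >> 8) & 0xFF),
            normalize(argb & 0xFF),
        };
    }

    public static void main(String[] args) {
        System.out.println(normalize(0));    // -1.0
        System.out.println(normalize(255));  //  1.0
    }
}
```

Folding the normalization into the Java preprocessing this way removes the Mul/Sub constants from the delegated graph, which is the workaround being suggested here.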