mediapipe: TFLite model is slow only on iOS
Please make sure that this is a bug, and refer to the troubleshooting and FAQ documentation before raising any issues.
System information (Please provide as much relevant information as possible):
- Have I written custom code (as opposed to using a stock example script provided in MediaPipe): yes
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04, Android 11, iOS 14.4): iOS 14.6
- Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: iPhone Xs
- Browser and version (e.g. Google Chrome, Safari) if the issue happens on browser: n/a
- Programming Language and version ( e.g. C++, Python, Java): C++
- MediaPipe version: Building from master
- Bazel version (if compiling from source): 4.2.1
- Solution ( e.g. FaceMesh, Pose, Holistic ): n/a
- Android Studio, NDK, SDK versions (if issue is related to building in Android environment): n/a
- Xcode & Tulsi version (if issue is related to building for iOS): 12.5
Describe the current behavior:
I have a .tflite model that I'm trying to run within a MediaPipe graph. When I run the graph on Android, inference runs quickly. When I run the model directly with TensorFlow Lite 2.7 on iOS, inference also runs quickly. However, when I run the graph on iOS, inference runs very slowly (~11 seconds).
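For reference, the standalone TensorFlow Lite timing was measured with something along these lines (a minimal sketch; the run count is arbitrary, input data setup is omitted, and error handling is trimmed):

#include <chrono>
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the same .tflite file that the graph's InferenceCalculator uses.
  auto model = tflite::FlatBufferModel::BuildFromFile("my-model.tflite");
  if (!model) return 1;

  // Build an interpreter with the builtin op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // One warm-up invocation, then average over several timed runs.
  interpreter->Invoke();
  constexpr int kRuns = 10;
  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kRuns; ++i) interpreter->Invoke();
  const auto end = std::chrono::steady_clock::now();
  const double avg_ms =
      std::chrono::duration<double, std::milli>(end - start).count() / kRuns;
  std::printf("average inference time: %.2f ms\n", avg_ms);
  return 0;
}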
Describe the expected behavior:
Inference should be quick on iOS.
Standalone code to reproduce the issue:
Here are the contents of my graph definition:
input_stream: "IMAGE:in_stream"
node: {
calculator: "ImageToTensorCalculator"
input_stream: "IMAGE:in_stream"
output_stream: "TENSORS:input_tensors"
output_stream: "MATRIX:transform_matrix"
options: {
[mediapipe.ImageToTensorCalculatorOptions.ext] {
output_tensor_width: 640
output_tensor_height: 640
keep_aspect_ratio: false
output_tensor_float_range {
min: -1.0
max: 1.0
}
border_mode: BORDER_ZERO
}
}
}
node {
calculator: "InferenceCalculator"
input_stream: "TENSORS:input_tensors"
output_stream: "TENSORS:detection_tensors"
options: {
[mediapipe.InferenceCalculatorOptions.ext] {
model_path: "my-model.tflite"
delegate { tflite {} }
}
}
}
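One thing I still want to try is pinning the delegate explicitly instead of the default TFLite CPU path. Below is a sketch of the variant node, assuming the delegate field names in InferenceCalculatorOptions (the thread count is arbitrary):

node {
  calculator: "InferenceCalculator"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:detection_tensors"
  options: {
    [mediapipe.InferenceCalculatorOptions.ext] {
      model_path: "my-model.tflite"
      # XNNPACK instead of the plain TFLite CPU delegate; on iOS,
      # delegate { gpu {} } (Metal) would be the other variant to compare.
      delegate { xnnpack { num_threads: 4 } }
    }
  }
}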
If inspecting the .tflite model itself would be useful, I can check with my team whether it would be okay to upload it. In the meantime, I used the TFLite model visualizer script to inspect the ops:

Does anything stand out here as an op that would be slow?
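In case it helps narrow this down, I can also collect per-op timings with the TFLite benchmark tool instead of eyeballing the op list (a sketch, assuming the standard benchmark_model flags):

bazel run -c opt //tensorflow/lite/tools/benchmark:benchmark_model -- \
  --graph=my-model.tflite \
  --num_threads=4 \
  --enable_op_profiling=true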
Other info / Complete Logs: Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
About this issue
- Original URL
- State: closed
- Created 3 years ago
- Reactions: 1
- Comments: 18
Hi @PrinceP @hadon @NikolayChirkov. Sorry to ping again; any ideas on your end would be appreciated. I'm happy to try anything you suggest for diagnosing the issue. Thanks!
Hi @PrinceP @hadon @NikolayChirkov, happy new year!
We still haven't been able to figure out this performance problem and are blocked on it. Any help in diagnosing it would be greatly appreciated, thanks! Let me know if you have any trouble with the model I posted above.
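In the meantime, I can try enabling the MediaPipe graph profiler to see per-calculator timings on iOS. A sketch of the addition to the graph config, assuming the standard profiler_config fields (the interval count is arbitrary):

profiler_config {
  trace_enabled: true
  enable_profiler: true
  trace_log_interval_count: 200
}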
@PrinceP I just got everything building with v0.8.9, and unfortunately I’m seeing the same perf issue.
bazel build --copt=-fembed-bitcode --apple_bitcode=embedded --config=ios_arm64
Is the above command used for the iOS build? Also, can you try once more on the new release branch, v0.8.9?