tensorflow: Can't allocate memory for the interpreter in tflite

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Nope, using tensorflow-for-poets
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS High Sierra
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.7.1
  • Python version: Python 2.7.10
  • Bazel version (if compiling from source): N/A
  • GCC/Compiler version (if compiling from source): N/A
  • CUDA/cuDNN version: N/A
  • GPU model and memory: N/A
  • Exact command to reproduce: I converted a custom .pb model to .tflite using TOCO, replaced graph.lite with the new model, and built the app; it crashes at runtime.

Detailed Description

I created a custom TensorFlow model and converted it to a .tflite file with TOCO (as described in the tensorflow-for-poets tutorial). I then replaced the old graph.lite file with my custom model and changed nothing else in the code. When I run the app, I get the following runtime error:

Process: android.example.com.tflitecamerademo, PID: 29160
    java.lang.RuntimeException: Unable to start activity ComponentInfo{android.example.com.tflitecamerademo/com.example.android.tflitecamerademo.CameraActivity}: java.lang.NullPointerException: Can not allocate memory for the interpreter

Fixes Already Tried

  • Ensuring that Android Studio/Gradle pick up the tensorflow-lite:0.1.7 dependency (#19051). Currently my build.gradle file says compile 'org.tensorflow:tensorflow-lite:+'; changing it to compile 'org.tensorflow:tensorflow-lite:0.1.7' doesn’t resolve the issue either.
  • Running TOCO with the --change_concat_input_ranges=false flag.
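For context, a sketch of the full TOCO invocation the two fixes above were tried against. The flag names follow the TF 1.7-era toco CLI, and the file and array names (retrained_graph.pb, input, final_result) are the tensorflow-for-poets defaults, not values confirmed in this thread; treat this as a starting point, not an exact command.

```shell
# Sketch of a TF 1.7-era toco invocation (file names and input/output
# array names assumed from the tensorflow-for-poets tutorial; adjust
# them to match your own model).
toco \
  --input_file=retrained_graph.pb \
  --output_file=graph.lite \
  --input_format=TENSORFLOW_GRAPHDEF \
  --output_format=TFLITE \
  --input_shape=1,224,224,3 \
  --input_array=input \
  --output_array=final_result \
  --inference_type=FLOAT \
  --change_concat_input_ranges=false
```

If the input/output array names don't match your graph, toco will fail at conversion time rather than at app runtime, so errors here are easier to diagnose than the interpreter crash above.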

About this issue

  • Original URL
  • State: closed
  • Created 6 years ago
  • Comments: 29 (11 by maintainers)

Most upvoted comments

same problem here

I encountered a similar problem when trying to convert a .pb to .tflite using the TOCO converter (via bazel). I got rid of the problem by passing the input_shape argument on the command line, e.g. --input_shape=1,224,224,3. Just adding this here for reference.
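Beyond conversion flags, a cheap sanity check can rule out a corrupt or mis-converted model before it ever reaches the interpreter: a valid TFLite FlatBuffer carries the file identifier "TFL3" at bytes 4–7. (graph.lite is the model filename from the demo app; substitute your own path.)

```shell
# A valid .tflite FlatBuffer stores the identifier "TFL3" at bytes 4-7.
# A raw .pb accidentally renamed to .lite will not have it, and the
# interpreter then fails to load the model.
head -c 8 graph.lite | tail -c 4
# a healthy model prints: TFL3
```

If the identifier is missing, re-run the conversion rather than debugging the app side.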

@damhurmuller I switched to the TF nightly build, but I still get the following error:

Caused by: com.google.firebase.ml.common.FirebaseMLException: Internal error has occurred when executing Firebase ML tasks
W/System.err:     at com.google.android.gms.internal.firebase_ml.zzhg.zza(Unknown Source)
W/System.err:     ... 5 more
W/System.err: Caused by: java.lang.NullPointerException: Can not allocate memory for the interpreter
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.createInterpreter(Native Method)
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:63)
W/System.err:     at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:51)
W/System.err:     at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:90)
W/System.err:     at com.google.android.gms.internal.firebase_ml.zzif.zzfm(Unknown Source)
W/System.err:     at com.google.android.gms.internal.firebase_ml.zzhr.zzfp(Unknown Source)
W/System.err:     at com.google.android.gms.internal.firebase_ml.zzhr.call(Unknown Source)
W/System.err:     at com.google.android.gms.internal.firebase_ml.zzhg.zza(Unknown Source)
W/System.err:     ... 5 more

@damhurmuller I’m doing the same thing as you and running into issues. Did you manage to find a solution?