tensorflow: Tensorflow lite NnApiDelegate crashes on Pixel 3a XL

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes (custom Android app)
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Arch Linux
  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: Pixel 3A XL running on Android 10
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 2.1.0
  • Python version: 3.7.0


Describe the current behavior Run TensorFlow Lite with the interpreter options configured to use NnApiDelegate():

            val opts = Interpreter.Options()
            opts.setNumThreads(NUM_LITE_THREADS)
            opts.addDelegate(NnApiDelegate())
            return@use Interpreter(modelBuffer, opts)
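A defensive variant of the setup above (a sketch only: `createInterpreterWithFallback` is a hypothetical helper name, and the thread count and model buffer stand in for `NUM_LITE_THREADS` and `modelBuffer` from the original code) that releases the delegate and falls back to the CPU kernels when NNAPI rejects the model. Note that in this report the `IllegalArgumentException` surfaces during `runForMultipleInputsOutputs()`, not at construction, so the same catch-and-retry may need to wrap the first inference call as well:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Hypothetical helper: try the NNAPI delegate first, fall back to CPU on failure.
fun createInterpreterWithFallback(modelBuffer: MappedByteBuffer, numThreads: Int): Interpreter {
    val delegate = NnApiDelegate()
    return try {
        val opts = Interpreter.Options()
            .setNumThreads(numThreads)
            .addDelegate(delegate)
        Interpreter(modelBuffer, opts)
    } catch (e: IllegalArgumentException) {
        // NNAPI rejected the model (e.g. operands with unspecified shapes):
        // release the delegate's native resources and use the CPU path instead.
        delegate.close()
        Interpreter(modelBuffer, Interpreter.Options().setNumThreads(numThreads))
    }
}
```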

Describe the expected behavior The code should run without crashing; the same code runs fine on the emulator.

Code to reproduce the issue Adding NnApiDelegate makes the code crash on a Pixel 3a XL.

Other info / logs

2020-02-10 21:38:54.173 18949-18999/org.liberty.android.nlplib_demo E/ExecutionBuilder: ANeuralNetworksExecution_setInputFromMemory: Setting with operand type that is not fully specified
    
    --------- beginning of crash
2020-02-10 21:38:54.182 18949-18949/org.liberty.android.nlplib_demo E/AndroidRuntime: FATAL EXCEPTION: main
    Process: org.liberty.android.nlplib_demo, PID: 18949
    java.lang.IllegalArgumentException: Internal error: Failed to run on the given Interpreter: NN API returned error ANEURALNETWORKS_BAD_DATA at line 3126 while associating NNAPI execution input with a memory object.
    
    Node number 2428 (TfLiteNnapiDelegate) failed to invoke.
    
        at org.tensorflow.lite.NativeInterpreterWrapper.run(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.run(NativeInterpreterWrapper.java:154)
        at org.tensorflow.lite.Interpreter.runForMultipleInputsOutputs(Interpreter.java:311)
        at org.liberty.android.nlplib.gpt2.GPT2Client$generate$1.invokeSuspend(GPT2Client.kt:70)
        at org.liberty.android.nlplib.gpt2.GPT2Client$generate$1.invoke(Unknown Source:10)
        at kotlinx.coroutines.flow.SafeFlow.collect(Builders.kt:53)
        at org.liberty.android.nlplib_demo.MainActivity$onCreate$4$1$1.invokeSuspend(MainActivity.kt:65)
        at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
        at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 20 (6 by maintainers)

Most upvoted comments

Yes, the patch works and prevents the crash, but performance regressed by more than 50%.

Thanks a lot @freedomtan .

Also, we are working on limiting the number of partitions when using the NNAPI delegate, as partitioning a graph into many small subgraphs is never a good recipe for performance.

Also, we are considering supporting SQUARED_DIFFERENCE by decomposing it into ops already supported by NNAPI, e.g. MUL and SUB. Hopefully that will also help with models like this one.
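The decomposition the maintainers describe is just elementwise (a − b) × (a − b), i.e. a SUB followed by a MUL. A plain-Kotlin illustration of the arithmetic (not TFLite code; the function name is mine):

```kotlin
// Elementwise SQUARED_DIFFERENCE expressed as SUB followed by MUL,
// mirroring the op decomposition described above.
fun squaredDifference(a: FloatArray, b: FloatArray): FloatArray {
    require(a.size == b.size) { "inputs must have the same shape" }
    val diff = FloatArray(a.size) { i -> a[i] - b[i] }      // SUB
    return FloatArray(a.size) { i -> diff[i] * diff[i] }    // MUL
}

// squaredDifference(floatArrayOf(3f, 1f), floatArrayOf(1f, 4f))
// → [4.0, 9.0]
```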