tensorflow: TensorFlow Lite error on iOS: "Make sure you apply/link the Flex delegate before inference."

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): macOS 10.15, iPhone Simulator 12

  • Mobile device (e.g. iPhone 8, Pixel 2, Samsung Galaxy) if the issue happens on mobile device: iPhone Simulator 12

  • TensorFlow installed from (source or binary): installed via CocoaPods using the following Podfile entries:

    pod 'TensorFlowLiteSwift', '0.0.1-nightly.20200916', :subspecs => ['CoreML', 'Metal'] 
    pod 'TensorFlowLiteSelectTfOps', '0.0.1-nightly.20200916'
    

Describe the current behavior

When I load my model using TensorFlow Lite and attempt to invoke it (interpreter.invoke()), I get the following error:

2020-11-18 16:08:54.145490-0800 OpenRDTCV[13328:3534503] Initialized TensorFlow Lite runtime.
2020-11-18 16:08:56.458094-0800 OpenRDTCV[13328:3534503] Regular TensorFlow ops are not supported by this interpreter. Make sure you apply/link the Flex delegate before inference.
2020-11-18 16:08:56.458232-0800 OpenRDTCV[13328:3534503] Node number 312 (FlexSize) failed to prepare.
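
As I understand it, with the Swift pods there is no explicit Flex delegate object to apply; force-loading TensorFlowLiteSelectTfOps is what registers the Select TF ops. For reference, here is a minimal sketch of my interpreter setup (the model name is illustrative):

    import TensorFlowLite

    // With TensorFlowLiteSelectTfOps force-loaded, the Select TF (Flex) ops
    // are expected to register themselves, so no delegate is passed here.
    guard let modelPath = Bundle.main.path(forResource: "model", ofType: "tflite") else {
      fatalError("Model not found in bundle")
    }
    do {
      let interpreter = try Interpreter(modelPath: modelPath)
      try interpreter.allocateTensors()
    } catch {
      print("Failed to create the interpreter: \(error)")
    }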

I have read and followed the Select TensorFlow operators guide and added the -force_load linker flag (see screenshot below).

[Screenshot, 2020-11-18 4:19 PM: Xcode build settings showing the -force_load flag under Other Linker Flags]
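
For reference, this is the flag from that guide, set in Build Settings > Other Linker Flags. The framework path below assumes the default CocoaPods layout and may differ per project:

    -force_load $(SRCROOT)/Pods/TensorFlowLiteSelectTfOps/Frameworks/TensorFlowLiteSelectTfOps.framework/TensorFlowLiteSelectTfOps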

I have also tried a variety of TensorFlowLiteSwift/TensorFlowLiteSelectTfOps versions, but the error persists. Any help is appreciated!

Here is the relevant Swift source up until the failing invoke() line:

    guard let pixelBuffer = CVPixelBuffer.buffer(from: image) else {
      print("Could not convert image to CV buffer")
      return []
    }
    
    let sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
    // The pipeline expects a 32-bit, 4-channel pixel format.
    assert(sourcePixelFormat == kCVPixelFormatType_32ARGB ||
           sourcePixelFormat == kCVPixelFormatType_32BGRA ||
           sourcePixelFormat == kCVPixelFormatType_32RGBA)

    // The source buffer has 4 channels (alpha included); at least RGB is required.
    let imageChannels = 4
    assert(imageChannels >= 3)

    let scaledSize = CGSize(width: self.inputSize.width, height: self.inputSize.height)
    guard let thumbnailPixelBuffer = pixelBuffer.centerThumbnail(ofSize: scaledSize) else {
      print("Error: could not crop image")
      return []
    }

    let interval: TimeInterval
    let outputTensor: Tensor
    do {
      let inputTensor = try interpreter.input(at: 0)

      // Remove the alpha component from the image buffer to get the RGB data.
      guard let rgbData = rgbDataFromBuffer(
        thumbnailPixelBuffer,
        isModelQuantized: isModelQuantized
      ) else {
        print("Failed to convert the image buffer to RGB data.")
        return []
      }

      // Copy the RGB data to the input `Tensor`.
      try interpreter.copy(rgbData, toInputAt: 0)

      // Run inference by invoking the `Interpreter`.
      let startDate = Date()
      try interpreter.invoke() // Fails here with the Flex delegate error above.

cc @jdduke, who I believe wrote this commit, which seems relevant.

About this issue

  • State: closed
  • Created 4 years ago
  • Comments: 20 (5 by maintainers)

Most upvoted comments

Thanks!

I was able to get unblocked by running things on an iOS device.

I tried building the framework myself but ran into other issues; I think those are separate problems, though, and we can close this issue. An update to the documentation would be great! Thanks again!
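
(For anyone else trying to build the Select TF Ops framework themselves: the command from the TFLite iOS build guide looked roughly like the following at the time; target names may have changed since.)

    bazel build -c opt --config=ios --ios_multi_cpus=armv7,arm64 \
      //tensorflow/lite/ios:TensorFlowLiteSelectTfOps_framework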

Possibly related: https://github.com/tensorflow/tensorflow/issues/44879. Which exact versions of the TFLite and Select TF Ops frameworks are you currently using?

cc: @thaink