react-native-vision-camera: šŸ› iOS barcode scanning is very slow (unusable in some cases)

What’s happening?

On iOS, the native APIs perform very poorly (on all devices I have tested). The time to scan a regular barcode is slow and inaccurate compared to Android, which uses MLKit. QR scanning on iOS is fine, but the other barcode types I have tested (1D symbologies with vertical lines, i.e. UPC/Code 128/etc.) are almost unusable: what takes Android 2 seconds to scan/verify can sometimes take iOS almost 10 times as long.

Some additional context: when I say scan/ā€œverifyā€, we wait to see the same code 4 times before counting it as a successful scan.

The framerate on iOS is good, but the time to scan is not adequate.
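The ā€œseen 4 timesā€ verification described above could be sketched like this (a hypothetical helper in TypeScript; `CodeVerifier` and `requiredSightings` are illustrative names, not part of any library):

```typescript
// Hypothetical sketch of the "seen 4 times" verification described above.
// Names (CodeVerifier, requiredSightings) are illustrative, not library APIs.
class CodeVerifier {
  private counts = new Map<string, number>();

  constructor(private requiredSightings: number = 4) {}

  // Returns true once the same code value has been seen the required number of times.
  record(value: string): boolean {
    const next = (this.counts.get(value) ?? 0) + 1;
    this.counts.set(value, next);
    return next >= this.requiredSightings;
  }

  reset(): void {
    this.counts.clear();
  }
}

// Usage inside onCodeScanned:
// const verifier = new CodeVerifier(4);
// onCodeScanned: (codes) => {
//   for (const code of codes) {
//     if (code.value && verifier.record(code.value)) {
//       // treat as a successful scan
//     }
//   }
// }
```

The slower each individual detection is, the longer it takes to accumulate 4 sightings, which is why the iOS slowness compounds here.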

We were previously using a fork of this library https://github.com/rodgomesc/vision-camera-code-scanner which also uses MLKit on iOS and it was significantly better and matched the Android implementation/performance.

Could iOS be changed to use MLKit instead of the native API’s?

Reproducible Code

const codeScanner = useCodeScanner({
  codeTypes: [
    'code-128',
    'code-39',
    'code-93',
    'codabar',
    'ean-13',
    'ean-8',
    'itf',
    'upc-e',
    'qr',
    'pdf-417',
    'aztec',
    'data-matrix',
  ],
  onCodeScanned: (codes) => {
    // ...
  },
});
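As a general tip (not a fix for the iOS slowness itself): narrowing `codeTypes` to only the symbologies you actually expect generally gives the detector less work per frame. A hedged sketch, with an illustrative reduced list:

```typescript
// General tip (not a fix for the iOS slowness itself): scanning only the
// symbologies you actually expect generally means less detection work per
// frame. This list is illustrative; keep whichever types your app needs.
const narrowedCodeTypes: string[] = ['ean-13', 'ean-8', 'upc-e', 'code-128'];

// const codeScanner = useCodeScanner({
//   codeTypes: narrowedCodeTypes,
//   onCodeScanned: (codes) => { /* ... */ },
// });
```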

Relevant log output

NA

Camera Device

NA

Device

iPhone 13, iPad Mini

VisionCamera Version

3.5.1

Can you reproduce this issue in the VisionCamera Example app?

Yes, I can reproduce the same issue in the Example app here

Additional information

About this issue

  • Original URL
  • State: closed
  • Created 8 months ago
  • Reactions: 3
  • Comments: 24 (7 by maintainers)

Most upvoted comments

@lc-mm to be honest - no.

As mentioned previously, the Code Scanner API is implemented with an AVCaptureMetadataOutput channel on iOS. This API is really straightforward; this is all the code needed to scan QR codes on iOS: https://github.com/mrousavy/react-native-vision-camera/blob/8f986a45ea0aa3398e45a806a264fbcc03278971/package/ios/Core/CameraSession%2BConfiguration.swift#L130-L149

If I wanted to use MLKit for this, I would have to:

  1. Add a Video Channel output: https://github.com/mrousavy/react-native-vision-camera/blob/8f986a45ea0aa3398e45a806a264fbcc03278971/package/ios/Core/CameraSession%2BConfiguration.swift#L107-L120
  2. Since the Video Channel now has a third consumer (video + frameProcessor + codeScanner), I need to smartly decide which resolution to pick, and whether a video channel is needed at all.
  3. I need to add the MLKit dependency (I think that’s 2.4 MB added to your app/VisionCamera, whether you use QR codes or not!)
  4. I need to initialize the MLKit Barcode Detector:
    let format = BarcodeFormat.all
    let barcodeOptions = BarcodeScannerOptions(formats: format)
    let barcodeScanner = BarcodeScanner.barcodeScanner(options: barcodeOptions)
    
  5. I need to call it in the Video Capture Output delegate, if a code scanner is added:
    barcodeScanner.process(visionImage) { barcodes, error in
      guard error == nil, let barcodes = barcodes, !barcodes.isEmpty else {
        // Error handling
        return
      }
      // Recognized barcodes
    }
    
  6. Finally, I need to invoke the JS callback from there with the results.
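The ā€œsmartly decide which resolutionā€ problem from step 2 could look roughly like this. This is a hypothetical helper in TypeScript purely to illustrate the decision (the library itself is Swift), and the 720p target for scanner-only use is an assumption:

```typescript
interface Size { width: number; height: number; }

// Hypothetical: pick the video-channel resolution given which consumers need it.
// Assumption: code scanning alone is fine with ~720p frames, while recording
// and frame processing should get the format's full resolution.
function pickVideoResolution(opts: {
  video: boolean;          // video recording requested
  frameProcessor: boolean; // frame processor attached
  codeScanner: boolean;    // MLKit-based scanner attached
  formatResolution: Size;  // resolution of the selected format
}): Size | null {
  const { video, frameProcessor, codeScanner, formatResolution } = opts;
  if (!video && !frameProcessor && !codeScanner) {
    return null; // no consumers -> no video channel needed at all
  }
  if (!video && !frameProcessor) {
    // Only the scanner needs frames: downscale to keep MLKit fast.
    const target: Size = { width: 1280, height: 720 };
    return formatResolution.width < target.width ? formatResolution : target;
  }
  // Recording / frame processing wins: everything shares the format's full
  // resolution, since it's all one queue.
  return formatResolution;
}
```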

Here’s the downsides of the video channel approach compared to the AVCaptureMetadataOutput approach:

  1. The separation of concerns is lost - it’s all nested inside the video channel.
  2. The video channel runs with high-resolution frames:
    • If you don’t use a format, it defaults to the highest resolution (4k frames), which makes MLKit code scanning very slow.
    • If you use a format, the MLKit Code Scanner uses the format’s width and height. This means you can’t really combine high-quality video recording/frame processing with code scanning - you have to decide between code scanning (low-resolution video) or video recordings (very slow code scanning), since again, it’s all in one queue.
  3. We have an added dependency in Pods.
  4. It might go out of sync, and we’d need to keep upgrading it.
  5. It might have additional security vulnerabilities.
  6. It increases the app size by 2.4 MB - whether you use QR codes or not!
  7. We all know that Google Pods (e.g. Firebase) sometimes just won’t build. That sucks, especially if you’re not even using the QR code scanner.
  8. The code gets more complex - it’s all in the video channel instead of simply separated into its own AVCaptureMetadataOutput channel.
  9. We cannot benefit from platform-level optimizations. I’m not sure exactly which optimizations the native iOS platform applies, but I believe AVCaptureMetadataOutput is very battery/energy efficient - it might use pure GPU algorithms, benefit from raw YUV buffers, etc. In theory it should be fast and lightweight. (Yes, I know it is apparently not fast for barcodes - but that is a bug in Apple’s code which will hopefully be fixed.)
  10. With AVCaptureMetadataOutput, the video output (video + frameProcessor) is separated from the code scanner (codeScanner), so it can detect barcodes without a hiccup while you’re recording a video or processing Frames - it runs on a separate output channel. That’s simply not possible with the video approach.

So, in short: yes, I know it absolutely sucks that the code scanner is only fast for QR codes on iOS. This is most definitely a bug, since I don’t think I’m doing anything terribly wrong here. I can try to spend some more time investigating this in a native app (no React Native), thanks to @ticketscloud for sponsoring me/this! ā¤ļø

I just tested @mgcrea/vision-camera-barcode-scanner and can confirm that its Vision-API-based implementation is much better at detecting barcodes than the AVFoundation implementation.

Is using either the Vision APIs or MLKit on iOS an option? In its current state, the iOS implementation feels as if it’s only good for QR codes.

@mrousavy when I started this thread I said it was an Apple issue and presented alternatives. Is this something you would ever change so that this feature’s performance is acceptable for barcodes on iOS? Currently it is not, for anything other than QR codes.

For future visitors of this issue: I had the same problem with very slow barcode scanning on iOS with vision-camera v3. Yesterday I upgraded to v4.0.1 and that solved the issue for me - scanning is now really fast. Thank you @mrousavy, great job!

@ecaii We had this issue as well, and we’ve decided to use a frame-processor plugin for the iOS part: https://github.com/mgcrea/vision-camera-barcode-scanner - it is still faster & more reliable.

An update - I’m seeing that barcodes which fail to be recognized when scanned horizontally are consistently recognized when the barcode is turned vertically.

The phone orientation is vertical during all of this testing, for what it’s worth.

I have the same experience. When I use vision-camera v2 in our app to scan a barcode, it takes less than 1/100 of a second. But with vision-camera v3 (3.6.4) on the same device I have to rotate the phone so the barcode fits vertically. (I guess that makes the barcode image larger.)

vision-camera-v2:

https://github.com/mrousavy/react-native-vision-camera/assets/46003022/683b1884-1d1a-42bf-8b10-d33b92632823

vision-camera-v3:

https://github.com/mrousavy/react-native-vision-camera/assets/46003022/592fd717-02d8-425e-9dba-b2755a19d8e8

@mrousavy

Just wanted to chime in here and say that I too am seeing this issue. Non-QR barcodes take a considerable amount of time to be recognized (in the neighborhood of 8-10 seconds). I’ve tinkered with enableBufferCompression and format as @mrousavy recommended, but with no luck thus far.

And to add something to the discussion, rather than just throwing in a useless ā€œ+1ā€:

Did you try to run this on the latest iOS 17 beta, or on iOS 16 or older?

I’ve tried iOS 16.4, iOS 17.0, and iOS 17.1 (released today).
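For anyone else tinkering with format as suggested above: a hedged sketch of forcing a lower-resolution format so per-frame detection has fewer pixels to process. This assumes VisionCamera v3’s `useCameraFormat` filter API - verify the exact filter shape against the docs for your version:

```typescript
// Hypothetical target: a lower-resolution video format so per-frame
// detection has fewer pixels to chew through. Verify the filter shape
// against the VisionCamera docs for your version.
const targetResolution = { width: 1280, height: 720 };

// const device = useCameraDevice('back');
// const format = useCameraFormat(device, [
//   { videoResolution: targetResolution },
// ]);
// <Camera device={device} format={format} enableBufferCompression={true} ... />
```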

MLKit on iOS is much faster than iOS’s built-in scanning.

I noticed this too - after some scans it seems to need more and more time to recognize a code. On Android it works fine. As a temporary iOS fix I use the external barcode scanner ā€œ@mgcrea/vision-camera-barcode-scannerā€, but it only works with react-native-vision-camera 3.4.1.