Building a QR code scanner for Android using Firebase ML Kit and CameraX
In this tutorial, you will learn how to create a QR code scanner using Firebase ML Kit and Jetpack CameraX.
Introduction
What is CameraX?
CameraX is a Jetpack support library which was announced at Google I/O 2019. The main goal of the library is to make camera app development easier by providing a consistent and easy-to-use API. You can read more about CameraX here.
What is Firebase ML Kit?
Firebase ML Kit is a mobile SDK for Android and iOS which was announced at Google I/O 2018. ML Kit comes with common use cases for Vision (text recognition, face detection, barcode scanning, image labelling, object detection & tracking, landmark recognition) and Natural Language (identifying the language of text, translating text, generating smart replies). You can read more about Firebase ML Kit here.
Setting up the project
- Create a new project in Android Studio from File ⇒ New Project and select Empty Activity from the templates. I have given my package name as com.natigbabayev.qrscanner.
- Open app/build.gradle and add Firebase ML Vision and Jetpack CameraX dependencies:
dependencies {
    // ...

    // Make sure you have the correct version of the appcompat library
    implementation 'androidx.appcompat:appcompat:1.1.0-rc01'

    // Firebase ML Kit dependencies
    implementation 'com.google.firebase:firebase-ml-vision:21.0.0'

    // CameraX
    def camerax_version = "1.0.0-alpha03"
    implementation "androidx.camera:camera-core:${camerax_version}"
    implementation "androidx.camera:camera-camera2:${camerax_version}"
}
- Open your AndroidManifest.xml file to add required permissions:
<?xml version="1.0" encoding="utf-8"?>
<manifest ...>

    <uses-permission android:name="android.permission.CAMERA" />

    ...
</manifest>
- Add the following code to your AndroidManifest.xml file to configure your app to automatically download the ML model to the device after your app is installed from the Play Store:
<application ...>
    ...
    <meta-data
        android:name="com.google.firebase.ml.vision.DEPENDENCIES"
        android:value="barcode" />
</application>
- Open the layout file of the main activity (activity_main.xml) and add a TextureView. We will use it to display the camera stream:
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <TextureView
        android:id="@+id/texture_view"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintEnd_toEndOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintTop_toTopOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
- As the last step of our project setup, we need to check whether the user has granted the camera permission. For this, go to the MainActivity.kt file and add the following code:
class MainActivity : AppCompatActivity() {

    companion object {
        private const val REQUEST_CAMERA_PERMISSION = 10
    }

    private lateinit var textureView: TextureView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        textureView = findViewById(R.id.texture_view)

        // Request camera permissions
        if (isCameraPermissionGranted()) {
            textureView.post { startCamera() }
        } else {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), REQUEST_CAMERA_PERMISSION
            )
        }
    }

    private fun startCamera() {
        // We will implement this in next steps.
    }

    private fun isCameraPermissionGranted(): Boolean {
        val selfPermission = ContextCompat.checkSelfPermission(baseContext, Manifest.permission.CAMERA)
        return selfPermission == PackageManager.PERMISSION_GRANTED
    }

    override fun onRequestPermissionsResult(requestCode: Int, permissions: Array<String>, grantResults: IntArray) {
        if (requestCode == REQUEST_CAMERA_PERMISSION) {
            if (isCameraPermissionGranted()) {
                textureView.post { startCamera() }
            } else {
                Toast.makeText(this, "Camera permission is required.", Toast.LENGTH_SHORT).show()
                finish()
            }
        }
    }
}
Showing camera input on the screen
CameraX has an abstraction called a use case which lets you interact with the camera of the device. Currently, the following use cases are available:
- Preview: allows you to access a stream of camera input which you can use to display the camera stream in a TextureView.
- Image analysis: allows you to analyze each frame of the camera input. We will use this use case to detect QR codes in frames with Firebase ML Kit.
- Image capture: as the name indicates, allows you to capture and save a photo.
As mentioned above, to show the camera stream on the screen, we need to use the Preview use case. When we create an instance of the Preview use case, we need to pass a PreviewConfig as a constructor parameter. So, let’s add the following code to our startCamera() function:
val previewConfig = PreviewConfig.Builder()
    // We want to show input from the back camera of the device
    .setLensFacing(CameraX.LensFacing.BACK)
    .build()

val preview = Preview(previewConfig)
The Preview use case provides a SurfaceTexture for display. To show the camera stream in our textureView, we need to add a listener to the preview instance using the setOnPreviewOutputUpdateListener() method:
// ...
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    textureView.surfaceTexture = previewOutput.surfaceTexture
}
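Note: with the alpha versions of CameraX, assigning a new SurfaceTexture to a TextureView that is already attached does not always take effect. The official CameraX codelab works around this by detaching and re-attaching the view first. Here is a minimal sketch of that workaround, assuming our texture_view is a direct child of a ViewGroup (as it is in the layout above):
preview.setOnPreviewOutputUpdateListener { previewOutput ->
    // Workaround: re-attach the TextureView so the new SurfaceTexture is picked up
    val parent = textureView.parent as ViewGroup
    parent.removeView(textureView)
    parent.addView(textureView, 0)
    textureView.surfaceTexture = previewOutput.surfaceTexture
}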
As CameraX observes a lifecycle to manage camera resources, we need to bind our use case using CameraX.bindToLifecycle(this as LifecycleOwner, preview). Here is how the startCamera() function looks in MainActivity:
private fun startCamera() {
    val previewConfig = PreviewConfig.Builder()
        // We want to show input from the back camera of the device
        .setLensFacing(CameraX.LensFacing.BACK)
        .build()

    val preview = Preview(previewConfig)

    preview.setOnPreviewOutputUpdateListener { previewOutput ->
        textureView.surfaceTexture = previewOutput.surfaceTexture
    }

    CameraX.bindToLifecycle(this as LifecycleOwner, preview)
}
Detecting QR code
Now we need to detect QR codes from the camera input using the ImageAnalysis use case. For this, we need to create a class named QrCodeAnalyzer which implements the ImageAnalysis.Analyzer interface. ImageAnalysis.Analyzer has a function called analyze(ImageProxy image, int rotationDegrees), and this is where we will add the QR code detection code.
Let’s start implementing QrCodeAnalyzer:
- Create QrCodeAnalyzer and add a callback to get notified when QR codes are detected:

class QrCodeAnalyzer(
    private val onQrCodesDetected: (qrCodes: List<FirebaseVisionBarcode>) -> Unit
) : ImageAnalysis.Analyzer {

    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        // ...
    }
}
- Get an instance of FirebaseVisionBarcodeDetector:

val options = FirebaseVisionBarcodeDetectorOptions.Builder()
    // We want to only detect QR codes.
    .setBarcodeFormats(FirebaseVisionBarcode.FORMAT_QR_CODE)
    .build()

val detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options)
- Create a FirebaseVisionImage from the frame:

val rotation = rotationDegreesToFirebaseRotation(rotationDegrees)
val visionImage = FirebaseVisionImage.fromMediaImage(image.image!!, rotation)
In this step we also need to convert ImageAnalysis.Analyzer’s rotation degrees to Firebase’s rotation by adding the following function:
private fun rotationDegreesToFirebaseRotation(rotationDegrees: Int): Int {
    return when (rotationDegrees) {
        0 -> FirebaseVisionImageMetadata.ROTATION_0
        90 -> FirebaseVisionImageMetadata.ROTATION_90
        180 -> FirebaseVisionImageMetadata.ROTATION_180
        270 -> FirebaseVisionImageMetadata.ROTATION_270
        else -> throw IllegalArgumentException("Not supported")
    }
}
- Pass visionImage to detector and notify onQrCodesDetected with the list of detected QR codes:

detector.detectInImage(visionImage)
    .addOnSuccessListener { barcodes ->
        onQrCodesDetected(barcodes)
    }
    .addOnFailureListener {
        Log.e("QrCodeAnalyzer", "something went wrong", it)
    }
- Use QrCodeAnalyzer in the startCamera() function of MainActivity:

private fun startCamera() {
    // ...

    val imageAnalysisConfig = ImageAnalysisConfig.Builder()
        .build()
    val imageAnalysis = ImageAnalysis(imageAnalysisConfig)

    val qrCodeAnalyzer = QrCodeAnalyzer { qrCodes ->
        qrCodes.forEach {
            Log.d("MainActivity", "QR Code detected: ${it.rawValue}.")
        }
    }

    imageAnalysis.analyzer = qrCodeAnalyzer

    // We need to bind preview and imageAnalysis use cases
    CameraX.bindToLifecycle(this as LifecycleOwner, preview, imageAnalysis)
}
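Tip: camera frames can arrive faster than the detector processes them. In the alpha releases of CameraX used here, ImageAnalysisConfig.Builder exposed an image reader mode to control this (later releases replaced it with a backpressure strategy, so treat the exact API as version-dependent). A sketch of configuring the analyzer to always work on the most recent frame:

val imageAnalysisConfig = ImageAnalysisConfig.Builder()
    // Skip queued frames and analyze only the latest available image
    // (API from the CameraX alpha releases used in this tutorial)
    .setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    .build()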
Here is how the QrCodeAnalyzer class should look when you follow the steps mentioned above, assembled from the snippets in this section (imports included):
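import android.util.Log
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcode
import com.google.firebase.ml.vision.barcode.FirebaseVisionBarcodeDetectorOptions
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.common.FirebaseVisionImageMetadata

class QrCodeAnalyzer(
    private val onQrCodesDetected: (qrCodes: List<FirebaseVisionBarcode>) -> Unit
) : ImageAnalysis.Analyzer {

    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        // Configure the detector to look for QR codes only
        val options = FirebaseVisionBarcodeDetectorOptions.Builder()
            .setBarcodeFormats(FirebaseVisionBarcode.FORMAT_QR_CODE)
            .build()
        val detector = FirebaseVision.getInstance().getVisionBarcodeDetector(options)

        // Wrap the camera frame in a FirebaseVisionImage with the correct rotation
        val rotation = rotationDegreesToFirebaseRotation(rotationDegrees)
        val visionImage = FirebaseVisionImage.fromMediaImage(image.image!!, rotation)

        // Run detection and notify the callback with any QR codes found
        detector.detectInImage(visionImage)
            .addOnSuccessListener { barcodes ->
                onQrCodesDetected(barcodes)
            }
            .addOnFailureListener {
                Log.e("QrCodeAnalyzer", "something went wrong", it)
            }
    }

    private fun rotationDegreesToFirebaseRotation(rotationDegrees: Int): Int {
        return when (rotationDegrees) {
            0 -> FirebaseVisionImageMetadata.ROTATION_0
            90 -> FirebaseVisionImageMetadata.ROTATION_90
            180 -> FirebaseVisionImageMetadata.ROTATION_180
            270 -> FirebaseVisionImageMetadata.ROTATION_270
            else -> throw IllegalArgumentException("Not supported")
        }
    }
}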
Now you can run the project, and you should see QR Code detected: ... in Logcat when a QR code is detected.
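If you want to surface the result in the UI instead of only logging it, one possible (purely illustrative) tweak to the callback in startCamera() is to show the decoded value in a Toast. ML Kit delivers success callbacks on the main thread by default, so it is safe to touch the UI here:

val qrCodeAnalyzer = QrCodeAnalyzer { qrCodes ->
    // Illustrative handling: show the first decoded value on screen
    qrCodes.firstOrNull()?.rawValue?.let { value ->
        Toast.makeText(this, "QR Code: $value", Toast.LENGTH_SHORT).show()
    }
}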
Final words
In this tutorial, we learned how to create a QR code scanner using Firebase ML Kit and Jetpack CameraX.
You can find the final code for this tutorial on GitHub.
I hope that you’ve enjoyed this tutorial. If you have any questions or comments, you can ask here.