Build a Cat-or-Dog Classification Flutter App with TensorFlow Lite

Using a pre-trained classification TensorFlow Lite model to build an ML-powered Flutter app

Object detection, image classification, gesture recognition—these computer vision tasks are all hot topics in today’s machine learning landscape. There are many applications today that leverage these technologies to provide efficient and optimized solutions. And increasingly, these technologies are finding their way into mobile applications.

This tutorial aims to deliver one such demonstrative application, using the TensorFlow machine learning library in a Flutter project to perform binary image classification—cats vs dogs, a fundamental use case.

To do this, we’ll need a pre-trained classification model intended for mobile use. If you’d prefer, you can also train your own model using Teachable Machine, a no-code model building service offered by TensorFlow.

Using the TensorFlow Lite library will help us load as well as apply the model for image classification on mobile. Image classification is a computer vision task that works to identify and categorize various elements of images and/or videos. Image classification models are trained to take an image as input and output one or more labels describing the image.

The main idea is to use the TensorFlow Lite plugin to classify an image of an animal as either a dog or a cat. Along the way, we will also make use of the Image Picker library to fetch images from the device gallery or storage. The main process will be to load our pre-trained cat/dog model using the TensorFlow Lite library and classify the test animal image based on it.

So, let’s get started!

Create a new Flutter project

First, we need to create a new Flutter project. For that, make sure that the Flutter SDK and other Flutter app development-related requirements are properly installed. If everything is properly set up, then in order to create a project, we can simply run the following command in the desired local directory:
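A minimal example, assuming we name the project cat_dog_classifier (any valid project name works):

```shell
flutter create cat_dog_classifier
```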

After the project has been set up, we can navigate inside the project directory and execute the following command in the terminal to run the project in either an available emulator or an actual device:
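For example, assuming the project name from the previous step:

```shell
cd cat_dog_classifier
flutter run
```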

After successfully building, we will get the following result in the emulator screen:

Creating an Image View on Screen

Here, we are going to implement the UI to fetch an image from the device library and display it on the app screen. For fetching the image from the gallery, we’re going to make use of the Image Picker library. This library offers modules to fetch image and video sources from the device camera, gallery, etc.

First, we need to install the image_picker library. For that, we need to copy the text provided in the following code snippet and paste it into the pubspec.yaml file of our project:
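The dependency entry looks like the following; the version shown is illustrative, so check pub.dev for the latest compatible release:

```yaml
dependencies:
  flutter:
    sdk: flutter
  image_picker: ^0.6.7  # illustrative version; use the latest compatible release
```

After saving the file, run `flutter pub get` to fetch the package.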

Now, we need to import the necessary packages in the main.dart file of our project:
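We need dart:io for the File type, Material for the UI, and the image picker package:

```dart
import 'dart:io';

import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
```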

In the main.dart file, we will have the MyHomePage stateful widget class. In its State class, we need to declare a variable to store the image file once fetched. Here, we are going to do that with a File-type variable named _imageFile:
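A sketch of the declaration inside the State class:

```dart
class _MyHomePageState extends State<MyHomePage> {
  // Holds the image picked from the gallery; null until one is selected.
  File _imageFile;

  // ... build method and other members follow
}
```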

Now, we are going to implement the UI, which will enable users to pick and display an image. The UI will have an image view section and a button that allows users to pick the image from the gallery. The overall UI template is provided in the code snippet below:
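A minimal sketch of this UI, assuming a placeholder image bundled at assets/placeholder.png (the asset name is an assumption for illustration):

```dart
@override
Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(title: Text('Cat or Dog')),
    body: Column(
      children: <Widget>[
        // Card-style container for the image display.
        Card(
          margin: EdgeInsets.all(16),
          child: Container(
            height: 250,
            width: double.infinity,
            // Conditional rendering: placeholder until an image is picked.
            child: _imageFile == null
                ? Image.asset('assets/placeholder.png')
                : Image.file(_imageFile, fit: BoxFit.cover),
          ),
        ),
        // Button to open the gallery (wired up in a later step).
        RaisedButton(
          onPressed: () {},
          child: Text('Select image'),
        ),
      ],
    ),
  );
}
```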

Here, we’ve used a Container widget with a card-like style for the image display. We have used conditional rendering to display a placeholder image until the actual image is selected and loaded to the display. We’ve also used a RaisedButton widget to render a button just below the image view section.

Hence, we should get the result as shown in the emulator screenshot below:

Function to fetch and display the image

Next, we’re going to implement a function that enables users to open the gallery, select an image, and then show the image in the image view section. The overall implementation of the function is provided in the code snippet below:

Here, we have initialized an ImagePicker instance and used its getImage method to fetch an image from the gallery. Then, we set the _imageFile state to the fetched image file using the setState method. This causes the main build method to re-render and show the image on the screen.

Next, we need to call the selectImage function in the onPressed property of the RaisedButton widget, as shown in the code snippet below:
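The button from the UI template now triggers the picker:

```dart
RaisedButton(
  onPressed: () => selectImage(),
  child: Text('Select image'),
),
```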

Hence, we will get the result as shown in the emulator screenshot below:

As we can see, as soon as we select the image from the gallery, the selected image is shown on the screen instead of the placeholder image.

Performing Image Classification with TensorFlow Lite

Now, it’s time to configure our cat and dog image classification pipeline. Remember, our goal is to classify a given image of an animal as either a cat or a dog. For that, we are going to use a model trained using TensorFlow’s Teachable Machine.

If you’d like, you can also try training your own model with Teachable Machine. The model we’re using for this tutorial was trained on images of cats and dogs of various breeds, along with their label tags.

Once downloaded, we will get two files:

  • catdog_model.tflite
  • cat_dog_labels.txt

The labels file here distinguishes only between a cat and a dog, which keeps testing quick. You can add more labels to identify specific breeds, other animals, etc.

We need to move the two files provided to the ./assets folder in the main project directory.

Then, we need to enable the access to assets files in pubspec.yaml:
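The asset entries in pubspec.yaml point at the two files we copied:

```yaml
flutter:
  assets:
    - assets/catdog_model.tflite
    - assets/cat_dog_labels.txt
```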

Installing TensorFlow Lite

Here, we are going to install the TensorFlow Lite package. It’s a Flutter plugin for accessing TensorFlow Lite APIs. This library supports image classification, object detection, Pix2Pix and Deeplab, and PoseNet on both iOS and Android platforms.

In order to install the plugin, we need to add the following line to the pubspec.yaml file of our project:
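The dependency entry; as before, the version is illustrative:

```yaml
dependencies:
  tflite: ^1.1.2  # illustrative version; check pub.dev for the latest
```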

For Android, we need to add the following setting to the android object of the ./android/app/build.gradle file:
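The tflite plugin requires that the model file not be compressed by the Android build tooling:

```gradle
android {
    // Prevent Gradle from compressing the model files, which would
    // make them unreadable by the TensorFlow Lite interpreter.
    aaptOptions {
        noCompress 'tflite'
        noCompress 'lite'
    }
}
```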

Next, we should check that the app still builds properly by executing the flutter run command.

If an error occurs, we may need to increase the minimum SDK version to ≥19 in ./android/app/build.gradle file for the tflite plugin to work.

Once the app builds properly, we’ll be ready to use the TensorFlow Lite package in our Flutter project.

Using TensorFlow Lite for Image Classification

First, we need to import the package into our main.dart file, as shown in the code snippet below:
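The import gives us access to the Tflite class:

```dart
import 'package:tflite/tflite.dart';
```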

Loading the Model

Now, we need to load the model files into the app. For that, we’re going to configure a function called loadImageModel. Then, by making use of the loadModel method provided by the Tflite instance, we’re going to load the model files from the assets folder into our app. We need to set the model and labels parameters inside the loadModel method, as shown in the code snippet below:
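A sketch of the loading function, using the asset paths registered earlier:

```dart
Future<void> loadImageModel() async {
  // Load the model and its labels from the bundled assets.
  await Tflite.loadModel(
    model: 'assets/catdog_model.tflite',
    labels: 'assets/cat_dog_labels.txt',
  );
}
```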

Next, we need to call the function inside the initState method so that the function triggers as soon as we enter the screen:
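The call goes into initState so the model is ready before any classification:

```dart
@override
void initState() {
  super.initState();
  loadImageModel();
}
```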

Perform Image classification

Now, we are going to write code to actually perform image classification. First, we need to initialize a variable to store the result of the classification:
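The declaration goes alongside _imageFile in the State class:

```dart
// Holds the list of label/confidence maps returned by the model.
List _classifiedResult = [];
```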

This _classifiedResult List type variable will store the result of the model’s classification.

Next, we need to devise a function called classifyImage that takes an image file as a parameter. The overall implementation of the function is provided in the code snippet below:
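A sketch of the function, using the tflite plugin's runModelOnImage method; the normalization values shown (imageMean/imageStd of 127.5) are common defaults and may need adjusting for your model:

```dart
Future<void> classifyImage(File image) async {
  // Run the model on the selected image file.
  final results = await Tflite.runModelOnImage(
    path: image.path,
    numResults: 2,     // we only have two labels: cat and dog
    threshold: 0.5,    // minimum confidence to include a result
    imageMean: 127.5,  // input normalization (assumed defaults)
    imageStd: 127.5,
  );
  setState(() {
    // Store the results so the UI can display them.
    _classifiedResult = results;
  });
}
```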

Here, we have used the runModelOnImage method provided by the Tflite instance to classify the selected image. As parameters, we have passed the image path, the number of results, the classification threshold, and other optional configurations for better classification. After successful classification, we set the result to the _classifiedResult list.

Now we need to call the function inside the selectImage function and pass the image file as a parameter:
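The updated selectImage function, with the classification call added:

```dart
Future<void> selectImage() async {
  final pickedFile =
      await ImagePicker().getImage(source: ImageSource.gallery);
  if (pickedFile != null) {
    final image = File(pickedFile.path);
    // Classify as soon as the image is picked.
    classifyImage(image);
    setState(() {
      _imageFile = image;
    });
  }
}
```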

This will allow us to set the image to the image view as well as classify the image as soon as we select an image from the gallery.

Now, we need to configure the UI template to display the results of the classification. We are going to show the result of classification in card style as a list just below the RaisedButton widget.

The implementation of the overall UI of the screen is provided in the code snippet below:
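A sketch of the results section, placed just below the RaisedButton inside the Column from the earlier UI template; each result from the tflite plugin is a map with 'label' and 'confidence' keys:

```dart
SingleChildScrollView(
  child: Column(
    children: _classifiedResult.map((result) {
      return Card(
        child: ListTile(
          title: Text(
            // Show the label with its confidence as a percentage.
            "${result['label']} : "
            "${(result['confidence'] * 100).toStringAsFixed(1)} %",
          ),
        ),
      );
    }).toList(),
  ),
),
```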

Here, just below the RaisedButton widget, we have applied the SingleChildScrollView widget so that the content inside it is scrollable. We’ve also used the Column widget to list out the widgets inside it vertically.

Inside the Column widget, we’ve mapped through the result of the classification using the map method and displayed the result in the percentage format inside the Card widget.

Hence, we will get the result as shown in the demo below:

We can see that as soon as we select an image from the gallery, the classification result is displayed on the screen as well. Because the model runs on-device, predictions happen in real time.

And that’s it! We have successfully implemented our cat-or-dog classifier in a Flutter app using TensorFlow Lite.

Conclusion

In this tutorial, we were able to build a demo app that correctly classifies images of cats and dogs.

The overall process was simplified and made easy due to the availability of the TensorFlow Lite library for Flutter, as well as a pre-trained model. The model files were made available for this tutorial, but you can create your own trained models using Teachable Machine, a no-code service provided by TensorFlow.

As a next challenge, you can train your own model, load it into the Flutter app, and apply it to classify images. The TensorFlow Lite library is also capable of other machine learning tasks like object detection, pose estimation, gesture detection, etc.

All code available on GitHub.

Fritz

Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
