While machine learning might seem fun and exciting from the outside, building a dataset and training a model are about as dull as it gets.
However, don’t take this the wrong way—model training is a very critical part of the entire process and shouldn’t be overlooked.
Thankfully, to help democratize machine learning, we have AutoML Vision Edge, which is a tool provided by Google that can help make the process of training a machine learning model much easier.
Using AutoML, you can quickly offload the training process to Google, thereby eliminating the need for a high-end PC with a GPU. The Edge flavor of AutoML then lets you export the trained model as a .tflite file to run in your Android/iOS apps, or as a protobuf file to run the same model in a Python environment.
This post is the first in a series covering training and using TensorFlow models with the help of Google Cloud AutoML.
Series Pit Stops
- Training a TensorFlow Lite Image Classification model using AutoML Vision Edge (You are here)
- Creating a TensorFlow Lite Object Detection Model using Google Cloud AutoML
- Using Google Cloud AutoML Edge Image Classification Models in Python
- Using Google Cloud AutoML Edge Object Detection Models in Python
- Running TensorFlow Lite Image Classification Models in Python
- Running TensorFlow Lite Object Detection Models in Python
- Optimizing the performance of TensorFlow models for the edge
Before we go ahead and use AutoML, we need to understand the steps involved in training a machine learning model.
Fundamentally, training can be broken down into three parts:
- Data collection
- Data cleaning and augmentation
- Model training
Let’s now look at each of these steps in detail.
1. Data Collection
This step comprises collecting the data that you’ll be using to train your model.
For this blog post, we’ll be training a classification model, so the dataset will contain the different kinds of images the model has to identify (here, different Pokémon).
The obvious way to build this dataset would be to search for the images on Google and download them manually, but as you might guess, that would take quite a while.
Instead, Python provides us with a handy third-party package called google_images_download, which has easy-to-use syntax and can automate this task for us.
To use this package, you’ll first have to install it using pip. It’s as simple as running pip install google_images_download.
Once installed, you can use the package to download images for an individual Google search query on your local system.
For example, the following command will download 300 images of Pikachu from Google Images into the “Pikachu” folder.
googleimagesdownload --keywords "pikachu" --limit 300 -o "pikachu"
You can repeat this for all 151 Pokémon whose images we want to collect. You can also write a simple Python script that loops over their names and calls the command above 😉
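That looping script can be sketched in a few lines of Python. The `build_command` helper and the short `POKEMON` sample list below are illustrative (you’d pass the full list of 151 names), and the actual download call is left commented out so you can review the commands before running them:

```python
import subprocess

# Sample list of names; in practice, use the full list of 151 Pokémon.
POKEMON = ["pikachu", "bulbasaur", "charmander", "squirtle"]

def build_command(name, limit=300):
    """Return the googleimagesdownload CLI invocation for one Pokémon."""
    return [
        "googleimagesdownload",
        "--keywords", name,
        "--limit", str(limit),
        "-o", name,  # output directory named after the Pokémon
    ]

for name in POKEMON:
    cmd = build_command(name)
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually download
```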
2. Data cleaning and augmentation
Once you have the images downloaded, you’ll notice that the dataset isn’t precisely what you need.
For instance, in the image above, you can see that searching for Bulbasaur on Google Images ends up returning some images that aren’t actually the Pokémon, but things like consoles, slippers, action figures, etc.
Before we train our model, it’s essential that we get rid of these images from our dataset.
This step will likely take up most of your time, since the scraper picks up tons of irrelevant images from the Google search results.
Once you’re done cleaning the dataset, you might want to augment it so that the model has more images to work with while you’re training it.
Data augmentation is a strategy that allows us to significantly increase the diversity of data available for training models, without actually collecting new data. Data augmentation techniques such as cropping, padding, and horizontal flipping are commonly used to train large neural networks.
This is an excellent blog post to read if you’re interested in learning more about data augmentation techniques:
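As a rough illustration of the techniques mentioned above, here’s a minimal sketch using Pillow (not something AutoML requires — the `augment` helper is purely hypothetical) that produces a flipped, a padded, and a slightly rotated variant of each image:

```python
from PIL import Image, ImageOps

def augment(img):
    """Return a few augmented variants of a PIL image."""
    return [
        ImageOps.mirror(img),                     # horizontal flip
        ImageOps.expand(img, border=16, fill=0),  # zero padding on all sides
        img.rotate(15, expand=True),              # slight rotation
    ]
```

Saving each variant alongside the original multiplies the size of your dataset without any new downloads.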
3. Training the model
This is the last and easiest part of the process (given that you’re using AutoML Vision Edge). Before we use AutoML Vision Edge, it’s essential that our dataset is structured correctly.
This is how your dataset should look:
Next, zip the root folder so that you have a single .zip file containing all the photos.
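If you prefer to script this step, Python’s standard library can both lay out the folders and create the zip. The `dataset` root and label names below are placeholders matching the one-folder-per-label structure shown above:

```python
import pathlib
import shutil
import tempfile

# Placeholder layout: one subfolder per label under a single root folder.
root = pathlib.Path(tempfile.mkdtemp()) / "dataset"
for label in ["pikachu", "bulbasaur"]:
    (root / label).mkdir(parents=True)
    (root / label / "example.jpg").write_bytes(b"")  # stand-in for real photos

# Creates dataset.zip next to the root folder.
archive = shutil.make_archive(str(root), "zip", root_dir=root)
print(archive)
```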
If you want to follow along with the example of Pokémon, you can grab the dataset I used from my Kaggle profile here:
Next, head over to the Firebase console and select an existing project linked to your app, or create a new one.
After the project’s been created, navigate to ML Kit and then AutoML.
Once here, select the “Add Dataset” option and give the dataset a name.
In the next screen, drag and drop your dataset zip file into the upload area and wait for AutoML Vision Edge to process the dataset.
This process might take a while, so feel free to browse Reddit in the meantime :p
Once the import is done, you can press next and go to the “Train a Model” screen.
Over here, feel free to leave the default options selected and begin training your model.
Once training is completed, you can proceed to view the stats of your model and how it performs.
You can also test the model with an actual image of a Pokémon to see if it’s working as expected.
If you aren’t particularly happy with these results, you can try refining your dataset or retraining the model with more compute hours.
Using the trained model
Once you’re content with the performance of the model, you can click on the “Use Model” button on the console and either download or publish the trained model.
Downloading the model gives you a .tflite file that can be directly packaged into an Android/iOS app for on-device ML inferencing.
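The later parts of this series cover inference in detail, but as a preview, here’s a hedged sketch of running a .tflite file with TensorFlow Lite’s Python interpreter. To keep it self-contained, it converts a tiny stand-in Keras model instead of the downloaded file; with your real model you’d pass `model_path="your_model.tflite"` instead of `model_content`:

```python
import numpy as np
import tensorflow as tf

# Stand-in model: a single softmax layer, converted to TFLite bytes.
# In practice, load the file downloaded from the console instead.
inp = tf.keras.Input(shape=(4,))
out = tf.keras.layers.Dense(3, activation="softmax")(inp)
toy = tf.keras.Model(inp, out)
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(toy).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp_d = interpreter.get_input_details()[0]
out_d = interpreter.get_output_details()[0]

# Dummy input matching the model's expected shape and dtype.
sample = np.zeros(inp_d["shape"], dtype=inp_d["dtype"])
interpreter.set_tensor(inp_d["index"], sample)
interpreter.invoke()
scores = interpreter.get_tensor(out_d["index"])  # one probability per class
```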
However, if you want to reduce your app’s install size, you can instead publish the model to the Firebase console, allowing your app to download it automatically when it first starts.
To learn how to use the model in practice, check out the next post in this series, where I use the trained model to build an Android app that can identify various Pokémon from a provided image!
Here are some images from the app:
The app is also open-sourced and you can find it on GitHub!
Thanks for reading! If you enjoyed this story, please click the 👏 button and share it to help others find it! Feel free to leave a comment 💬 below.
Have feedback? Let’s connect on Twitter.