Recently, I wrote two articles about training object detection Core ML models for iOS devices using the TensorFlow and Turi Create frameworks.
To train those models, I used a tool I’d built called MakeML. It allows you to easily create a dataset, label it, and start training. There’s no need to write code with MakeML, so every iOS developer can train an object detection machine learning model in a couple of hours. On paper, at least…
During the process of creating MakeML, and after talking to a bunch of users, I realized that three major bottlenecks stand in the way of developers creating and integrating object detection into their apps:
- Collecting and processing data to create a dataset.
- Setting up a training pipeline to train a model and receive it in a format that's ready to run in their apps (e.g. Core ML or TF Lite).
- Understanding possible use cases in production apps.
Let’s take a closer look at each of these bottlenecks.
Collecting and processing data
As we know, to create a decent neural network, we need a good dataset. For the specific task of object detection, this means you need at least a couple thousand annotated images. For an individual iOS developer whose time is valuable, collecting and manually annotating that many images can definitely be a frustrating experience.
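To make "annotated image" concrete, here's a minimal sketch of what a single object detection annotation boils down to. The type and field names are purely illustrative (they don't correspond to any particular dataset format): each training image carries one or more labeled bounding boxes like this, and a usable dataset needs thousands of them.

```swift
import Foundation

// Illustrative only — not a specific dataset format.
// One labeled bounding box on one image; an object detection
// dataset is essentially thousands of records like this.
struct BoundingBox: Codable {
    let x: Double       // top-left x of the box, in pixels
    let y: Double       // top-left y of the box, in pixels
    let width: Double   // box width, in pixels
    let height: Double  // box height, in pixels
}

struct Annotation: Codable {
    let imagePath: String  // path to the source image
    let label: String      // class name, e.g. "dog"
    let box: BoundingBox   // where the object sits in the image
}
```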
Setting up a training pipeline
The idea of MakeML was born when I was struggling to train my first object detection model. In fact, it took me more than two weeks of full-time work to create a proof-of-concept model that I could ship in my iOS app. That's a very high barrier just for testing a state-of-the-art model architecture.
Understanding possible use cases in production apps
We’ve been sending emails to iOS developers to introduce MakeML and show how it can be used. The emails that mention and describe use cases of popular AI apps already in the App Store get a 10x higher reply rate than emails that merely mention the ability to train a model. I think this is because the majority of iOS developers currently don’t understand how they can use machine learning in their apps.
After solving the second bottleneck (you can now start training a neural network in a few clicks using MakeML), it’s clear we need to deal with the other two as well.
A couple of weeks ago I had a conversation with a company that has an R&D computer vision department. I wanted to understand whether we could somehow integrate MakeML into their workflow, but they told me they already had a collection of advanced tools, built for enterprise companies, that annotate data and train neural networks for them. We couldn’t help them, but they helped us a lot with an idea for the feature I want to share with you today.
Automated Video Annotation with MakeML
MakeML’s Automated Video Annotation Tool lets you annotate an object in the first frame of a video; an object tracking algorithm then follows that object through the rest of the footage. Since video typically runs at 25–30 frames per second, a short 4-second clip yields roughly a hundred annotated images, and the whole process takes only a couple of seconds.
This kind of object tracking algorithm was originally created for video post-production, where it’s used to separate moving objects from the background. It compares the current frame with the previous one to determine whether the selected object has moved relative to the background.
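MakeML does the tracking for you, but to make the idea tangible, here's a rough sketch of the same frame-to-frame tracking approach using Apple's Vision framework. This isn't MakeML's implementation, just an illustration: seed a tracker with the bounding box drawn on the first frame, then feed it every subsequent frame.

```swift
import Vision
import CoreVideo
import CoreGraphics

// A minimal frame-to-frame object tracker built on Apple's Vision framework.
// Illustrative only: MakeML uses its own tracking algorithm under the hood.
final class ObjectTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation

    // `initialBox` is the normalized bounding box the user drew on the first frame.
    init(initialBox: CGRect) {
        lastObservation = VNDetectedObjectObservation(boundingBox: initialBox)
    }

    // Call once per decoded video frame; returns the tracked bounding box
    // in normalized coordinates, or nil if the object was lost.
    func track(frame pixelBuffer: CVPixelBuffer) throws -> CGRect? {
        let request = VNTrackObjectRequest(detectedObjectObservation: lastObservation)
        request.trackingLevel = .accurate
        try sequenceHandler.perform([request], on: pixelBuffer)
        guard let observation = request.results?.first as? VNDetectedObjectObservation else {
            return nil
        }
        lastObservation = observation
        return observation.boundingBox
    }
}
```

In a real pipeline you'd decode the video into frames (for example with AVFoundation), call `track(frame:)` on each one, and turn every returned box into an annotation record like the one sketched earlier.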
Let me show you how this works in MakeML. To use the Automated Video Annotation Tool, you’ll need to:
- Create a project and press the Import Video button
- Select a video from your hard drive
- Drag a selection box over your object
- Specify the class this object represents to create the annotation
- Press the import button
And voila — your object is tracked in the video, automatically annotated, and added to your dataset!
I hope this tool saves a lot of time for iOS developers, many of whom are collecting datasets and training object detection neural networks for the first time 🙂
And, of course, don’t hesitate to write to us directly with any questions, issues, or proposals. We’ll be happy to help you. Our email: [email protected].