It’s great that you’re thinking of building your own machine learning-powered mobile app, but there are some key considerations to keep in mind. This article covers three practices that can keep you on track to build your app quickly, efficiently, and effectively.
1. Use Pre-Built Models
Using pre-built models when you’re getting started with machine learning on mobile can get you a long way, since you don’t need to spend time finding a dataset, training your own models, and testing them for accuracy.
However, it’s important to know how to find high-quality models so you can rely on their accuracy without worrying about inadequate training or overfitting. After all, why do the work if someone’s already done it?
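One quick sanity check before trusting any pre-built model is to run it over a small labeled holdout set and measure the accuracy yourself. Here’s a minimal sketch in Python; `toy_predict` is a hypothetical stand-in for whatever inference call your actual model exposes:

```python
def accuracy(predict, samples):
    """Fraction of (input, expected_label) pairs the model gets right."""
    correct = sum(1 for x, label in samples if predict(x) == label)
    return correct / len(samples)

# Hypothetical stand-in for a real model's inference call.
def toy_predict(x):
    return "cat" if x < 0.5 else "dog"

# A tiny labeled holdout set the model was NOT trained on.
holdout = [(0.1, "cat"), (0.3, "cat"), (0.7, "dog"), (0.9, "cat")]
print(accuracy(toy_predict, holdout))  # 3 of 4 correct -> 0.75
```

If a downloaded model scores noticeably worse on your own data than its published benchmarks suggest, that’s a hint it was trained on a distribution that doesn’t match your use case.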
Before we look at actual models, here are some tools and APIs that provide a wide range of services, which can be especially convenient when your app needs to do many things at once.
And — if you’re looking for a comprehensive list or something that isn’t quite covered in this post, I’d recommend checking out this eternally growing repository of awesome Core ML models.
These are likely the most trustworthy sources, since you’re guaranteed that their models are up-to-date and fully optimized for their respective platforms. Here are some of them:
Machine learning models are quite interesting, especially when you think about how a computer attempts to mimic the human brain when classifying images. What’s even cooler is when you can visualize what the computer is seeing. Image segmentation models do exactly that, highlighting individual parts of an image to bring them to your attention. Some of the best include:
For more general uses, such as classifying an entire image directly, you can use an image classification model. Instead of recognizing individual parts of an image, these models make a single prediction about the whole thing. For example, the app may predict the class of an image as a residential street, rather than picking out the cars, houses, trees, and other individual objects within it. Some of these include:
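Under the hood, most classification models emit one raw score (logit) per class; a softmax turns those scores into probabilities, and the highest-probability class becomes the prediction. Here’s a minimal sketch; the labels and logit values are made up for illustration:

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

labels = ["residential street", "highway", "forest"]  # hypothetical classes
logits = [2.0, 0.5, -1.0]                             # raw scores a model might emit
probs = softmax(logits)
best = labels[probs.index(max(probs))]
print(best)  # "residential street"
```

This is the same post-processing step Core ML and TensorFlow Lite classifiers perform for you when they hand back a dictionary of class probabilities.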
We all have a unique face, and luckily so, because Face ID and similar technologies allow our face to be our password. As you may have imagined, these features on our smartphones rely heavily on machine learning; as developers, it’s exciting to know that you can use pre-existing models to implement this cutting-edge technology in your own mobile apps with little-to-no effort up front!
2. Convert Between Model Formats
While you might specialize in iOS development, or development for another specific platform, it’s important to learn how to convert models between formats, since you’ll likely be integrating multiple platforms in the future.
For example, you might find that the best model for style transfer can only be found as a TensorFlow model, or your hotdog classifier is best found as an ONNX model. You’ll need to convert these models in order to use them in your apps. In addition, more robust models are developed using external tools, such as Keras and Caffe, which don’t export models to Core ML directly.
Since these tools are well-documented and each model type will need a different approach, I won’t be going step-by-step to show you how to use them, but let’s look at an example. Imagine you created a Caffe model because you needed the additional fine-grained control that the platform offers as opposed to Create ML, Apple’s machine learning training tool. Now, since this model isn’t directly compatible with Core ML, you’ll need to convert it before using it.
Since there are dozens of machine learning model formats out there, there are several ways to convert models for your desired platform. Because Android is more flexible about natively running a variety of model formats, conversion tends to be more of a concern on iOS.
Not to worry, though — there are still great tools to convert various models into Core ML format. And, as always, you can still use them in the cloud or through a third-party API if your specific model isn’t supported.
Apple’s documentation goes into great detail about converting models you’ve already trained with other frameworks to its platform, Core ML. There are also third-party tools you can use, such as the ones listed on this documentation page:
- MXNet Converter for MXNet to Core ML conversion
- TensorFlow Converter for TensorFlow to Core ML conversion
And for model formats such as Caffe, from Berkeley’s Artificial Intelligence Research (BAIR) lab, coremltools has a number of converters built right in, which can easily convert via a quick Python script.
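As a rough sketch, a Caffe-to-Core ML conversion with coremltools looks like the following. The file paths, input layer name, and labels file are hypothetical placeholders for your own model, and note that the built-in Caffe converter shipped with coremltools 3.x and earlier (it was removed in later releases):

```python
def convert_caffe_model(weights_path, proto_path, output_path):
    """Convert a Caffe model to Core ML via coremltools' Caffe converter.

    The import lives inside the function so this sketch can be loaded
    without coremltools installed; running it requires coremltools <= 3.x.
    """
    import coremltools

    coreml_model = coremltools.converters.caffe.convert(
        (weights_path, proto_path),
        image_input_names="data",   # hypothetical input layer name
        class_labels="labels.txt",  # hypothetical class-labels file
    )
    coreml_model.save(output_path)

# Usage, with real model files on disk:
# convert_caffe_model("model.caffemodel", "deploy.prototxt", "Model.mlmodel")
```

The resulting `.mlmodel` file can then be dragged straight into an Xcode project.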
3. Focus on Native Development
While cross-platform apps seem attractive at first, they’re more likely to cause issues later on, particularly when developing performance-driven apps. With technologies like machine learning, efficiency is key when providing services to your users; mobile development has more resource constraints to consider than web or backend development. You wouldn’t want your app using up half of a cellular data plan, taking up too much storage space, or chewing through battery.
iOS and Android are by far the most common mobile platforms, so let’s delve a little deeper and look at how you can create your own native apps for each of them.
Xcode and Swift are considered the “official” tools for building native apps for Apple’s platforms (iOS, macOS, tvOS, and watchOS). Prior to the introduction of Swift, Objective-C was used more commonly, and in some cases, is still used in conjunction with Swift. It’s important to note that it’s still possible to create apps solely in Objective-C.
You can download Xcode and get up and running pretty quickly with the wealth of free resources online. For starters, you can check out some of my other work to get your foot in the door:
Once you’re up and running, and you have a good grasp on Swift, you can begin training (or downloading) Core ML models to use with your app. An interesting feature of Core ML is that it generates a Swift wrapper class for you, allowing you to access your machine learning model as if it were a class you’d written. For more on learning how to actually implement these models into your apps, check out some of these resources:
- Integrating a Core ML Model into Your App
- Using Core ML and Vision in iOS for Age Detection
- Get Started With Image Recognition in Core ML
These are in addition to the hundreds (or even thousands) of other tutorials and guides on the internet that help with this topic. If you’re looking for more, a quick Google search should bring up many more that aren’t listed above. And, as always, never hesitate to leave a comment below. I’ll be able to personally help you out if need be.
Android Studio and Java (or Kotlin) are the Android counterparts to Xcode and Swift. At the end of the day, Android and iOS are both mobile platforms, and coding for them is very similar at a high level. If you already have experience with another mobile platform, the learning curve is easier to overcome when switching.
To begin building Android apps, download Android Studio, and follow the official guide, which can ease you into the process. Alternatively, you can seek online resources as well.
When you’ve set up your first app, you can begin adding machine learning features via Android’s native tools. If you don’t want to train your own model, you can use pre-built ones from ML Kit and Google Cloud.
In this article, you learned how to make the most of your first machine learning app, along with tips that can help you steer clear of common pitfalls. After you build something cool, feel free to share it in the comments below, and until then, keep coding!
It’s easy to support my work!
Be sure to smash that “clap” button as many times as you can, share this tutorial on social media, and follow me on Twitter.