Implementing a Fritz AI Machine Learning Model in an iOS app

Image Labeling on iOS: Co-authored by Asritha Bodepudi and Vidushi Meel

Fritz AI allows developers to create custom, mobile-optimized machine learning (ML) models without writing code, for use in their apps.

In this article, we’ll walk you through the process of creating an image labeling ML model that can identify different car logos, and then integrating the model into an iOS app.

How to Train an ML Model using Fritz AI Studio

Getting started with Fritz AI begins with creating an account and signing up for a plan that works best for you.

At the time of writing, Fritz AI offers a “Sandbox” option, which is free to the user until they require more than five training hours (monthly global limit). First, make an account by navigating to the Fritz AI website, clicking “sign up”, and selecting “Sandbox” for the free option.

Training a model with a managed platform like Fritz AI makes machine learning accessible to people who are just starting out or who don’t know how to code. It’s also a good way for companies to have multiple people work on models in a uniform, repeatable way. The platform also improves accuracy and limits the margin of error when creating a custom model specifically for mobile deployment.

Step 1: Create a Project in Fritz AI Studio

We’re going to select the “custom” option and train our model to identify different vehicle manufacturer logos. Given that pre-trained models will most likely not exist for niche datasets (such as this one), we’ll walk you through the steps of labeling your images to train a custom model.

Why did we choose image labeling?

Image labeling identifies an entire image and tags it with a class name. Object detection, by contrast, identifies and tracks individual objects within an image or video, and pose estimation tracks the position a human or object is assuming in an image, rather than identifying it. As we mentioned earlier, we simply want to distinguish different vehicle logos from each other, so image labeling is the right fit.

Step 2: Create a Custom Dataset

To create a custom dataset, we’re going to identify our classes and, inside a main folder on our local machine, create three folders named after those classes.

The three classes we’re opting to distinguish between are BMW, Mitsubishi, and Toyota. Therefore, we’ve created three folders that will house each of their respective images.

We’ll start with a collection of “seed images” for each label. These will be used to synthetically generate a ready-to-train dataset Snapshot. These seed images must be in PNG format with transparent backgrounds, as specified by Fritz AI (a free online background-removal tool can handle this for you).

We’re using ten to twelve images per class. Each folder should look something like this:

Try to find a variety of images with different colors, cropping, and exposure for optimal accuracy.

Step 3: Create Labels

Drag and drop the folders of images into Fritz AI Studio. The Studio should generate previews and display the following screen:

To create the labels, click the “manage labels” button on the left of any selected image. The labels should be the same as the names of your classes (i.e. “toyota”, “bmw”, “mitsubishi”). Creating your labels is a one-time process.

Step 4: Annotating images

For each image, press the hotkey that corresponds with the label of the image, and then ‘E’ to save the annotation. Continue this process until each image has been annotated.

These few “seed images” will be used to programmatically generate a more complete dataset of thousands of labeled images.

Step 5: Generate snapshot and train!

All that’s left to do is to programmatically generate a dataset Snapshot and then configure a training job. To configure the Snapshot, proceed with the default number of images suggested by the Studio and hit “Next”. Generation will take about ten minutes, depending on the number of images you’re using.

Once the Snapshot is generated, click on the “Train” icon in the left sidebar and press “+ Train New Model” in the top right corner.

Make sure to use the Snapshot we just created as the dataset for the training job, and set the model type to image labeling. A higher training budget generally yields a more accurate model, but the recommended number works well too.

Training itself will take anywhere from one to three hours, and you’ll be emailed once it’s done.

Registering an App using Fritz AI Studio

Step 1: Create an Xcode project

Create a brand new Xcode project as a single-view application with the Storyboard interface.

Step 2: Register New App in Fritz AI Studio

Navigate to Project Settings in your Fritz AI project, and then click on the “register app” button near the bottom of the page. Select iOS as the platform and provide the app name and bundle ID (Project → Targets → Identity).

Step 3: Install the SDK via CocoaPods

Locate your Xcode project in Finder (File → Show in Finder), and then close Xcode.

In your terminal, cd into your project directory.

To initialize a new Podfile and open it for editing, run pod init and then open -a Xcode Podfile.

Underneath the # Pods for comment in the Podfile, add pod 'Fritz'. If you later get a “No such module ‘Fritz’” error when building, try uncommenting the platform :ios line at the top of the Podfile and raising the version from '9.0' to '10.0'.
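After these edits, the Podfile should look roughly like the sketch below. The target name YourProjectName is a placeholder for whatever you named your Xcode project, and the exact comments Xcode/CocoaPods generate may differ:

```ruby
# Global platform for the project (uncommented and raised to 10.0)
platform :ios, '10.0'

target 'YourProjectName' do
  # Use dynamic frameworks, as the Fritz SDK expects
  use_frameworks!

  # Pods for YourProjectName
  pod 'Fritz'
end
```
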

Save the Podfile and close it, then run pod install in the terminal.

Your final terminal window should look something like this if everything has gone smoothly:

Open the .xcworkspace project, not the .xcodeproj. In Finder, the .xcworkspace file has a white background with a blue icon, while the .xcodeproj file has a blue background with a white icon; be sure to select the white one.

The difference is that the .xcworkspace includes the CocoaPods we just installed, while the .xcodeproj does not. If you get errors immediately after opening the project, double-check that you opened the .xcworkspace file.

All that’s left to finish the project setup is to verify it on your Fritz AI account page by clicking “Next” in the “step 4: verify setup” view as shown below.

Integrating the ML Model into an Xcode Project

Now that our ML model has been created, we want to implement it into our Xcode project so that it can be used in an iOS app.

First, download the active version of your model in Core ML format (Apple’s model format and framework for on-device ML), and drag it into your Xcode project.

Drag the ML model into the yellow folder under the blue file icon. Make sure it imported properly by selecting it. If you’re able to see the details below, the import was successful!

Inside your Main.storyboard file, drag in a UI Image View, UI Button, and UI Label. Position and customize these however you wish, but try not to overlap fields.

  • The UI Image View will display whatever image the user imports, and a blank gray screen in case they don’t upload an image.
  • The UI Button will prompt the user with a UIAlertController to choose an image from Photos, or take one with the camera.
  • The UI Label displays the prediction made by your ML model.

Next, open the Assistant editor to write some code! You can open it by clicking the three lines at the top right of Xcode and selecting “Assistant”.

Make sure to type import Fritz at the top of the file, under the commented-out header lines Xcode generates (the gray comment lines). This makes the Fritz SDK we installed via CocoaPods available to our code. The Vision code in Step 5 will also need import Vision.

Step 1: Create IBOutlets for the Image View & Label. Create an IBAction for the Button:

@IBOutlet weak var imageView: UIImageView!
@IBOutlet weak var predictionLabel: UILabel!
@IBAction func addPhotoButtonPressed(_ sender: UIButton) {}

Step 2: Create a UIImagePickerController object. You must also declare your ViewController class as a UIImagePickerControllerDelegate and a UINavigationControllerDelegate. You can then assign the imagePicker’s delegate to self in viewDidLoad().

class ViewController: UIViewController, UIImagePickerControllerDelegate, UINavigationControllerDelegate {
  let imagePicker = UIImagePickerController()

  override func viewDidLoad() {
    super.viewDidLoad()
    imagePicker.allowsEditing = true
    imagePicker.delegate = self
  }
}

Step 3: Display an alert prompting the user to choose between importing a photo from their photo library or taking a picture. We’ll write this code in the addPhotoButtonPressed IBAction. Depending on which option the user chooses, a view controller will pop up with the camera or photo library.

@IBAction func addPhotoButtonPressed(_ sender: UIButton) {
  let alert = UIAlertController(title: "Upload image of car logo", message: "", preferredStyle: .alert)
  let takeWithCamera = UIAlertAction(title: "Take With Camera", style: .default) { (action) in
    self.imagePicker.sourceType = .camera
    self.present(self.imagePicker, animated: true)
  }
  let importFromPhotos = UIAlertAction(title: "Pick From Photo Album", style: .default) { (action) in
    self.imagePicker.sourceType = .photoLibrary
    self.present(self.imagePicker, animated: true)
  }
  alert.addAction(takeWithCamera)
  alert.addAction(importFromPhotos)
  self.present(alert, animated: true, completion: nil)
}

Step 4: Handle the image once the user imports it into the app with the camera or photo library. Then set it equal to your Image View’s image.

func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
  if imagePicker.sourceType == .camera {
    picker.dismiss(animated: true)
    guard let image = info[.editedImage] as? UIImage else {
      print("No image found")
      return
    }
    imageView.image = image
  } else if imagePicker.sourceType == .photoLibrary {
    var newImage: UIImage
    if let possibleImage = info[.editedImage] as? UIImage {
      newImage = possibleImage
    } else if let possibleImage = info[.originalImage] as? UIImage {
      newImage = possibleImage
    } else {
      return
    }
    dismiss(animated: true)
    imageView.image = newImage
  }
}

Step 5: Connect the model with the user-inputted image so it identifies the car logo depicted. Let’s create an identifyCarLogo() method in which we initialize the ML model and request a label for the image.

func identifyCarLogo() {
  // Requires `import Vision` at the top of the file
  guard let uiImage = imageView.image,
        let image = CIImage(image: uiImage) else { return }
  guard let model = try? VNCoreMLModel(for: HeartbeatArticleDemoFast().model) else {
    fatalError("Loading CoreML Model Failed.")
  }
  let request = VNCoreMLRequest(model: model) { (request, error) in
    guard let results = request.results as? [VNClassificationObservation] else {
      fatalError("Model failed to process image.")
    }
    if let firstResult = results.first {
      DispatchQueue.main.async {
        self.predictionLabel.text = firstResult.identifier.capitalized
      }
    }
  }
  let handler = VNImageRequestHandler(ciImage: image)
  do {
    try handler.perform([request])
  } catch {
    print(error)
  }
}

We need to call identifyCarLogo() in the delegate method from Step 4, right after we assign the user-inputted image to the Image View’s image. The label on screen should then display the class predicted by the model.

imageView.image = newImage
identifyCarLogo()


We have successfully trained an ML model using Fritz AI and integrated it within an Xcode project. You can use the exact same process to train any type of model that Fritz AI supports and create an iOS app that can “see” the world around it in more vivid detail.

Combining machine learning and mobile app development can provide a more tangible experience for users. Platforms such as Fritz AI remove the difficulty of creating your own ML models from scratch with their pre-trained algorithms and no-code model building workflow.

Given the wide range of ML applicability, Fritz AI allows developers to train an ML model for their custom needs; its dataset generator creates thousands of images using only a few starting images to maximize accuracy. And people who have never coded before can navigate Fritz AI Studio with its simple drag and drop interface.

Beyond the scope of this article (image labeling), Fritz AI Studio also offers image segmentation, object detection, and pose estimation, and developers can integrate their models on Android as well.


