Recently, Snapchat released a new feature that lets AI developers integrate their own custom models into Lenses. One of the supported tasks is classifying single objects in order to trigger fun and immersive AR effects, and that's the task we'll cover in this article.
Lens Studio, an augmented reality creativity suite released by Snapchat in 2017, gives users the opportunity to create their own augmented reality (AR) Lenses.
But this software is not primarily intended for ordinary Snapchat fans. Rather, it's built for designers, developers, and artists who want a powerful AR creation tool, with the aim of providing the Snapchat community with ever more immersive and fun Lenses.
Lenses are a big part of the user’s experience inside Snapchat, used either in stories or in a peer-to-peer fashion, whether to communicate with friends or to simply have fun.
Lens Studio is therefore full-fledged creativity software that allows users to create and save graphic projects using templates and guides designed to inspire.
I can’t express how easy it is to use, especially when you start with fully-featured templates that can get you started in just a few seconds.
It's also worth noting that, at the launch of Lens Studio, the social networking platform said it had partnered with several development agencies specializing in augmented reality to help advertisers create Lenses that leverage users' facial and body attributes. This should help the US firm open up to more brands, as they will no longer have to rely solely on creations designed in-house by Snapchat.
In this article, I will show you how easily you can create a custom machine learning-powered Lens with almost no code. As a proof of concept, I chose to create a Lens that will recognize and classify Bulldogs.
The idea is pretty simple: you open the Lens and start taking videos or photos of any Bulldog, and the Lens will classify the object, display a text label, fire off confetti, and play a dog barking audio clip.
This is a look at the final result:
Fritz AI Studio
In order to create custom ML models fit for Lens Studio, you can either conform your own models to work on Snapchat’s platform, or use the training notebook provided by Snapchat to train new models.
But I chose to use Fritz AI Studio (support for Lens Studio is in beta and free to try at the time of writing!) because it offers an end-to-end solution, from dataset generation to training a model that meets Lens Studio's requirements.
For a closer look at building a model from end-to-end with Fritz AI Studio, check out my previous tutorial on the same:
- Seed images: Download images containing bulldogs, preferably different colors and sizes for better results — I used 40 images. We’ll use these “seed” images to generate a trainable dataset snapshot.
- Remove the background: Remove the background of those seed images—Fritz AI’s data generation feature uses those images to overlay them on hundreds of random images.
- Annotate: Import your transparent images and start annotating by creating an object of type “Image Labeling”.
- Create a Snapshot: Use your annotated seed images to create a Snapshot of thousands of images that will be used for model training — I chose 5,000. It’s possible to monitor the progress of the dataset generation process. It took less than four minutes to generate all the images…impressive!
- Train: When the Snapshot is ready, you can start the training job by selecting your Snapshot and choosing the number of hours for the training budget — note that Fritz AI will send you an email when the training process is finished.
- Download the project template: When you receive an email from Fritz AI confirming that your model has finished training, go to the “Models” section in the left menu, where you can download either the model itself or a ready-to-use Lens Studio template — I chose the latter.
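Before moving on, the Snapshot step above is worth a closer look. Fritz AI composites each transparent seed cutout onto many background images at random positions and scales, and records the resulting bounding box as the training annotation. The sketch below illustrates only the placement logic in plain JavaScript (image I/O omitted; the function and field names are my own, not Fritz AI's API):

```javascript
// Illustrative sketch of synthetic-data placement: pick a random
// position and scale for a transparent seed cutout on a background,
// and record the bounding box that becomes the training annotation.
// Assumes the scaled cutout fits inside the background image.
function randomPlacement(bg, seed, rng) {
  rng = rng || Math.random;
  var scale = 0.3 + rng() * 0.5; // scale the cutout to 30–80% of its size
  var w = Math.round(seed.width * scale);
  var h = Math.round(seed.height * scale);
  var x = Math.floor(rng() * (bg.width - w));
  var y = Math.floor(rng() * (bg.height - h));
  return { x: x, y: y, width: w, height: h, label: "bulldog" };
}
```

Repeating this for every seed image over many backgrounds is how 40 seed images can become a 5,000-image Snapshot.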
At this moment, you can close Fritz AI and open Lens Studio to start experimenting with it!
You can download Lens Studio from the following page:
- Open the project: Open the .lsproj file from the project zip file provided by Fritz AI. A prompt will pop up — just click on import.
- Import the model in an ML Component: In the left Objects panel, you will find an object called ML Component. Click on it and import the model from the right panel (left image in Figure 10). Since the model file is already in the project structure, Lens Studio recognizes it automatically. I highly recommend changing the threshold (the model's prediction confidence) to something higher than 0.5 in order to avoid false positives — I chose to set it to 0.8. The threshold can be found in the Classification Controller file (right image in Figure 10).
- Change the input Texture: In the ML Component file, change the input texture to Textures > Device Camera Texture. At this point, the Lens can already classify Bulldogs.
- Change the script: In the left Resources panel, you can find a folder called Scripts > Classification Helpers containing all the .js files. We'll modify the ClassificationExampleHelper file so that a confident Bulldog prediction shows the text, fires the confetti, and plays the barking audio.
You can read Snapchat’s documentation on scripting to understand how it works.
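The real helper file wires up Lens Studio scene objects and components, but its core logic is simple: when the model's top prediction is the Bulldog class and its confidence crosses the threshold, fire the effects. Here is a minimal sketch of that logic in plain JavaScript (the function names and result shape are illustrative assumptions, not the actual Lens Studio API):

```javascript
// Illustrative sketch only: onClassification, the results shape, and
// the effects object are assumptions, not the real Lens Studio API.
var CONFIDENCE_THRESHOLD = 0.8; // same value set in the Classification Controller

// results: array of { label: string, confidence: number } from the ML Component
function onClassification(results, effects) {
  if (!results || results.length === 0) {
    return false;
  }
  // Pick the highest-confidence prediction.
  var top = results.reduce(function (best, r) {
    return r.confidence > best.confidence ? r : best;
  });
  if (top.label === "bulldog" && top.confidence >= CONFIDENCE_THRESHOLD) {
    effects.showText("Bulldog!"); // enable the text overlay
    effects.playConfetti();       // trigger the confetti effect
    effects.playBarkAudio();      // play the dog barking clip
    return true;
  }
  return false;
}
```

In the actual Lens, the three effect calls correspond to enabling scene objects and playing the Audio Component we'll add in the next step.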
- Add the audio file: First, go to Orthographic Camera > Simple Text Example and add a new component of type Audio, then attach the “dog barking” audio file to it. Finally, hook the audio component into the script from the same panel.
The audio file is copyright free and can be downloaded using the following link:
Publishing Snap Lenses is very easy and convenient — here’s a quick look at the process:
- Find a short, catchy title: You are limited to 18 characters, so make it short, simple, and descriptive of your Snapchat Lens.
- Make a short video for the Lens preview: When users are looking for filters and Lenses, they can see a preview of what the Lens is capable of. It has to be a 9:16 aspect ratio video.
- Send for approval: Before you can send your Lens for approval, you will need to specify tags (keywords used to easily find your Lens) and scan triggers.
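The two hard constraints in the steps above are easy to check before submitting. A tiny helper (illustrative only; Snapchat validates these on submission anyway, and these function names are my own):

```javascript
// Illustrative pre-submission checks for the constraints above.
var MAX_TITLE_LENGTH = 18; // Snapchat's Lens title limit

function isValidLensTitle(title) {
  return title.length > 0 && title.length <= MAX_TITLE_LENGTH;
}

// The Lens preview video must have a 9:16 (portrait) aspect ratio.
function isValidPreviewSize(width, height) {
  return width * 16 === height * 9; // cross-multiply to avoid floating-point division
}
```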
I am not a big Snapchat user myself, but with the release of SnapML, they got my attention. The platform is so powerful because it’s so easy to work with! I do think that Lens Studio itself needs to be more optimized—but SnapML is in its early days, and I hear they’re working on it.
This project would have been much harder if I hadn't used the Fritz AI data generation feature (which I'm in love with). Even though Snapchat offers an easy-to-use Jupyter notebook that I could run on Google Colab, I don't have the resources to create or collect the thousands of images needed for training.
By using Fritz AI, I was able to quickly get past this crucial step. I do think the annotation process could be improved for classification, though: since I had just one class, I should have been able to select multiple images and assign the class to all of them at once, but instead I had to annotate each and every image.
The Snapchat publishing process is great and makes you forget the App Store's and Play Store's submission processes. The Lens was accepted and live in less than 30 minutes! Nothing else I know of comes close.
Given that this project is just a proof of concept showing how easily you can create and publish custom ML Lenses, imagine how powerful this feature could be for designers already familiar with Lens Studio! With Fritz AI's integration with SnapML, they can train powerful computer vision models and quickly return to Lens Studio, their core creative ecosystem.