Xbox or PS5 Enthusiast? — Create an AI-Powered Snapchat Lens with Fritz AI Studio

Leverage Fritz AI to quickly generate a dataset, train an image labeling model, and deploy directly to Lens Studio


The PlayStation 5 and Xbox Series X are coming out just in time for the holidays. While some Sony and Microsoft fans have already made their choice, others are still hesitating: both companies offer next-gen consoles with exclusive games and subscription-based services.

So I thought, in honor of the next generation of gaming…why not create a Snapchat Lens that will classify either console and change the camera overlay accordingly?

If you’ve never heard of Snapchat’s Lens Studio, you can read my primers on both Fritz AI Studio and Lens Studio.

In this article, I will show you how easily you can create a custom machine learning-powered Lens with almost no code. Specifically, we’ll build a Lens that recognizes and classifies the next-generation consoles (Xbox Series X and PS5).

Fritz AI Studio

First, let’s work through building a Lens Studio-ready model with Fritz AI Studio.

  • Seed images: Download images containing the Xbox Series X and PS5, preferably from different angles and in different lighting conditions. Unfortunately, since the two consoles haven’t launched yet, you will only find generic press images. I managed to find around 15 good seed images for each class. We’ll use these “seed” images to generate a trainable dataset snapshot.
  • Remove the background: Plenty of services can remove image backgrounds for free. If you’d rather not use an online service, Photoshop works, as do Preview (macOS only) and GIMP. Fritz AI Studio will overlay these cut-out images on thousands of random background images.
  • Annotate: When all the images are ready, go to Datasets -> Add Image Collection -> Upload images. Once the images are uploaded, a new menu appears at the top with an image annotation interface. The process is straightforward, especially for image labeling (classification) models. You start by creating new classes, each with a different color. In my case, I have two classes (xbox, ps5); annotating took only a few seconds using the keyboard shortcuts. You can also use this annotation workflow for other ML tasks, including object detection and image segmentation.
  • Generate a snapshot: This is where the magic happens. From your labeled seed images, Fritz AI creates synthetic images with data augmentation built in. You can monitor how many images have been generated, and you will receive an email when the process is finished.
  • Train a classification model: When the snapshot is ready, start a training job by selecting your snapshot and choosing the number of hours for the training budget. Fritz AI will email you when training finishes, and it will stop early if the model converges before the assigned training hours are used.
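Conceptually, the snapshot step composites each labeled seed image onto many random backgrounds with randomized transforms, so a handful of seeds becomes thousands of training samples. Here is a rough sketch of that idea in plain JavaScript; the function and field names are hypothetical illustrations, not Fritz AI’s actual API:

```javascript
// Conceptual sketch of synthetic snapshot generation. Each seed image is
// paired with random backgrounds and random augmentation parameters; the
// class label simply carries over from the seed.
function generateSnapshot(seedImages, backgrounds, perSeed) {
  const snapshot = [];
  for (const seed of seedImages) {
    for (let i = 0; i < perSeed; i++) {
      const bg = backgrounds[Math.floor(Math.random() * backgrounds.length)];
      snapshot.push({
        background: bg,
        foreground: seed.file,
        label: seed.label,              // annotation is inherited, not redone
        scale: 0.5 + Math.random(),     // random size between 0.5x and 1.5x
        rotation: Math.random() * 30 - 15, // random tilt of +/- 15 degrees
      });
    }
  }
  return snapshot;
}

// Example: 2 seed images x 200 variations each = 400 synthetic samples
const seeds = [
  { file: "xbox_01.jpg", label: "xbox" },
  { file: "ps5_01.jpg", label: "ps5" },
];
const snapshot = generateSnapshot(seeds, ["bg_a.jpg", "bg_b.jpg"], 200);
console.log(snapshot.length); // 400
```

This is why the background removal step matters: only clean cut-outs composite convincingly onto arbitrary backgrounds.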

Design the images

I am not a designer by any means, so I used Canva to create two very simple images that will appear at the bottom of the screen to signify that you’re an Xbox fan or a PS5 fan.

Lens Studio

Now that we have our ML model, we need to import it into Lens Studio and create our Lens.

  • Open the project: Open the .lsproj file from the project zip file provided by Fritz AI Studio. A prompt will pop up — just click on import.
  • Import the model as an ML Component: In the left Objects panel of Lens Studio, you will find an object called ML Component. Click on it and import the model from the right panel (left image in Figure 10). Since the model file is already in the project structure, Lens Studio is able to recognize it. I highly recommend changing the threshold (the model’s prediction confidence) to something higher than 0.5 in order to avoid false positives — I chose to set it to 0.8. The model’s threshold can be found in the Classification Controller file (right image in Figure 10).
  • Change the input Texture: In the ML Component file, change the input texture to Textures > Device Camera Texture. At this point, the Lens can classify consoles.
  • Change the script: In the left Resources panel, you can find a folder called Scripts > Classification Helpers containing all the .js files. We’ll replace the contents of the ClassificationExampleHelper file with the following code:
//@ui {"widget":"separator"}
// @input Component.Text text
// @input Component.Image xboxImage
// @input Component.Image ps5Image
// @input vec4 ps5Color {"widget":"color"}
// @input vec4 xboxColor {"widget":"color"}

var classLabels = ["Nothing", "Xbox enthusiast", "PS5 enthusiast"];

if (!script.text) {
    debugPrint("Warning, Text component is not set");
}

script.api.onFound = function(classIndex) {
    script.text.text = classLabels[classIndex];
    // Hide both overlays first so switching consoles never leaves a stale image.
    script.ps5Image.enabled = false;
    script.xboxImage.enabled = false;
    if (classIndex == 1) {
        script.xboxImage.enabled = true;
        script.text.textFill.color = script.xboxColor;
    } else if (classIndex == 2) {
        script.ps5Image.enabled = true;
        script.text.textFill.color = script.ps5Color;
    }
};

script.api.onLost = function() {
    script.text.text = "";
    script.ps5Image.enabled = false;
    script.xboxImage.enabled = false;
};

function debugPrint(text) {
    print("ClassificationExampleHelper: " + text);
}
  • Add the images and set up the text colors: By setting the inputs in the script above, Lens Studio will automatically add a menu where you can upload the Xbox and PS5 banners and set the custom colors for each console.
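Stripped of the Lens Studio plumbing, the Lens’s decision logic boils down to: take the class with the highest confidence, and fall back to “Nothing” unless that confidence clears the 0.8 threshold we set in the Classification Controller. Here is a minimal plain-JavaScript sketch of that logic; `classify` is a hypothetical helper for illustration, not part of the Lens Studio API:

```javascript
// Index 0 is the "Nothing" background class, matching the helper script.
const CLASS_LABELS = ["Nothing", "Xbox enthusiast", "PS5 enthusiast"];
const THRESHOLD = 0.8; // same confidence threshold as in the Classification Controller

// Return the index of the winning class, or 0 ("Nothing") if the top
// confidence is below the threshold, to avoid false positives.
function classify(confidences) {
  let best = 0;
  for (let i = 1; i < confidences.length; i++) {
    if (confidences[i] > confidences[best]) best = i;
  }
  return confidences[best] >= THRESHOLD ? best : 0;
}

console.log(CLASS_LABELS[classify([0.05, 0.9, 0.05])]); // "Xbox enthusiast"
console.log(CLASS_LABELS[classify([0.4, 0.35, 0.25])]); // "Nothing"
```

Raising the threshold trades sensitivity for precision: the Lens reacts less often, but rarely shows the wrong console’s overlay.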


While the model performed well overall, I noticed it had trouble classifying the PS5, perhaps because it’s hard to differentiate from a white background. That’s purely speculative, though, since I don’t have much information about the training metrics.

I decided to add more frames from MKBHD’s PS5 unboxing video and generate a much bigger snapshot of 6,000 images instead of 3,200. I also resumed training from the first existing Keras checkpoint, which is basically the first iteration.

The model is now much better at classifying the consoles. I also noticed that when you keep the camera open for more than 30 seconds, the frame rate drops and the phone (tested on an iPhone X) gets very warm.

The project is nowhere near ready for end users; there’s a lot that could be added to improve the experience, whether in terms of design or extra elements like music. The possibilities are endless and should be easy for Snapchat Lens Creators to implement. There’s also room to make it even more interesting by adding a class for PC gaming enthusiasts!



Our team has been at the forefront of Artificial Intelligence and Machine Learning research for more than 15 years and we're using our collective intelligence to help others learn, understand and grow using these new technologies in ethical and sustainable ways.
