Anatomy of a High-Performance Convolution


On my not-too-shabby laptop CPU, I can run most common CNN models in (at most) 10–100 milliseconds with libraries like TensorFlow.

In 2019, even a smartphone can run “heavy” CNN models (like ResNet) in less than half a second. So imagine my surprise when I timed my own simple implementation of a convolution layer and found that it took over 2 seconds for a single layer!

It’s no surprise that modern deep learning libraries have production-level, highly-optimized implementations of most operations.
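The slow implementation being timed isn't shown in this excerpt, but a textbook direct convolution looks roughly like the sketch below (a minimal NumPy version, assuming a single image with channels-first layout and no padding or stride). The four nested Python loops are exactly why a naive version is orders of magnitude slower than a library kernel:

```python
import numpy as np

def naive_conv2d(x, w):
    """Direct convolution: x is (C_in, H, W), w is (C_out, C_in, K, K)."""
    c_out, c_in, k, _ = w.shape
    _, h, wd = x.shape
    out_h, out_w = h - k + 1, wd - k + 1
    out = np.zeros((c_out, out_h, out_w))
    for co in range(c_out):              # each output channel
        for i in range(out_h):           # each output row
            for j in range(out_w):       # each output column
                for ci in range(c_in):   # accumulate over input channels
                    out[co, i, j] += np.sum(x[ci, i:i + k, j:j + k] * w[co, ci])
    return out
```

Optimized libraries get their speed by restructuring this same arithmetic — im2col + GEMM, vectorization, cache-friendly tiling — rather than by computing anything different.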

Continue reading “Anatomy of a High-Performance Convolution”

Building an Image Recognition Model for Mobile using Depthwise Convolutions


Deep Learning algorithms are excellent at solving very complex problems, including Image Recognition, Object Detection, Language Translation, Speech Recognition and Synthesis, and many more applications, such as Generative Models.

However, deep learning is extremely compute intensive—it’s generally only viable through acceleration by powerful general-purpose GPUs, especially from Nvidia.
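Depthwise separable convolutions — the technique in this post's title — attack that compute cost directly. A quick back-of-the-envelope comparison (plain Python arithmetic, using an illustrative 3×3 layer with 128 input and 128 output channels) shows where the savings come from:

```python
def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # Depthwise: one k x k filter per input channel,
    # then pointwise: one 1 x 1 x c_in filter per output channel.
    return k * k * c_in + c_in * c_out

std = conv_params(3, 128, 128)                  # 147456
dws = depthwise_separable_params(3, 128, 128)   # 1152 + 16384 = 17536
print(std, dws, round(std / dws, 1))            # roughly 8x fewer parameters
```

The multiply-accumulate count shrinks by a similar factor, which is what makes architectures like MobileNet practical on phones.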

Continue reading “Building an Image Recognition Model for Mobile using Depthwise Convolutions”

Convolutional Neural Networks (CNNs): Core Concepts Applied


In this tutorial, we’ll work through the core concepts of convolutional neural networks (CNNs). To do this, we’ll use a common dataset — the MNIST dataset — and a standard deep learning task — image classification.

The goal here is to walk through an example that will illustrate the processes involved in building a convolutional neural network. The skills you will learn here can easily be transferred to a separate dataset.
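The processing stages a CNN applies — convolution, non-linearity, pooling, then a dense classification head — can be sketched end-to-end in a few lines (a minimal NumPy illustration with random weights on one MNIST-sized 28×28 image; the tutorial itself presumably trains real weights with a deep learning framework):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def conv2d(x, w):
    """Single-channel valid convolution: x is (H, W), w is (K, K)."""
    k = w.shape[0]
    h, wd = x.shape
    out = np.empty((h - k + 1, wd - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def max_pool2(x):
    """2x2 max pooling with stride 2."""
    h, w = x.shape
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = rng.random((28, 28))             # stand-in for one MNIST digit
kernel = rng.standard_normal((3, 3))     # one learned filter
feat = max_pool2(relu(conv2d(image, kernel)))   # (13, 13) feature map
flat = feat.reshape(-1)                          # flatten to a vector
w_dense = rng.standard_normal((10, flat.size))   # dense head, 10 digit classes
logits = w_dense @ flat                          # one score per class
```

Training then amounts to adjusting `kernel` and `w_dense` so the largest logit matches the true digit.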

Continue reading “Convolutional Neural Networks (CNNs): Core Concepts Applied”

Exploring SnapML: Working with Custom Neural Networks in Lens Studio


The promise of being able to drop your own custom neural networks into Lens Studio as Lenses, which can then be deployed to millions of devices, is potentially game-changing.

But working with a tool this powerful and versatile inevitably involves some nuances you’ll need to consider while building.

While working through a demo project (stay tuned for a tutorial soon), I was able to identify some key areas where working with custom ML models required some tweaking, fine-tuning, and adaptation.

Continue reading “Exploring SnapML: Working with Custom Neural Networks in Lens Studio”

“Just Point It”: Machine Learning on iOS with Pose Estimation + OCR Using Core ML and ML Kit


Imagine you have to read a document that’s very dense and has numerous words you don’t know the meanings of. What would you do?

The answer seems obvious—get out your phone, open a search engine or online dictionary, and search for the word’s meaning.

What if, instead of typing, you could find out all you needed to know about a word, displayed on your smartphone, just by pointing at it in the document?

Continue reading ““Just Point It”: Machine Learning on iOS with Pose Estimation + OCR Using Core ML and ML Kit”

FastAI Sentiment Analysis


Sentiment analysis refers to the use of natural language processing, text analysis, computational linguistics, and other techniques to identify and quantify the sentiment (i.e. positive, neutral, or negative) of text or audio data.

Because it’s really hard for a model to learn language when given only a single label per example — the sentiment — FastAI lets you first train a language model (a model that predicts the next word) and then reuse that encoder in the model that actually classifies the sentiment.
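The two-stage setup can be illustrated with a toy shared encoder (a minimal NumPy sketch of the weight-sharing idea only — fastai's real encoder is an AWD-LSTM, and the hand-off happens via `learn.save_encoder(...)` / `load_encoder(...)`):

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB, CLASSES = 100, 16, 3

# Shared encoder: here just an embedding table plus mean pooling.
embedding = rng.standard_normal((VOCAB, EMB))

def encode(token_ids):
    return embedding[token_ids].mean(axis=0)   # (EMB,) sentence vector

# Stage 1: the language-model head predicts the next token over the vocab,
# which forces the encoder to learn useful representations of language.
w_lm = rng.standard_normal((VOCAB, EMB))
def lm_logits(token_ids):
    return w_lm @ encode(token_ids)            # (VOCAB,) next-word scores

# Stage 2: a fresh classifier head reuses the SAME encoder weights,
# so the sentiment model starts from a network that already "knows" language.
w_clf = rng.standard_normal((CLASSES, EMB))
def sentiment_logits(token_ids):
    return w_clf @ encode(token_ids)           # (CLASSES,) sentiment scores

tokens = np.array([4, 8, 15, 16])
```

Only the small head changes between tasks; the expensive language knowledge lives in the shared encoder.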

Continue reading “FastAI Sentiment Analysis”

Leveraging AI with Location Data in Mobile Apps


Smartphones are ideal devices for machine learning because of the sheer number of sensors they have. Combining data from multiple sensors at the same time allows developers to make quicker, more accurate predictions inside their apps.

Today, almost every smartphone comes with location sensors that provide a user’s geolocation with high accuracy.

Continue reading “Leveraging AI with Location Data in Mobile Apps”

Reverse Engineering Core ML


Machine learning models are often black boxes to end users. Without access to the underlying architecture and parameters, a model is nearly impossible to reconstruct from its inputs and outputs alone.

Hosting a model in the cloud effectively prevents access to these underlying structures. Without breaching the hosting servers, an attacker has no access to the model: they can’t look at the layers, get the trained weights, or even see the framework it’s running on.

Continue reading “Reverse Engineering Core ML”

Snapchat Lens Creator Spotlight: Alie Jackson


Alie Jackson has pretty much done everything. An extremely talented (and awarded) multimedia artist, she’s worked with everyone from Nike to Disney to Sony Pictures.

The Most Artistic Lens Creator of Lens Fest 2019, her work, a blend of AR and traditional media, is downright cool. Insightful, funny, bright, engaging, beautiful — I don’t have enough adjectives to describe Jackson’s style and work. You’ll just have to see for yourself.

Continue reading “Snapchat Lens Creator Spotlight: Alie Jackson”
