When we learn something in our daily lives, similar things become very easy to learn because we apply our existing knowledge to the new task. For example: when I learned how to ride a bicycle, it became very easy to learn how to ride a motorcycle, because from riding the bicycle I already knew I had to sit and maintain balance, hold the handles firmly, and pedal to accelerate. Using that prior knowledge, I could easily adapt to a motorcycle's design and how it is driven. And that is the general idea behind transfer learning.
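To make the idea concrete, here is a minimal transfer learning sketch in Keras, with a MobileNetV2 base pretrained on ImageNet standing in for the "bicycle" knowledge and a small new head for the new task; the base model choice and the 10-class head are assumptions for illustration, not part of the article.

```python
# A minimal transfer-learning sketch (assumed setup: Keras with a MobileNetV2
# base pretrained on ImageNet; the 10-class head is purely illustrative).
import tensorflow as tf

# Reuse "prior knowledge": a convolutional base already trained on ImageNet.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the existing knowledge

# Add a small new head for the new task (a hypothetical 10-class problem).
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(new_task_images, new_task_labels, epochs=5)  # trains only the new head
```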
Author: Fritz
Articles Fritz has written:
Understanding the Mathematics behind Principal Component Analysis
In this post, we're going to learn the foundations of a very famous and interesting dimensionality reduction technique known as principal component analysis (PCA).
Specifically, we're going to learn what principal components are, how data is concentrated within them, and how their orthogonality properties make extracting important information easier.
In other words, principal component analysis (PCA) is a procedure for reducing the dimensionality of the variable space by representing it with a few orthogonal (uncorrelated) variables that capture most of its variability.
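As a rough illustration of that definition, here is a minimal NumPy sketch of PCA via the eigendecomposition of the covariance matrix; the toy data shape and the two-component choice are assumptions for the example, not taken from the article.

```python
# Minimal PCA sketch (assumed: NumPy only; keeping 2 components is illustrative).
import numpy as np

def pca(X, n_components=2):
    # Center the data so the covariance matrix captures variability around the mean.
    X_centered = X - X.mean(axis=0)
    # Covariance matrix of the variables (features in columns).
    cov = np.cov(X_centered, rowvar=False)
    # Eigenvectors of a symmetric covariance matrix are orthogonal (uncorrelated directions).
    eigenvalues, eigenvectors = np.linalg.eigh(cov)
    # Sort directions by how much variability they capture, largest first.
    order = np.argsort(eigenvalues)[::-1]
    components = eigenvectors[:, order[:n_components]]
    # Project the data onto the top principal components.
    return X_centered @ components

X = np.random.rand(100, 5)          # toy data: 100 samples, 5 variables
X_reduced = pca(X, n_components=2)  # reduced to 2 orthogonal variables
```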
Continue reading Understanding the Mathematics behind Principal Component Analysis
Understanding the Mathematics Behind Naive Bayes
In this post, we're going to dive deep into one of the most popular and simple machine learning classification algorithms: the Naive Bayes algorithm, which is based on Bayes' Theorem for calculating probabilities and conditional probabilities.
Before we jump into the Naive Bayes classifier/algorithm, we need to know the fundamentals of Bayes' Theorem, on which it's based.
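For reference, Bayes' Theorem can be written as follows; the class/feature notation here is the standard one rather than anything quoted from the article.

```latex
% Bayes' Theorem for a class C given evidence (features) x:
P(C \mid x) = \frac{P(x \mid C)\, P(C)}{P(x)}

% The "naive" assumption: features x_1, ..., x_n are conditionally
% independent given the class, so the likelihood factorizes:
P(x_1, \ldots, x_n \mid C) = \prod_{i=1}^{n} P(x_i \mid C)
```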
Continue reading Understanding the Mathematics Behind Naive Bayes
Step-by-Step Use of Google Colab’s Free TPU
In 2015, Google established its first TPU center to power products like Google Calls, Translation, Photos, and Gmail. To make this technology accessible to all data scientists and developers, they soon after released the Cloud TPU, meant to provide an easy-to-use, scalable, and powerful cloud-based processing unit to run cutting-edge models on the cloud.
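As a quick taste of what the step-by-step guide covers, connecting to Colab's free TPU from TensorFlow typically looks something like the sketch below; this assumes the TensorFlow 2.x distribution API and is not code taken from the article.

```python
# Sketch of attaching to Colab's free TPU (assumed TensorFlow 2.x API).
import tensorflow as tf

# In Colab, the TPU address is provided automatically once the TPU runtime is selected.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Models built inside this strategy's scope are replicated across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
print("Number of TPU replicas:", strategy.num_replicas_in_sync)
```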
Continue reading Step-by-Step Use of Google Colab’s Free TPU
Image Classification on Android using OpenCV
This tutorial uses the popular computer vision library OpenCV for building an image classifier that runs on Android devices.
The overall process looks like this: First, the color histogram of the hue channel from the HSV color space is extracted from the image dataset. Next, an artificial neural network (ANN) is built and trained on these features and then saved for later use in an Android app. An Android Studio project is then created, which imports the Android release of OpenCV. Once OpenCV is imported successfully, the saved, trained ANN is loaded to make predictions.
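The Python side of that pipeline (hue-histogram features plus an OpenCV ANN) might look roughly like the sketch below; the layer sizes, class count, and file names are assumptions for illustration rather than values from the tutorial.

```python
# Rough sketch of the feature-extraction and training steps (assumed: OpenCV's
# cv2.ml.ANN_MLP module; layer sizes, class count, and file names are illustrative).
import cv2
import numpy as np

def hue_histogram(image_path, bins=180):
    # Color histogram of the hue channel from the HSV color space.
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [bins], [0, 180])
    return cv2.normalize(hist, hist).flatten()

# In a real run: features = np.array([hue_histogram(p) for p in image_paths], dtype=np.float32)
# Toy stand-in data so the sketch is self-contained (20 samples, 4 classes).
features = np.random.rand(20, 180).astype(np.float32)
labels = np.eye(4, dtype=np.float32)[np.random.randint(0, 4, 20)]  # one-hot targets

# Build and train a simple ANN, then save it for the Android app to load later.
ann = cv2.ml.ANN_MLP_create()
ann.setLayerSizes(np.array([180, 60, 4], dtype=np.int32))  # 180 hue bins in, 4 classes out
ann.setActivationFunction(cv2.ml.ANN_MLP_SIGMOID_SYM)
ann.setTrainMethod(cv2.ml.ANN_MLP_BACKPROP)
ann.train(features, cv2.ml.ROW_SAMPLE, labels)
ann.save("ann_model.yml")  # the Android app loads this file to make predictions
```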
Continue reading Image Classification on Android using OpenCV
Training a Tiny Pix2Pix GAN for Snapchat
Cat Face is the latest SnapML-powered Lens built by the team at Fritz AI. The Lens lets users draw photo-realistic cats and wear the result like a mask. If you haven’t already, try it out for yourself here:
Training a Multi-Label Image Classification Model with Google Cloud AutoML
Following up from my earlier blog posts on training and using TensorFlow models on the edge in Python, in this eighth post in the series, I’ll be talking about how to train a multi-label image classification model that can be used with TensorFlow.
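The key difference from single-label classification is the output setup: one independent sigmoid per label with a binary cross-entropy loss. The sketch below shows that setup in plain Keras as an illustrative stand-in; the label count and layers are assumptions, and the article itself trains the model with Google Cloud AutoML rather than hand-written Keras code.

```python
# Minimal multi-label classification sketch (illustrative stand-in, not the
# AutoML workflow from the article; label count and layers are assumed).
import tensorflow as tf

num_labels = 5  # assumed number of labels, for illustration

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    # Multi-label: independent sigmoid per label instead of a single softmax,
    # so an image can belong to several labels at once.
    tf.keras.layers.Dense(num_labels, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```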
Continue reading Training a Multi-Label Image Classification Model with Google Cloud AutoML
These are The News: A curated list of AI & machine learning newsletters
From helping doctors make faster and more reliable diagnoses, to powering new frontiers in space exploration, AI has made a huge impact across industries, and we witness it every day.
Continue reading These are The News: A curated list of AI & machine learning newsletters
TensorFlow MLIR: An Introduction
Currently, different domains of machine learning software and hardware have different compiler infrastructures, and this dynamic poses a number of challenges.
MLIR seeks to address this software fragmentation by building a reusable and extensible compiler infrastructure. In this piece, we’ll look at a conceptual view of MLIR.
MLIR seeks to promote the design and implementation of code generators, optimizers, and translators at various levels of abstraction and across different application domains. The need for MLIR arose from the realization that modern machine learning frameworks have different runtimes, compilers, and graph technologies. For example, TensorFlow itself has different compilers for different frameworks.
Swift 5: Memory Management
It would be great if we as devs got to play with limitless memory and never had to care about using it judiciously. Unfortunately, that isn’t the case, and hence we have to behave like renters to the OS: rent some memory for a while, use it, and then hand it back.
Swift is a smart language, and it knows that many devs don’t like handing memory back to the environment; hence, it keeps track of the allocated memory using a mechanism called ARC (Automatic Reference Counting).