Knowledge distillation is a model compression technique in which a small network (the student) is trained to mimic the behavior of a larger, already-trained network (the teacher). This makes it possible to deploy such models on resource-constrained devices like mobile phones and other edge hardware. In this guide, we’ll look at a couple of papers that attempt to tackle this challenge.
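To make the idea concrete, here is a minimal NumPy sketch of the classic distillation loss: the teacher's logits are softened with a temperature and the student is penalized for diverging from that softened distribution. The function names and the temperature value are our own illustrative choices, not taken from the papers discussed below.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Softmax with a temperature; higher T produces a softer distribution."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between softened teacher and student predictions.

    The T**2 factor keeps gradient magnitudes comparable across
    temperatures, following Hinton et al.'s distillation formulation.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    kl = np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student)), axis=-1)
    return (temperature ** 2) * kl.mean()
```

In practice this soft-target loss is usually combined with the ordinary cross-entropy on the true labels, weighted by a mixing coefficient.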
One question app developers must ask themselves is: What am I going to do when users report that an app feels laggy? The answer isn’t always immediately clear, but most of the time the cause is CPU-intensive work blocking the main thread, and in other cases these performance issues are related to memory.
Tracking is an important problem in the domain of computer vision. It involves following an object through a sequence of frames: an ID is assigned to the object the first time it appears, and that ID is then carried forward in subsequent frames.
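A toy sketch of the ID-assignment step described above, using greedy intersection-over-union (IoU) matching between detections in consecutive frames. This is our own minimal illustration, not a production tracker (real systems use motion models and more robust assignment, e.g. the Hungarian algorithm).

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

class IoUTracker:
    """Greedily match each detection to the best-overlapping previous track."""
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}   # track ID -> last known box
        self.next_id = 0

    def update(self, detections):
        assigned = {}
        unmatched = dict(self.tracks)      # tracks not yet claimed this frame
        for box in detections:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unmatched.items():
                overlap = iou(box, prev)
                if overlap > best_iou:
                    best_id, best_iou = tid, overlap
            if best_id is None:            # no overlap: a new object, new ID
                best_id = self.next_id
                self.next_id += 1
            else:
                del unmatched[best_id]
            assigned[best_id] = box
        self.tracks = assigned
        return assigned
```

Calling `update` once per frame returns the same ID for a box that drifts only slightly, and mints a fresh ID for a box that appears far from any known track.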
Editor’s note: This tutorial illustrates how to get started forecasting time series with LSTM models. Stock market data is a great choice for this because it’s quite regular and widely available to everyone. Please don’t take this as financial advice or use it to make any trades of your own.
In this tutorial, we’ll build a Python deep learning model that will predict the future behavior of stock prices. We assume that the reader is familiar with the concepts of deep learning in Python, especially Long Short-Term Memory (LSTM) networks.
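Before an LSTM can see the data, the price series has to be cut into fixed-length lookback windows, each paired with the next value as the target. A minimal sketch of that preprocessing step (the function name and window length are illustrative, not from the tutorial):

```python
import numpy as np

def make_windows(series, lookback):
    """Slice a 1-D series into (samples, lookback) inputs and next-step targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])   # past `lookback` values as input
        y.append(series[i + lookback])     # the value right after the window
    return np.array(X), np.array(y)
```

For a Keras LSTM, `X` would then be reshaped to `(samples, lookback, 1)` to match the layer's expected `(batch, timesteps, features)` input shape.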
In my last blog, I talked about how devs can use Kotlin coroutines to efficiently handle long-running tasks in their apps:
The method outlined there works well while the user is in your app, but as soon as the user exits, the system may kill the app and all the processes it spawned. I faced this issue while working on AfterShoot, where I had to run my machine learning model over all of a given user’s images.
There are various techniques for handling text data in machine learning. In this article, we’ll look at one such technique: working with word embeddings in Keras. For a deeper introduction to Keras, refer to this tutorial:
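At its core, a word embedding layer is just a lookup table: each word index selects a row of a trainable matrix. A minimal NumPy sketch of that idea (the toy corpus, whitespace tokenization, and dimensions are our own illustrative choices):

```python
import numpy as np

# Hypothetical toy corpus; tokenization here is plain whitespace splitting.
corpus = ["the cat sat", "the dog sat"]
vocab = {w: i + 1 for i, w in enumerate(sorted({w for s in corpus for w in s.split()}))}
# index 0 is reserved for padding / unknown words, mirroring Keras' convention

embedding_dim = 4
rng = np.random.default_rng(0)
# Each row is a word vector. Keras' Embedding layer is essentially this
# table, with the rows updated by backpropagation during training.
embeddings = rng.normal(size=(len(vocab) + 1, embedding_dim))

def embed(sentence):
    """Map a sentence to a (num_words, embedding_dim) array of word vectors."""
    ids = [vocab.get(w, 0) for w in sentence.split()]
    return embeddings[ids]
```

In Keras the equivalent would be `Embedding(input_dim=len(vocab) + 1, output_dim=embedding_dim)` as the first layer of the model.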
Image colorization is an engaging topic in the field of image-to-image translation. Even though color photography was invented in 1907, it didn’t become popular for the average person until the 1960s because it was expensive and inaccessible. Until then, nearly all photography and videography was done in black and white. Colorizing these images was impossible—until the DeOldify deep learning model came to life.
When we learn something in our daily lives, similar things become much easier to learn because we can apply our existing knowledge to the new task. For example: once I learned how to ride a bicycle, learning to ride a motorcycle became much easier, because I already knew to sit and maintain balance, hold the handlebars firmly, and pedal to accelerate. Using that prior knowledge, I could easily adapt to a motorcycle’s design and how to ride it. That is the general idea behind transfer learning.
In this post, we’re going to learn the foundations of a very famous and interesting dimensionality reduction technique known as principal component analysis (PCA).
Specifically, we’re going to learn what principal components are, how data is concentrated within them, and how their orthogonality makes extracting the important structure of the data easier.
In other words, PCA is a procedure for reducing the dimensionality of the variable space by representing it with a few orthogonal (uncorrelated) variables that capture most of its variability.
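The procedure just described can be sketched in a few lines of NumPy: center the data, take the eigen-decomposition of its covariance matrix, and project onto the top eigenvectors. This is a minimal illustration of the math, not an optimized implementation (libraries typically use the SVD instead).

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components via eigen-decomposition."""
    X_centered = X - X.mean(axis=0)                # center each variable
    cov = np.cov(X_centered, rowvar=False)         # covariance of the variables
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigh: symmetric matrices
    order = np.argsort(eigvals)[::-1]              # sort by explained variance
    components = eigvecs[:, order[:n_components]]  # orthogonal directions
    return X_centered @ components, eigvals[order]
```

The eigenvalues measure how much of the data's variance each component captures; the variance of the first projected coordinate equals the largest eigenvalue.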
In this post, we’re going to dive deep into one of the most popular and simple machine learning classification algorithms—the Naive Bayes algorithm, which uses Bayes’ Theorem to calculate probabilities and conditional probabilities.
Before we jump into the Naive Bayes classifier/algorithm, we need to know the fundamentals of Bayes’ Theorem, on which it’s based.
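Bayes' Theorem in code is a one-liner: P(A|B) = P(B|A)·P(A) / P(B), with the evidence P(B) expanded over A and not-A. The worked example below (a diagnostic test with made-up sensitivity, false-positive rate, and prevalence figures) is our own illustration:

```python
def posterior(prior, likelihood, likelihood_given_not):
    """Bayes' Theorem: P(A|B) = P(B|A)P(A) / P(B),
    expanding P(B) = P(B|A)P(A) + P(B|not A)P(not A)."""
    evidence = likelihood * prior + likelihood_given_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical example: a test with 95% sensitivity and a 5% false-positive
# rate, for a condition with 1% prevalence.
p = posterior(prior=0.01, likelihood=0.95, likelihood_given_not=0.05)
```

Even with a fairly accurate test, the posterior here is only about 16%, because the low prior dominates—exactly the kind of reasoning the Naive Bayes classifier repeats per feature.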