Data augmentation is the process of creating new data points by transforming the original data. For images, for example, this can be done by rotating, resizing, or cropping them.
This process increases the diversity of the data available for training deep learning models without requiring the collection of new data, which generally improves model performance.
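As a quick illustration, here's a minimal sketch of image augmentation using plain NumPy array operations (the `augment` helper is hypothetical; real pipelines typically use a library such as `tf.image` or `torchvision.transforms`):

```python
import numpy as np

def augment(image):
    """Generate simple augmented variants of an image array of shape (H, W, C)."""
    return {
        "rotated": np.rot90(image),       # 90-degree rotation
        "flipped": np.fliplr(image),      # horizontal flip
        "cropped": image[8:-8, 8:-8],     # center crop: drop an 8-pixel border
    }

# A dummy 64x64 RGB "image" stands in for real training data.
image = np.arange(64 * 64 * 3).reshape(64, 64, 3).astype(np.uint8)
variants = augment(image)
print(variants["rotated"].shape)  # (64, 64, 3)
print(variants["cropped"].shape)  # (48, 48, 3)
```

Each variant is a new training example derived from the same source image, which is exactly how augmentation multiplies a dataset without new collection.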
Continue reading “Research Guide: Data Augmentation for Deep Learning”
Federated learning, transfer learning, and model personalization
For a healthcare research project I’m working on, I’m investigating federated learning platforms that support mobile and wearable devices, in particular within the Apple ecosystem.
Federated learning represents a tremendous opportunity for the adoption of machine learning in many use cases, especially where efficiency and privacy concerns require us to distribute the training process instead of centrally collecting data in the cloud and applying traditional ML pipelines.
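The core idea behind distributing training this way is federated averaging (FedAvg): each device trains on its own data, and only model updates, never raw data, are combined on the server. A minimal sketch, assuming single-tensor models and a hypothetical `federated_average` helper:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: combine client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three clients trained locally; only their weight tensors leave the device.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]  # number of local training examples per client
global_weights = federated_average(clients, sizes)
print(global_weights)  # [3.5 4.5]
```

The weighting by dataset size means clients with more local data pull the global model further toward their solution, while the raw examples stay on-device.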
Continue reading “Swift loves TensorFlow and Core ML”
In a previous article, we reviewed some of the pre-eminent literature on pruning neural networks. We learned that pruning is a model optimization technique that involves eliminating unnecessary values in the weight tensor. This results in smaller models with accuracy very close to the baseline model.
In this article, we’ll work through an example as we apply pruning and view the effect on the final model size and prediction errors.
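The article itself works in TensorFlow, but the underlying idea, zeroing out the smallest-magnitude values in a weight tensor, is easy to sketch with plain NumPy (the `magnitude_prune` helper below is hypothetical, not the toolkit's API):

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of a weight tensor."""
    k = int(weights.size * sparsity)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest absolute value across the flattened tensor.
    threshold = np.sort(np.abs(weights), axis=None)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
pruned = magnitude_prune(w, sparsity=0.5)
print(int(np.sum(pruned == 0)))  # 8 of the 16 weights removed
```

Because the zeroed weights contributed little to the output, accuracy typically stays close to the baseline, while the sparse tensor compresses far better on disk.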
Continue reading “Pruning Machine Learning Models in TensorFlow”
Chatbots are a thing. Yes, they may seem gimmicky on the surface. But a lot has changed since the days of SmarterChild. Dig a little deeper and you’ll find there’s more to them than meets the eye.
First, chatbots are, essentially, gateways to voice apps. Whether you use Alexa, Siri, Google Assistant, or Cortana, building apps with a conversational interface is a new norm. Second, chatbots can provide a first line of support that users can feel comfortable interacting with.
Continue reading “Microsoft Azure’s QnA Maker: Making FAQs a bit more chatty”
When it comes to machine learning, Python has been dominant. However, we can already foresee limits to how far Python can scale with modern ML demands.
Google seems to have recognized this as it considers the future of ML (especially through the lens of TensorFlow). They realize that, rather than overhauling Python, a more modern and adaptable language could change the game.
Continue reading “Swifty ML: An Intro to Swift for TensorFlow”
“I am a student of computer science/engineering. How do I get into the field of machine learning/deep learning/AI?”
It’s never been easier to get started with machine learning. In addition to structured MOOCs, there is also a huge number of incredible, free resources available around the web.
Continue reading “Some Essential Hacks and Tricks for Machine Learning with Python”
I was working on a personal project on tumor detection and recognition and decided to use some image segmentation tools and techniques to refine my recognition model.
As I was working on this project, I found myself wondering—is it possible to do this on mobile? The answer is YES, and so I decided to give it a shot.
Continue reading “Simple Semantic Image Segmentation in an iOS Application — DeepLabV3 Implementation”
With over 2 billion active Android devices and over 1 billion active iOS users, the mobile market provides the most engaging and profitable market to build and sell new digital solutions. There are fewer than 4 million unique applications for each of these operating systems, with most of them performing the same or related functions.
However, the arrival of cloud-based and device-based artificial intelligence tools provides a unique opportunity to recreate the mobile experience for existing apps, as well as build entirely new mobile apps that can only be possible through the use of AI-powered tools.
Continue reading “5 App Ideas to Unleash the Power of Mobile Machine Learning”
In machine learning, overfitting occurs when a model learns the training data so thoroughly that it effectively memorizes it, losing the ability to predict correctly when given unseen data.
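You can see this memorization effect with a toy example (my own illustration, not from the article): a k-nearest-neighbors regressor where the true signal is flat and the observations are pure noise. With k=1 the model memorizes every noisy point perfectly; with k=10 it averages the noise away.

```python
import numpy as np

# The "true" signal is flat (y = 0); the training labels are pure noise around it.
x_train = np.arange(10, dtype=float)
y_train = np.array([0.5, -0.4, 0.3, -0.6, 0.2, -0.3, 0.4, -0.5, 0.6, -0.2])
x_test = x_train + 0.3   # unseen inputs
y_test = np.zeros(10)    # true, noise-free values

def knn_predict(x_query, k):
    """Average the y-values of the k nearest training points."""
    return np.array([
        y_train[np.argsort(np.abs(x_train - xq))[:k]].mean() for xq in x_query
    ])

for k in (1, 10):
    train_mse = np.mean((knn_predict(x_train, k) - y_train) ** 2)
    test_mse = np.mean((knn_predict(x_test, k) - y_test) ** 2)
    print(f"k={k}: train MSE {train_mse:.2f}, test MSE {test_mse:.2f}")
# k=1 memorizes:  train MSE 0.00, test MSE 0.18
# k=10 smooths:   train MSE 0.18, test MSE 0.00
```

Zero training error with high test error is the signature of overfitting; the smoother model trades training accuracy for generalization, which is the bias-variance tradeoff in miniature.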
Continue reading “Bias-Variance Tradeoff to Avoid Under/Overfitting”
In our last newsletter roundup, we looked at digests for mobile development, and we covered it all: iOS, Android, React Native, and cross-platform. Again, my sincerest thanks to Heartbeat All-Star Zain Sajjad, whose help in putting these lists together has been invaluable.
For the final installment of our newsletter journey, let’s take a look at some of the premier newsletters for data science and analytics.
Continue reading “Newsletters for Data Science & Analytics”