Hands-on with Feature Selection Techniques: Advanced Methods

Articles

This article is the final part of our series covering techniques for feature selection in machine learning models. Since this is the end, I’d recommend circling back and checking out the rest of the articles in the series.

This post covers several advanced techniques for feature selection.

Dimensionality reduction isn’t quite the same as feature selection, even though both aim to reduce the number of features. Feature selection keeps some features and discards others without transforming them, while dimensionality reduction transforms the features into a new, lower-dimensional space.
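
To make the distinction concrete, here’s a minimal sketch using scikit-learn (an illustration, not the article’s own example): SelectKBest keeps two of the original columns untouched, while PCA replaces all the columns with two new components.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)          # 150 samples, 4 original features

# Feature selection: the 2 retained columns are original features, untransformed.
X_selected = SelectKBest(score_func=f_classif, k=2).fit_transform(X, y)

# Dimensionality reduction: the 2 resulting columns are new linear combinations
# of all 4 original features.
X_reduced = PCA(n_components=2).fit_transform(X)

print(X_selected.shape, X_reduced.shape)   # (150, 2) (150, 2)
```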

Continue reading Hands-on with Feature Selection Techniques: Advanced Methods

How TensorFlow Lite Optimizes Neural Networks for Mobile Machine Learning

Articles

The steady rise of mobile Internet traffic has provoked a parallel increase in demand for on-device intelligence capabilities. However, the inherent scarcity of resources at the edge means that satisfying this demand will require creative solutions to old problems. How do you run computationally expensive operations on a device with limited processing capability without it turning into magma in your hand?
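
One common approach is shrinking the model before it ever reaches the device. As a taste, here’s a minimal sketch of post-training quantization with the TensorFlow Lite converter; the small Keras model here is just a hypothetical stand-in for whatever you’ve trained.

```python
import tensorflow as tf

# Hypothetical stand-in for a trained tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1),
])

# Convert to TensorFlow Lite with default post-training optimizations
# (weight quantization), shrinking the model for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```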

Continue reading How TensorFlow Lite Optimizes Neural Networks for Mobile Machine Learning

Linear Regression using TensorFlow 2.0

Articles

Are you looking for one of the most popular and widely-used deep learning libraries in the world? Do you want to use GPUs and highly parallel computation to train your machine learning models? Then look no further than TensorFlow.

Created by the team at Google, TensorFlow is an open source library for numerical computation and machine learning. It’s undoubtedly one of the most popular deep learning libraries, and Google recently released the full version of TensorFlow 2.0.
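
As a quick taste of the topic, here’s a minimal sketch of linear regression with tf.keras in TensorFlow 2.0, fit on synthetic data rather than any real dataset: a single Dense unit with no activation is exactly the linear model y = Wx + b.

```python
import numpy as np
import tensorflow as tf

# Synthetic data: y = 3x + 2 plus a little noise (illustrative only).
X = np.random.rand(200, 1).astype("float32")
y = 3.0 * X + 2.0 + 0.1 * np.random.randn(200, 1).astype("float32")

# A single Dense unit with no activation is a linear model y = Wx + b.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(1,))])
model.compile(optimizer="sgd", loss="mse")
model.fit(X, y, epochs=100, verbose=0)

# The learned weight and bias should approach 3.0 and 2.0.
print(model.layers[0].get_weights())
```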

Continue reading Linear Regression using TensorFlow 2.0

Deep Learning with PyTorch: An Introduction

Articles

In this tutorial, you’ll get an introduction to deep learning using the PyTorch framework, and by its conclusion, you’ll be comfortable applying it to your own deep learning models. Facebook launched PyTorch 1.0 early this year with integrations for Google Cloud, AWS, and Azure Machine Learning. I assume you’re already familiar with Scikit-learn, Pandas, NumPy, and SciPy, as these packages are important prerequisites.
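
To give a feel for the framework before diving in, here’s a minimal sketch of a single training step in PyTorch: a tiny network, a loss, a backward pass, and an optimizer update, all on random stand-in data.

```python
import torch
import torch.nn as nn

# A tiny fully connected network; the shapes are arbitrary illustrations.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 1),
)

criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One training step on random data: forward pass, loss, backward pass, update.
inputs = torch.randn(64, 10)
targets = torch.randn(64, 1)

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
optimizer.step()

print(loss.item())
```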

Continue reading Deep Learning with PyTorch: An Introduction

Machine Learning at the Edge — μML

Articles

The definition of an edge device can vary greatly from application to application, and it includes devices ranging from smartwatches to self-driving cars and everything in between. Currently, the most numerous network-connected edge devices are likely smartphones.

There is also a growing number of devices with small MCUs (microcontrollers) that aren’t connected to any network, which can be used for applications like an intelligent sprinkler system for a home garden.

Continue reading Machine Learning at the Edge — μML

5 TensorFlow techniques to eliminate overfitting in DNNs

Articles

Deep neural networks (DNNs) can have tens of thousands of parameters, and in some cases, even millions. This enormous number of parameters gives the network a great deal of freedom and the flexibility to fit highly complex data.

This flexibility is only a good thing up to a certain point. Once that point is crossed, we start talking about overfitting.
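
As a taste of what such techniques look like in practice, here’s a minimal tf.keras sketch combining dropout, L2 weight regularization, and early stopping; the five techniques covered in the full post may differ.

```python
import tensorflow as tf

# Dropout randomly zeroes units during training; the L2 penalty discourages
# large weights. Both limit how freely the network can fit noise.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(
        128, activation="relu", input_shape=(20,),
        kernel_regularizer=tf.keras.regularizers.l2(1e-4)),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Early stopping is another option: halt training when validation loss stalls.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=3)
# Hypothetical training call:
# model.fit(x_train, y_train, validation_split=0.2, callbacks=[early_stop])
```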

Continue reading 5 TensorFlow techniques to eliminate overfitting in DNNs