In this article, we’ll introduce the reader to Generative Adversarial Networks (GANs). We assume the reader has some prior experience with artificial neural networks.
Here’s the plan of attack:
Below is a conversation about Python class basics. To follow along, you may need a basic understanding of Python functions.
Anita: Hi DarkAnita. I’ve been trying to improve my Python skills by using Python classes, but it just looks so much like gibberish to me. Python functions, I totally understand those…but classes? Do you understand them?
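To make the rest of the conversation concrete, here’s a minimal sketch of a plain function next to a class; the `Dog` example and its names are made up purely for illustration and aren’t from the original conversation:

```python
# A plain function: takes input, returns output, holds no state.
def describe(name, sound):
    return f"{name} says {sound}"


# A class bundles data (attributes) and behavior (methods) together.
class Dog:
    def __init__(self, name):
        # self refers to the specific instance being created
        self.name = name

    def speak(self):
        # Methods can reuse the data stored on the instance
        return describe(self.name, "woof")


rex = Dog("Rex")      # create an instance
print(rex.speak())    # -> "Rex says woof"
```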
A good data scientist is not one who knows all the fancy algorithms, but one who knows when they’re overfitting. We’ve all been through that moment when our super awesome, fully tuned model fails to live up to expectations on the Kaggle private leaderboard or after deployment. Knowing how to get an unbiased estimate of our model’s predictive power is important. Different validation strategies, such as holdout and cross-validation, are commonly used in practice for this, but which strategy is appropriate in which scenario is something that needs more discussion and thought.
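To make holdout vs. cross-validation concrete, here’s a minimal scikit-learn sketch; the iris dataset and logistic regression model are placeholder choices for illustration, not taken from the original post:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Holdout: a single train/test split; fast, but the estimate depends
# heavily on which rows happen to land in the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
holdout_score = model.fit(X_train, y_train).score(X_test, y_test)

# Cross-validation: every row is used for testing exactly once,
# which gives a less biased and less variable estimate.
cv_scores = cross_val_score(model, X, y, cv=5)

print(f"Holdout accuracy:   {holdout_score:.3f}")
print(f"5-fold CV accuracy: {cv_scores.mean():.3f} +/- {cv_scores.std():.3f}")
```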
Continue reading Machine Learning Model Evaluation & Selection
In Android application development, we often need to pass data between fragments (e.g., from a listing fragment to a detail fragment) to update values.
Passing data between fragments in Android is very common, and to do this, we’ve traditionally used an interface.
But with the introduction of Android Jetpack, there’s a different way to move data between fragments. In this tutorial, we’ll take a closer look at ViewModel and see how it compares to the interface approach.
Continue reading Passing Data between Fragments on Android Using ViewModel
Machine learning can help us take the best route back home, find a product that matches our needs, or even help us schedule hair salon appointments. If we take an optimistic view, applying machine learning in our projects can make our lives better and even move society forward.
Mobile phones are already a huge part of our lives, and combining them with the power of machine learning is something that, in theory, can create user experiences that delight and impress users. But do we really need to add machine learning to our apps? And if so, what tools and platforms are currently at our disposal? That’s what we’ll talk about in this article.
Continue reading Machine learning on mobile devices: 3 steps for deploying ML in your apps
Augmented reality (AR) is all the rage these days, with AR-based apps becoming better each day. And now, with the release of the new iPad Pro with a LiDAR scanner and the latest version of the ARKit framework (version 3.5), it’s more important than ever to understand what makes this amazing technology tick.
Continue reading How ARKit 3.5 Enables Immersive Augmented Reality Experiences on iOS
We use loss functions to measure how well a given model fits the data it’s trained on. The loss is based on the difference between predicted and actual values: if the predictions are far from the actual values, the loss function produces a very large number.
Keras is an open-source Python library for creating neural networks. It doesn’t perform low-level computation itself; instead, it runs on top of backends like Theano and TensorFlow.
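As a preview of what the tutorial covers, here’s a minimal sketch of a custom loss in Keras; the Huber-style loss and the tiny model below are illustrative assumptions, not the tutorial’s exact code:

```python
import tensorflow as tf
from tensorflow import keras

# A custom loss is just a callable of (y_true, y_pred) that returns
# per-sample loss values as a tensor.
def huber_loss(y_true, y_pred, delta=1.0):
    error = y_true - y_pred
    is_small = tf.abs(error) <= delta
    squared = 0.5 * tf.square(error)                  # quadratic near zero
    linear = delta * (tf.abs(error) - 0.5 * delta)    # linear for large errors
    return tf.where(is_small, squared, linear)

# Pass the custom loss to compile() just like a built-in one.
model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss=huber_loss)
```

Any callable with that `(y_true, y_pred)` signature returning a tensor of losses can be handed to `compile()` this way.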
Continue reading How to create a custom loss function in Keras
If you’ve ever developed an iOS Vision app that processes frames from a video buffer, you know that you need to be careful with your resources. You shouldn’t process every frame, for instance while the user is just moving the camera around.
In order to classify an image with high accuracy, you’ll need to capture a stable scene. This is crucial for apps that use Vision. In this tutorial, I’ll be diving into this problem and the solution Apple suggests.
Continue reading How to Capture the Best Frame in an iOS Image Processing App
Apple’s Core ML is a powerful machine learning framework with an easy-to-use drag-and-drop interface. And the latest iteration, Core ML 3, brought in lots of new layers and gave rise to updatable models.
In React Native apps, support for Scalable Vector Graphics (SVG) is provided by an open-source module called react-native-svg that’s maintained by the larger developer community.
Using SVG can enhance an app’s design when it comes to displaying different patterns. It can make a difference in the app’s look and feel for the end user, and patterns built with SVG are easy to edit. SVG is mainly found on the web, and while it serves similar purposes to the JPEG, PNG, and WebP image types, it is not resolution-dependent. Hence the definition according to Wikipedia:
Continue reading How to create custom wavy headers using react-native-svg and Expo