Today, we’re excited to launch Fritz Hair Segmentation, giving developers and users the ability to alter their hair with different colors, designs, or images.
Try it out for yourself on Android. You can download our demo app on the Google Play Store to play around with hair coloring.
In the last piece in this series on developing with Flutter, we looked at how we can implement image labeling using ML Kit, which belongs to the Firebase family.
In this 7th installment of the series, we’ll keep working with ML Kit, this time focusing on implementing face detection. The application we build will be able to detect human faces in an image, like so:
Continue reading Face Detection in Flutter Using Firebase’s ML Kit
With recent advancements in deep learning and artificial intelligence, machines can now do increasingly complicated things. These tasks can involve images, video, audio, or other complex data. Today, we have a massive amount of data, and we also have adequate infrastructure to process that data and make use of it.
Nowadays, there are cell phone applications that predict your age. But have you ever wondered how these apps can tell your age? This is where deep learning comes in: the model detects your face and passes the face data through a deep learning classifier that returns your approximate age.
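One common way such a classifier produces an "approximate age" is to predict probabilities over age buckets and take the probability-weighted mean. The sketch below illustrates only that final step; the bucket centers and softmax probabilities are illustrative placeholders, not outputs of any real model.

```python
import numpy as np

# Illustrative age-bucket centers and a hypothetical softmax output
# from a face classifier (placeholder values, not a trained model).
age_buckets = np.array([5, 15, 25, 35, 45, 55, 65, 75])
probs = np.array([0.01, 0.04, 0.30, 0.40, 0.15, 0.06, 0.03, 0.01])

# Approximate age = expectation over the predicted distribution.
expected_age = float(np.dot(age_buckets, probs))
print(round(expected_age, 1))
```

Treating age as an expectation over class probabilities tends to be more robust than simply picking the single most likely bucket.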
Continue reading Designing an Age Classification Model with Deep Learning
Nowadays, lots of mobile applications like Office Lens and Genius Scan can turn a physical document into a digital one using only a phone—you no longer need a bulky, costly scanner.
Continue reading Comparing Apple’s and Google’s on-device OCR technologies
As most ML practitioners realize, developing a predictive model in a Jupyter Notebook and making predictions on Excel data won't get you to models that work at enterprise scale. To build models at that scale, you'll need to consider several requirements and use tools and frameworks specifically designed for the purpose.
Continue reading Enterprise Scale ML Jumpstart Kit — FastAI + RabbitMQ + Docker
The objective of data science projects is to make sense of data for people who are interested only in the insights it contains. A data scientist or machine learning engineer follows multiple steps to deliver these results. Data pre-processing (cleaning, formatting, scaling, and normalization) and data visualization through different plots are two very important steps that help build more accurate machine learning models.
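Two of the scaling steps mentioned above can be sketched in a few lines of NumPy. The toy feature column below is an illustrative placeholder; min-max scaling maps values to [0, 1], while standardization gives them zero mean and unit variance.

```python
import numpy as np

# A toy numeric feature column (placeholder data).
x = np.array([2.0, 4.0, 6.0, 8.0])

# Min-max scaling: rescale to the [0, 1] range.
x_minmax = (x - x.min()) / (x.max() - x.min())

# Standardization: zero mean, unit variance.
x_std = (x - x.mean()) / x.std()

print(x_minmax)
print(x_std.mean(), x_std.std())
```

In practice a library such as scikit-learn is typically used for this, so that the same parameters fitted on the training set can be reapplied to new data.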
Continue reading Data Pre-processing and Visualization for Machine Learning Models
Segmentation is a process that partitions an image into regions. It is an image processing approach that allows us to separate objects and textures in images. Segmentation is especially preferred in applications such as remote sensing or tumor detection in biomedicine.
There are many traditional ways of doing this—for example, point, line, and edge detection methods, thresholding, region-based and pixel-based clustering, and morphological approaches. Various methods have also been developed for segmentation with convolutional neural networks (a common deep learning architecture), which have become indispensable in tackling more advanced image segmentation challenges. In this post, we'll take a closer look at one such architecture: U-Net.
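The simplest of the traditional approaches mentioned above—global thresholding—can be sketched in a few lines. The tiny 4×4 "image" below is an illustrative placeholder; every pixel brighter than the threshold is labeled foreground.

```python
import numpy as np

# A toy grayscale image (placeholder values in 0-255).
image = np.array([
    [ 10,  12, 200, 210],
    [ 11,  13, 205, 208],
    [  9, 220, 215,  14],
    [  8, 225, 218,  12],
], dtype=np.uint8)

# Global thresholding: a binary foreground/background mask.
threshold = 128
mask = image > threshold

print(int(mask.sum()))  # count of foreground pixels
```

Architectures like U-Net replace this single global rule with a learned per-pixel classification, which is what makes them effective on images where no one threshold separates the regions of interest.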
Continue reading Deep Learning for Image Segmentation: U-Net Architecture
Following up on my last blog post on training an image labeling model using Google Cloud AutoML (linked below), in this second blog post in the series, we'll look into how to train another model—one that identifies and locates objects within an image: an object detection model!
If you haven’t read my blog on image labeling, you can read it here:
Continue reading Creating a TensorFlow Lite Object Detection Model using Google Cloud AutoML
In 2015, Snapchat, the incredibly popular social content platform, added Lenses to their mobile app—augmented reality (AR) filters that give you big strange teeth, turn your face into an alpaca, or trigger digital brand-based experiences.
In addition to AR, the other core underlying technology in Lenses is mobile machine learning — neural networks running on-device that do things like create a precise map of your face or separate an image/video’s background from its foreground.
Continue reading Creating a Style Transfer Snapchat Lens with Fritz AI and SnapML in Lens Studio