Just as our physical well-being is important, our mental well-being is essential, too. Many of us don't give enough priority to our own mental health or to that of others. But mental health is a serious issue in today's world, and many people suffer in myriad ways.
Hence, motivation matters. Positive thoughts and surroundings, while not a catch-all, can be one part of improving one's mental perspective.
Since we’re going to work with technology and programming here, we’ll try to offer a solution in our own way by devising an ML-powered “Motivation Bot.” The idea is to use a webcam feed to monitor the emotions of a user and play motivational music when the system detects a sad or otherwise negative moment.
The underlying technology here is a face detection ML model built with the TensorFlow.js library. The guiding idea: don't feed the bad dog, and the good dog will eventually win.
Using face detection and emotion classification with TensorFlow.js, we aim to meet users in their more negative moments by playing them motivational music.
Let’s get started!
Requirements
For a starter boilerplate app, we’re going to use the FrontEnd-EmotionDetection repo created by Kevin Hsiao. For our purposes, we just need to find some music that motivates us and makes us feel good.
This starter template provides two methods for face detection: the Chrome Shape Detection API and face-api.js. In this tutorial, we're going to use the latter: face-api.js.
Overview
Detecting faces
Here, we're going to use face-api.js, a JavaScript API for face detection in the browser (and in Node.js), built on top of TensorFlow.js.
Datasets
We combine two datasets:
- Microsoft FERPlus to train the emotion detection model.
- Real-world Affective Faces Database (RAF-DB) as an additional facial expression dataset.
Convert Model
We’re going to use the TensorFlow.js converter to convert the Keras model to a .json file for loading and running inference in JavaScript.
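As an example, assuming the trained Keras model has been saved as an .h5 file (the paths below are placeholders), the conversion can be run with the tensorflowjs_converter CLI from the tensorflowjs pip package:

```bash
tensorflowjs_converter --input_format keras \
    path/to/emotion_model.h5 \
    ./models/emotion
```

This produces a model.json file plus binary weight shards that tf.loadLayersModel can consume in the browser.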
Code Walkthrough
Next, we’ll work through the code implementation for this project.
First, let's take a look at the folder structure:
- The ./dist directory contains the JavaScript library code.
- The ./models directory contains the model files.
- The ./src directory contains the main JavaScript code for the interface and feature implementation.
Next, we'll walk through how the code in our main index.js file works. The steps involved are listed below, followed by a sketch of what they might look like:
- Loading the models
- Loading the webcam feed
- Starting monitoring
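Here's a rough sketch of those three steps (not the repo's exact code), assuming the face-api.js weights and the converted emotion model both live under ./models, and that index.html contains a <video id="video"> element:

```javascript
let emotionModel;

async function init() {
  // 1. Load the models: the TinyFaceDetector weights for face-api.js and
  //    the converted Keras emotion classifier for TensorFlow.js.
  await faceapi.nets.tinyFaceDetector.loadFromUri('./models');
  emotionModel = await tf.loadLayersModel('./models/emotion/model.json');

  // 2. Load the webcam feed into the <video> element.
  const video = document.getElementById('video');
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });

  // 3. Start monitoring once the video starts playing.
  video.addEventListener('play', () => onPlay(video));
}

init();
```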
Then, we classify emotions into seven states in order to display real-time reactions. The seven emotional states are stored in an array, along with corresponding colors for the bounding box drawn around the detected face:
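As an illustration, the array might look like this; the ordering below follows the common FER label order, with 'sad' at index 4 (as referenced later), and the colors are placeholder hex values:

```javascript
// Seven emotion labels and the bounding-box color used for each.
const emotions = ['angry', 'disgust', 'fear', 'happy', 'sad', 'surprise', 'neutral'];
const colors   = ['#e74c3c', '#8e44ad', '#34495e', '#f1c40f', '#3498db', '#e67e22', '#2ecc71'];
```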
Next, we need to create a function to display an emotion label on the video. The function here is called onPlay, which takes in a video element as a parameter.
In this function, we make use of the faceapi module to get the facial data. The detectAllFaces method, which takes the video element and an instance of faceapi's TinyFaceDetectorOptions as parameters, gives us the face detection result.
The overall coding implementation is provided in the snippet below:
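A minimal sketch of what onPlay might look like is shown below; the TinyFaceDetectorOptions values and the one-second re-run interval are assumptions, and predict is the classification function described next:

```javascript
async function onPlay(videoEl) {
  const options = new faceapi.TinyFaceDetectorOptions({
    inputSize: 224,
    scoreThreshold: 0.5,
  });

  // detectAllFaces returns one detection (with a bounding box) per face
  // found in the current video frame.
  const detections = await faceapi.detectAllFaces(videoEl, options);

  for (const detection of detections) {
    await predict(videoEl, detection.box);
  }

  // Keep monitoring, roughly once per second.
  setTimeout(() => onPlay(videoEl), 1000);
}
```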
Next, we start predicting emotions by extracting the face data, such as its width, height, and (x, y) coordinates on the screen.
Then, we pre-process the cropped face image and pass the processed image data to the emotion classification model. After that, we use the result to draw a rectangular box around the face on the canvas and display the emotion label.
The coding implementation for this is provided in the snippet below:
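Here's a hedged sketch of that flow; the 64x64 grayscale input size, the <canvas id="canvas"> overlay, and the emotionModel, emotions, and colors names carry over from the earlier sketches and may differ from the repo's actual code:

```javascript
async function predict(videoEl, box) {
  const input = tf.tidy(() => {
    const frame = tf.browser.fromPixels(videoEl);

    // Crop the detected face region out of the current frame,
    // clamped to the frame bounds.
    const x0 = Math.max(0, Math.round(box.x));
    const y0 = Math.max(0, Math.round(box.y));
    const w = Math.min(Math.round(box.width), frame.shape[1] - x0);
    const h = Math.min(Math.round(box.height), frame.shape[0] - y0);
    const face = frame.slice([y0, x0, 0], [h, w, 3]);

    // Resize to the model's input size, convert to grayscale, and normalize.
    return tf.image
      .resizeBilinear(face, [64, 64])
      .mean(2)          // RGB -> grayscale
      .toFloat()
      .div(255)
      .expandDims(0)    // batch dimension
      .expandDims(-1);  // channel dimension
  });

  const prediction = emotionModel.predict(input);
  const scores = await prediction.data();
  input.dispose();
  prediction.dispose();

  // Pick the emotion with the highest score.
  const emotionIndex = scores.indexOf(Math.max(...scores));

  // Draw the bounding box and the emotion label on the overlay canvas.
  const canvas = document.getElementById('canvas');
  const ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.strokeStyle = colors[emotionIndex];
  ctx.lineWidth = 2;
  ctx.strokeRect(box.x, box.y, box.width, box.height);
  ctx.fillStyle = colors[emotionIndex];
  ctx.fillText(emotions[emotionIndex], box.x, box.y - 5);

  return emotionIndex;
}
```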
Play motivational music
Now, we’re ready to classify the real-time emotion of our users.
Remember, we're attempting to help users reorient negative emotions and encourage more positive emotions and attitudes by playing a motivational music clip whenever the webcam detects a negative emotion on the user's face.
First, we start by finding some music that motivates us (you can choose whatever suits you). Then, we hook the music file up to an audio element, as shown in the code snippet below:
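For example (the file path is a placeholder; point it at whichever track you picked):

```javascript
// Motivational music clip to play when sadness is detected.
const motivationAudio = new Audio('./assets/motivational_music.mp3');
motivationAudio.loop = true;
```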
Next, we need to add a conditional to handle the audio element. First, we check whether the predicted emotion is the sad one (index 4 in our emotions array).
Then, we use a sad_counter variable to count the number of seconds we detect sad emotions. When sad_counter passes 10 seconds, we start playing the music. When our emotional expression changes, we stop the music after a delay of another 10 seconds and reset sad_counter.
First, we need to define the counter variable and flags, as shown in the code snippet below:
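The sketch below defines the counter and flags along with the conditional built on top of them; it assumes the once-per-second prediction loop from the earlier onPlay sketch, so each sad prediction counts as roughly one second:

```javascript
let sad_counter = 0;     // seconds of continuous sad predictions
let is_playing = false;  // whether the music is currently playing
let stop_timeout = null; // pending timer to stop the music

function handleMusic(emotionIndex) {
  if (emotions[emotionIndex] === 'sad') {
    sad_counter += 1;

    // Sadness came back before the pending stop fired: keep playing.
    if (stop_timeout) {
      clearTimeout(stop_timeout);
      stop_timeout = null;
    }

    // Start the music after more than 10 seconds of sad predictions.
    if (sad_counter > 10 && !is_playing) {
      motivationAudio.play();
      is_playing = true;
    }
  } else if (is_playing && !stop_timeout) {
    // The expression changed: stop the music after another 10 seconds
    // and reset the counter.
    stop_timeout = setTimeout(() => {
      motivationAudio.pause();
      is_playing = false;
      sad_counter = 0;
      stop_timeout = null;
    }, 10000);
  }
}
```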
To trigger this logic, we call the function from inside the predict function, as shown below:
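Assuming the predict sketch from earlier, the call could sit right after the emotion index is computed:

```javascript
// Inside predict(), after emotionIndex has been computed:
handleMusic(emotionIndex);
```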
Results
With this, we’re now done with the implementation. It’s time to test our creation! We should see results similar to the ones displayed in the demo below:
Here, we can notice our detected emotion being logged into the web console as well.
The live demo of the implementation is available in the sandbox below:
Hence, we have successfully implemented the Motivational Bot that detects facial emotions and plays motivational music.
Conclusion
In this tutorial, we built a browser-based “Motivational Bot” using TensorFlow.js, leveraging face-api.js to capture the facial data. We also walked step by step through pre-processing the captured facial data and detecting a user’s emotion with an emotion classifier. This project can readily adapt to numerous other real-life use cases.
Positivity, motivation, and mental strength support our mental well-being. This project attempts to offer one way technology might help us better manage our internal motivation and well-being (though remember, it is for demo purposes and is not a catch-all).
In terms of next steps, we could add the capacity to recognize various other facial emotions. We could also adjust the system’s response, whether it’s audio, video, or something else. You can create your own real-life problem-solving projects using the TensorFlow.js library.