As the spread of COVID-19 continues, communities are being told to reduce close contact between individuals. This practice, known as social distancing, is a necessary and effective way to slow the spread of the virus. As a data science student, I came up with a solution to identify whether people actually follow the social distancing protocol of staying at least 6 feet apart.
YOLO stands for You Only Look Once. It’s a fast object detection system that can recognize various object types in a single frame more precisely than other detection systems.
According to the documentation, the YOLO9000 variant can predict over 9,000 object classes. YOLO is built on a single CNN (convolutional neural network). The CNN divides an image into regions and predicts bounding boxes and class probabilities for each region. YOLOv4 uses CSPDarknet53 as its backbone (a CNN enhancement that increases learning capability) and runs considerably faster than EfficientDet, Mask R-CNN, and RetinaNet.
Because YOLO can run on a conventional GPU, it enables widespread adoption while delivering high FPS and strong accuracy.
- Python version 3.7.7
- CUDA Toolkit (latest version) — check the compatibility of your GPU
- OpenCV 4.1.0
Implementation, in 5 Steps:
- Calculate Euclidean distance between two points
- Convert center coordinates into rectangle coordinates
- Filter the person class from the detections and get a bounding box centroid for each person detected
- Check which person bounding boxes are close to each other
- Display risk analytics and risk indicators
Let’s Get to Coding… 💻
Step 1. Calculate Euclidean Distance of Two Points
We need the Euclidean distance to measure how far apart two bounding boxes are. The function is_close takes two points, p1 and p2, as inputs, calculates the Euclidean distance between them, and returns the calculated distance dst.
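The original snippet isn’t shown here, but a minimal sketch of is_close, treating p1 and p2 as (x, y) center points, might look like this:

```python
import math

def is_close(p1, p2):
    """Calculate the Euclidean distance between two center points.

    p1, p2: (x, y) tuples of bounding-box centroids.
    Returns the distance dst.
    """
    dst = math.sqrt((p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2)
    return dst
```

For example, is_close((0, 0), (3, 4)) returns 5.0.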
Step 2. Convert Center Coordinates into Rectangle Coordinates
The function convertBack takes x and y (the midpoint of the bounding box) and w and h (the width and height of the bounding box) as inputs. It then converts the center coordinates to rectangle coordinates and returns the converted coordinates xmin, ymin, xmax, and ymax.
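A sketch of convertBack, assuming the standard center-to-corner conversion, could be:

```python
def convertBack(x, y, w, h):
    """Convert a center point (x, y) plus width w and height h
    into rectangle corner coordinates (xmin, ymin, xmax, ymax)."""
    xmin = int(round(x - (w / 2)))
    xmax = int(round(x + (w / 2)))
    ymin = int(round(y - (h / 2)))
    ymax = int(round(y + (h / 2)))
    return xmin, ymin, xmax, ymax
```

For instance, a box centered at (50, 50) with width 20 and height 10 becomes the rectangle (40, 45, 60, 55).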
Step 3. Filtering the Person Class from Detections and Getting a Bounding Box Centroid for Each Person Detection
First, we check whether the frame contains any detections using the condition len(detections) > 0. If the condition succeeds, we create a dictionary called centroid_dict and a variable called objectId set to zero. We then filter out all detection types other than person, compute the center point of each person detection, and store it in the dictionary along with that person’s bounding box coordinates.
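A self-contained sketch of this filtering step, assuming the darknet Python binding’s detection format of (class_name, confidence, (x, y, w, h)) and inlining the corner conversion from Step 2:

```python
def filter_person_detections(detections):
    """Keep only 'person' detections and map an incrementing object id
    to (cx, cy, xmin, ymin, xmax, ymax) for each one.

    `detections` is assumed to be a list of tuples in the darknet
    binding format: (class_name, confidence, (x, y, w, h)).
    """
    centroid_dict = dict()
    objectId = 0
    if len(detections) > 0:
        for name_tag, _score, (x, y, w, h) in detections:
            if name_tag == 'person':  # skip every other class
                xmin = int(round(x - w / 2))
                ymin = int(round(y - h / 2))
                xmax = int(round(x + w / 2))
                ymax = int(round(y + h / 2))
                centroid_dict[objectId] = (int(x), int(y),
                                           xmin, ymin, xmax, ymax)
                objectId += 1
    return centroid_dict
```

Keying by objectId lets the later steps refer back to a specific person when marking them at risk.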
Step 4. Check which person bounding boxes are close to each other
First, we create a list that will contain the object IDs of all detections whose pairwise distance falls under the threshold. By iterating over every combination of detections, we calculate the Euclidean distance for each pair using the function from Step 1.
Then we set the social distance threshold to 75.0 pixels (roughly equivalent to 6 feet in our frame) and check whether each pair satisfies the condition distance < 75.0. Finally, we set colors for the bounding boxes: red for an at-risk person and green for a protected person.
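The pairwise check above can be sketched as follows; the function and variable names (find_at_risk, red_zone_list, red_line_list) are illustrative, and the 75-pixel threshold only approximates 6 feet for a particular camera setup:

```python
import math
from itertools import combinations

def find_at_risk(centroid_dict, threshold=75.0):
    """Return the ids of people whose centroids are closer than
    `threshold` pixels to someone else, plus the centroid pairs
    to connect with warning lines in Step 5."""
    red_zone_list = []   # object ids considered at risk
    red_line_list = []   # ((x1, y1), (x2, y2)) pairs for drawing
    # iterate over every unordered pair of detected people
    for (id1, p1), (id2, p2) in combinations(centroid_dict.items(), 2):
        dx, dy = p1[0] - p2[0], p1[1] - p2[1]
        distance = math.sqrt(dx ** 2 + dy ** 2)
        if distance < threshold:
            if id1 not in red_zone_list:
                red_zone_list.append(id1)
            if id2 not in red_zone_list:
                red_zone_list.append(id2)
            red_line_list.append((p1[0:2], p2[0:2]))
    return red_zone_list, red_line_list
```

When drawing the boxes, any id found in red_zone_list gets a red rectangle; everyone else gets green.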
Step 5. Display risk analytics and risk indicators
In this final module, we display the number of people at risk in a given frame. We start by composing the text to display, including the count of people at risk. For a better graphical representation, we then iterate through red_line_list and draw lines between the nearby bounding boxes.
Let’s see the demo:
I hope this has given you enough insight to develop your own social distancing detector using any object detector. For more information on YOLO, you can check out the official docs.
That’s it for this article. Thank you for spending your valuable time reading it. Check out my other articles as well. Stay safe!