Lobe, owned by Microsoft, is a free, no-code tool for training machine learning models without technical skills. As of this writing, only image classification is supported, with object detection training coming soon according to Lobe's homepage. You can download it from Lobe's website by entering your basic info; the download is 608 MB.
Lobe is a good fit when you want to keep your images private and train models freely on your own PC. You can also use it to create models for both Android (TensorFlow Lite) and iOS (Core ML).
Lobe welcomes you with a tour, documentation, and more, as you see below. Let’s create a new project and see how easy it is to use.
As of writing this article, you can only import your image dataset from folders, so you can't use CSV files. But Lobe does allow you to label your images inside the app if they aren't already labeled or organized into folders.
I will use this face-mask classification dataset on Kaggle, which has images categorized into two folders, with_mask and without_mask. Download the dataset, then click Import in Lobe and select Dataset.
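Lobe infers class labels from the folder names, so each label gets its own folder of images. As a quick sanity check before importing, you can count the images per label with a short script (the helper name and the image-extension list here are my own, not part of Lobe):

```python
from pathlib import Path

def count_images_per_label(dataset_dir):
    """Count image files in each label folder (Lobe treats each
    subfolder name as a class label)."""
    counts = {}
    for label_dir in sorted(Path(dataset_dir).iterdir()):
        if label_dir.is_dir():
            counts[label_dir.name] = sum(
                1 for p in label_dir.iterdir()
                if p.suffix.lower() in {".jpg", ".jpeg", ".png"}
            )
    return counts
```

For the Kaggle dataset above you should see exactly two entries, with_mask and without_mask.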
After importing your dataset, Lobe starts training immediately.
Lobe automatically holds out 20% of your dataset to test your model. These test images are a random subset of your examples and are not used during training.
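Conceptually, the split works like the sketch below: shuffle the examples, then hold out 20% for testing. This is an illustration of the idea, not Lobe's actual implementation:

```python
import random

def train_test_split(examples, test_fraction=0.2, seed=42):
    """Shuffle the examples and hold out a fraction for testing,
    mirroring the idea behind Lobe's automatic 80/20 split."""
    shuffled = list(examples)
    random.Random(seed).shuffle(shuffled)
    n_test = int(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]  # (train, test)
```

With the 440-image face-mask dataset, this leaves 352 images for training and 88 for testing.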
In the left panel you can see your dataset details. You can click on these folder names to check your images.
Lobe shows the training process live at the bottom of the left panel, where you can see how many images it predicts correctly or incorrectly.
You can leave the app while training; it notifies you with a click sound when training finishes. For this dataset (which has 440 images), training finished in under five minutes. When training finishes, you can check the results and see the correct and incorrect classifications your model made.
Hovering over the image will show you the confidence score of the model.
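Lobe doesn't document exactly how this score is computed, but for image classifiers the confidence is typically the softmax probability of the top class. A conceptual sketch (the function and labels are illustrative, not Lobe's code):

```python
import math

def confidence(logits, labels):
    """Convert raw classifier outputs (logits) into per-class
    probabilities with a softmax, then return the top label and
    its confidence score."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]
```

For example, `confidence([2.0, 0.5], ["with_mask", "without_mask"])` returns with_mask with a score of about 0.82.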
To see your model's accuracy on test images it hasn't seen during training, select View > Test Images.
In the Play section, you can drag and drop new images or take photos with your webcam. Lobe runs the trained model on each new image, so you can see how well your model handles unseen data.
Here, you can try to trick your model and spot patterns where it is weak. You can also help improve your model by giving feedback on its predictions: click the checkmark button to add the image to your dataset, and Lobe automatically retrains the model with these new examples.
When you're satisfied with your model, you can export it via File > Export. Lobe supports a variety of formats, including TensorFlow Lite and Core ML.
We'll export our model in Core ML format for this tutorial. When you export, Lobe asks whether you want to optimize your model. Optimizing performs additional training and can take much longer, since it keeps training as long as the model is improving.
Lobe exports several files: a readme with sample Swift code to run the model, your model as an .mlmodel file, and a signature file that contains information about your Lobe project.
With Xcode 12+, you can test image classification models easily. Open the .mlmodel file in Xcode and drag some images to the left panel. Click on an image to see how accurate your model's predictions are.
If you need a starter project for your exported model, Lobe offers very handy sample projects on GitHub. The starter projects take care of details like opening the camera, running the model with Vision, and showing the results in the UI.
The iOS sample project uses SwiftUI, which made me very happy. Just replace the .mlmodel in the iOS sample project and you are ready to go with your new image classification app!
In this post, we've learned how to use Microsoft's machine learning tool, Lobe, to train image classification models on our PC. We found a face-mask classification dataset on Kaggle and trained and tested a model using this new no-code tool. Personally, I found Lobe very easy and fun to use. I hope support for more model types, like object detection and segmentation, comes to it soon.
Thanks for reading!