This is the final post in our series covering the ins and outs of working with Google’s AutoML Vision Edge platform.
We started our journey by training a simple machine learning model in 3–4 hours. From there, we exported the model into various formats provided by AutoML.
Let’s have a quick look at our journey so far:
- The first post of the series was about Training and Running Models with TFLite. The TFLite format is used by mobile developers and those working with embedded edge devices (e.g. microcontrollers, Raspberry Pi, etc.).
- In the second post, we used Python to run the popular protobuf format available in AutoML (otherwise known as the TensorFlow SavedModel format).
- In the third post, we used JavaScript to run AutoML Vision Edge TensorFlow.js models, the format that also supports browsers and Node.js servers. In that post, we learned the advantages of running Vision Edge models on both the client and server side with a single language: JavaScript.
- In the fourth post, we used Docker containers to run AutoML models and learned the benefits and limitations of containerization.
So far, we’ve explored almost all the formats AutoML provides for loading models. Every format has its own pros and cons, and it’s better to understand them before getting started, rather than spending excessive time and money on your own experimentation.
In this post, we’ll conclude by summarizing all the learnings from our previous posts.
Learnings with Different Model Formats
1. AutoML TFLite Format
- Lightweight solution for mobile and embedded devices
- Integration with your development environment is super easy (see the quick sketch after this list)
- Once trained, it can’t be retrained by adding data
- Very low-latency inference
- Model size is smaller compared to the original model
- Great documentation support
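To make this concrete, here’s a minimal sketch of running an exported Edge model with the TFLite interpreter in Python. The file name and the dummy input are assumptions; swap in your own exported model and a real preprocessed image.

```python
import numpy as np
import tensorflow as tf

# Load the exported AutoML Edge model (the file name is an assumption).
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed a dummy image with the shape and dtype the model expects;
# replace this with your own preprocessed image data.
input_shape = input_details[0]["shape"]
dummy_image = np.zeros(input_shape, dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], dummy_image)
interpreter.invoke()

scores = interpreter.get_tensor(output_details[0]["index"])
print(scores)
```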
2. AutoML TensorFlow SavedModel Format
- Easy integration with Python and its ecosystem (see the sketch after this list)
- It’s a frozen graph, with its input and output nodes already defined at the time of freezing
- Once trained, it can’t be retrained by adding data
- Higher inference accuracy than the other AutoML formats
- Model size is larger compared to other AutoML formats
- Inference speed is slower than TFLite and the other formats
- Documentation is difficult for beginners
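As a quick illustration, here’s a minimal sketch of loading an exported SavedModel in Python and inspecting its serving signature. The directory path and the commented input names are assumptions; check the printed signature for the actual tensor names your export uses.

```python
import tensorflow as tf

# Load the exported SavedModel directory (the path is an assumption).
model = tf.saved_model.load("exported_saved_model/")
infer = model.signatures["serving_default"]

# The input and output nodes were fixed when the graph was frozen,
# so inspect them before wiring up real image data.
print(infer.structured_input_signature)
print(infer.structured_outputs)

# Example call shape: pass inputs under the names printed above, e.g.
# outputs = infer(image_bytes=tf.constant([encoded_jpeg]), key=tf.constant(["1"]))
```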
3. AutoML TensorFlow.js Format
- Easy integration with JavaScript and Node.js servers
- Works on both the client and server side with JavaScript
- Average accuracy, somewhere between the TFLite and TensorFlow SavedModel formats
- Can’t be retrained
- Average inference speed compared to the TFLite and TensorFlow SavedModel formats
- Documentation for Node.js support is very difficult to follow
4. AutoML Container Format
- Requires Docker as an additional dependency
- Easy to set up and serve models on a port using TensorFlow Serving (see the sketch after this list)
- Once trained, it can’t be retrained by adding data
- A good approach for decoupling ML from the rest of your codebase
- Documentation is not clear and has some bugs
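Here’s a minimal sketch of querying a running AutoML container from Python over TensorFlow Serving’s REST API. The port, model name, and payload keys are assumptions based on a typical setup; adjust them to match your `docker run` command and your model’s signature.

```python
import base64
import json
import requests

# Assumes the AutoML container is already running locally and serving the
# model via TensorFlow Serving on port 8501 (port and model name are
# assumptions; change them to match your own `docker run` command).
SERVER_URL = "http://localhost:8501/v1/models/default:predict"

# Base64-encode a test image for the request payload.
with open("test.jpg", "rb") as f:
    encoded_image = base64.b64encode(f.read()).decode("utf-8")

# The "image_bytes"/"key" field names are placeholders for a typical setup;
# verify them against your model's serving signature.
payload = {"instances": [{"image_bytes": {"b64": encoded_image}, "key": "1"}]}

response = requests.post(SERVER_URL, data=json.dumps(payload))
response.raise_for_status()
print(response.json())
```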
Conclusion
In this series, we’ve covered all the different model formats provided by Google’s AutoML Vision Edge platform: TFLite, TensorFlow.js, Docker containers, and TF SavedModel formats.
We also learned about the ways in which the TensorFlow.js models can be used on the client as well as server-side, explored the best coding languages to work with specific model formats, and tested them for accuracy and inference speeds.
In this final post, we’ve highlighted each of the formats in detail, comparing their benefits and limitations.
In general, AutoML Vision Edge is a great tool to explore for developers who want to create awesome edge machine learning applications without needing to understand the complex mathematics behind the scenes. Learning about the different model formats gives us a 360-degree view of AutoML Vision Edge.
If you’re working on an ML project that involves classification problems, you should definitely give AutoML Vision Edge a try—this AutoML Vision Edge series is a great place to start!
If you liked the article, please clap your heart out. Tip — Your 50 claps will make my day!
Want to know more about me? Please check out my website. If you’d like to get updates, follow me on Twitter and Medium. If anything isn’t clear or you want to point out something, please comment down below.