In this article, we will continue building production machine learning systems on GCP, this time focusing on the design of hybrid ML systems. We will cover the following:
- Building hybrid cloud machine learning models
- Kubeflow for hybrid cloud
- Optimizing TensorFlow graphs for mobile
This is a high-level, descriptive series; a follow-up series will cover implementing some of these concepts. If you would like to get fully hands-on before then, I suggest starting with the Advanced Machine Learning with TensorFlow on GCP course by Google's ML team.
A fully cloud-native platform like Google Cloud is a great place to do machine learning: you get access to ready-made models such as the Vision, Translation, Speech, and Natural Language APIs, with few to no worries about infrastructure. But ready-to-run model APIs don't fit every use case.
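To make "ready-made model API" concrete, here is a minimal sketch of calling one of them, Cloud Vision label detection, with the google-cloud-vision Python client. The image path is a placeholder, and authentication is assumed to be set up via application default credentials; the point is simply that no training or model hosting is involved on your side.

```python
# Minimal sketch: label detection with the Cloud Vision API (a ready-made model).
# Assumes the google-cloud-vision client is installed and credentials are
# configured (e.g. via GOOGLE_APPLICATION_CREDENTIALS).
from google.cloud import vision

client = vision.ImageAnnotatorClient()

# Read image bytes from a local file (hypothetical path).
with open("example.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pre-trained model for labels; no model training or serving needed.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(label.description, round(label.score, 3))
```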
You may want to retrain the ready-made models on your own data to get better performance on your domain. That is one reason a cloud-only ML setup may not fit; other reasons include:
- On-Premises Infrastructure: The project may be tied to on-premises infrastructure at the start, with the aim of moving to the cloud in the near future.
- Data Privacy: Data movement may be constrained, for example by data-privacy requirements.
- Multi-Cloud System Architecture: Some components of the architecture may depend on existing applications that run on different cloud platforms.
- Running ML on the Edge: You might run both model training and inference on the edge, where all processing happens on client devices, or extract features on the edge while training and serving predictions in the cloud (see the sketch below).
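Since the last item above touches on running inference on the edge, and the outline lists optimizing TensorFlow graphs for mobile, here is a minimal sketch of one step that is typically involved: converting a trained model for on-device use with TensorFlow Lite. The SavedModel path is hypothetical, and this is only one possible conversion path, not the full optimization story.

```python
# Minimal sketch: convert a TensorFlow SavedModel for on-device inference.
# "export/my_model" is a hypothetical export directory.
import tensorflow as tf

# Convert the SavedModel to a TFLite flatbuffer with default optimizations,
# which shrinks the model for mobile and edge devices.
converter = tf.lite.TFLiteConverter.from_saved_model("export/my_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# Write the converted model; it can be bundled with a mobile app and run
# with the TFLite interpreter on the device.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```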