A few weeks ago I looked at the speed of Core ML models running on various Apple devices. Apple’s new A12 Bionic chip paired with Core ML 2 made the latest generation of iPhones and iPads more than 10x faster than previous generations.
While faster is usually better, large differences in performance across devices make it hard for developers to deliver a consistent user experience. Apple's (relatively) small product family and aggressive upgrade strategy mitigate this problem to some degree.