Authors: Pete Warden, Daniel Situnayake
Publisher: O'Reilly Media, Inc.
Published: 2019-09-27
Pages: 440
Language: English
Format: epub
Size: 25.7 MB
The goal of this book is to show how any developer with basic experience using a command-line terminal and a code editor can get started building their own projects that run machine learning on embedded devices.
When I first joined Google in 2014, I discovered a lot of internal projects I had no idea existed, but the most exciting was the work the OK Google team was doing. They were running neural networks that were just 14 kilobytes in size! The networks needed to be that small because they ran on the digital signal processors (DSPs) present in most Android phones, continuously listening for the “OK Google” wake words, and those DSPs had only tens of kilobytes of RAM and flash memory. The team had to use the DSPs for this job because the main CPU was powered off to conserve battery, and these specialized chips use only a few milliwatts of power.
Coming from the image side of deep learning, I'd never seen networks so small, and the idea that you could run neural models on such low-power chips stuck with me. As I worked on getting TensorFlow, and later TensorFlow Lite, running on Android and iOS devices, I remained fascinated by the possibilities of working with even simple chips. I learned that there were other pioneering projects: in the audio world, like Pixel's Music IQ; in predictive maintenance, like PsiKick; and even in vision, with Qualcomm's Glance camera module.
It became clear to me that a whole new class of products was emerging, with three key characteristics: they used machine learning to make sense of noisy sensor data, they could run for years on a battery or energy harvesting, and they cost only a dollar or two. One term I heard repeatedly was “peel-and-stick sensors”: devices that required no battery changes and could simply be applied anywhere in an environment and forgotten. To make these products real, we needed ways to turn raw sensor data into actionable information locally, on the device itself, since the energy cost of transmitting data streams anywhere has proved too high to be practical.
This is where the idea of TinyML comes in. From long conversations with colleagues across industry and academia, we've come to a rough consensus that if you can run a neural network model at an energy cost below one milliwatt, a lot of entirely new applications become possible. This might seem like a somewhat arbitrary number, but if you translate it into concrete terms, it means a device running on a coin battery has a lifetime of a year. That gives a product small enough to fit into any environment, able to run for a useful amount of time without any human intervention.
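The coin-battery claim above can be sanity-checked with simple arithmetic. The sketch below is illustrative only: the cell capacity (roughly a CR2477 at 1000 mAh, 3 V nominal) and the sub-milliwatt average draw are my assumptions, not figures from the book.

```python
# Back-of-the-envelope battery-life arithmetic for a sub-milliwatt budget.
# Capacity and power figures are illustrative assumptions, not book data.

def lifetime_days(capacity_mah: float, voltage_v: float, avg_power_mw: float) -> float:
    """Estimated runtime at a constant average power draw."""
    energy_mwh = capacity_mah * voltage_v   # stored energy in milliwatt-hours
    hours = energy_mwh / avg_power_mw       # hours of operation
    return hours / 24.0

# A large coin cell (~1000 mAh at 3 V) at an average draw of ~0.34 mW
# runs for about a year:
print(round(lifetime_days(1000, 3.0, 0.34)))  # ~368 days
```

The point of the exercise is that staying well below one milliwatt on average, typically by duty-cycling the device, is what makes a year of unattended operation plausible on a coin cell.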
Download TinyML: Machine Learning with TensorFlow on Arduino and Ultra-Low Power Micro-Controllers (Second Early Release)