Author: Song Guo, Qihua Zhou
Publisher: CRC Press
Year: 2023
Pages: 268
Language: English
Format: PDF (true)
Size: 30.3 MB
This book addresses the software and hardware synergy of tiny machine learning (TinyML) for edge intelligence applications. It presents on-device learning techniques covering model-level neural network design, algorithm-level training optimization, and hardware-level instruction acceleration.
Analyzing the limitations of conventional in-cloud computing reveals that on-device learning is a promising research direction for meeting the requirements of edge intelligence applications. At the cutting edge of TinyML research, implementing a high-efficiency learning framework and enabling system-level acceleration are among the most fundamental issues. This book presents a comprehensive discussion of the latest research progress and provides system-level insights into designing TinyML frameworks, including neural network design, training algorithm optimization, and domain-specific hardware acceleration. It identifies the main challenges of deploying TinyML tasks in the real world and guides researchers toward deploying reliable learning systems.
In recent years, modern AI-oriented applications, such as computer vision, natural language processing (NLP), Big Data analytics, and automatic robotic processing, have all benefited greatly from Machine Learning (ML) techniques. In practice, many of these applications rely on large-scale datasets for model training in the cloud, which demands substantial computing resources. While several in-cloud learning systems have been developed, they often fall short of the rising trend of enabling edge-intelligent perception on tiny devices using only personal data. The primary disadvantages of in-cloud learning fall into three categories. First, in-cloud computing raises privacy and security concerns: sensitive user information, such as intermediate training results and model parameters, must be sent over the network and stored remotely, where it risks being intercepted and revealed to the cloud carrier. Second, in-cloud learning is designed to aggregate results from many different devices in a data-parallel way rather than to offer a personalized model for each user. Third, although application processing can run on cloud servers, transferring the data and delivering the final output back to the user may take an extremely long time, particularly in a bandwidth-constrained environment. This high delay hinders real-time model updating and reduces processing throughput.
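The third disadvantage, transfer delay, can be made concrete with a back-of-the-envelope calculation. The sketch below (illustrative only; the payload sizes, link rates, and server time are assumed numbers, not figures from the book) shows how the uplink transfer dominates end-to-end latency on a constrained link:

```python
def offload_latency_s(payload_mb: float, bandwidth_mbps: float,
                      server_ms: float = 20.0) -> float:
    """Rough end-to-end latency of offloading to the cloud:
    transfer time over the uplink plus server-side processing."""
    transfer_s = payload_mb * 8.0 / bandwidth_mbps  # MB -> megabits / link rate
    return transfer_s + server_ms / 1000.0

# A 5 MB sensor batch over a constrained 2 Mb/s uplink vs. a fast 100 Mb/s link:
print(round(offload_latency_s(5.0, 2.0), 2))    # 20.02 -- transfer alone is 20 s
print(round(offload_latency_s(5.0, 100.0), 2))  # 0.42
```

On the slow link, virtually all of the delay is data transfer, which is exactly the bottleneck that keeping computation on the device removes.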
The emergence of on-device learning, which conducts the whole Machine Learning process on user devices, has been seen as a promising way to deal with the constraints of the in-cloud learning paradigm. Because on-device learning applications typically run on edge devices, this learning paradigm is also commonly referred to as "edge intelligence". On-device learning mainly focuses on producing a high-quality individualized model on a resource-constrained platform, using frameworks such as TensorFlow Lite or PyTorch Mobile. For the design of a desired TinyML system, this software and hardware synergy provides a roadmap for overcoming device heterogeneity and resource limits, which are the keys to performance improvement.
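The core idea of producing an individualized model from local data can be sketched without any framework. This minimal pure-Python example (the data and model are hypothetical, chosen for illustration, not taken from the book) fits a tiny linear model by gradient descent on per-user samples that never leave the device:

```python
def fit_on_device(samples, lr=0.1, epochs=200):
    """Fit y = w*x + b by plain gradient descent on local data only,
    mirroring on-device learning: raw samples are never uploaded."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in samples:
            err = (w * x + b) - y        # prediction error on one sample
            gw += 2.0 * err * x / n      # gradient of mean squared error w.r.t. w
            gb += 2.0 * err / n          # gradient w.r.t. b
        w -= lr * gw
        b -= lr * gb
    return w, b

# Hypothetical per-user readings following y = 2x + 1:
local_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]
w, b = fit_on_device(local_data)
print(round(w, 2), round(b, 2))  # 2.0 1.0
```

In a real TinyML deployment, the same loop would run under a framework such as TensorFlow Lite or PyTorch Mobile with quantized operators; the point here is only that both the data and the resulting personalized parameters stay on the device.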
This volume will be of interest to students and scholars in the field of edge intelligence, especially those with professional Edge AI skills. It will also serve as an excellent guide for researchers implementing high-performance TinyML systems.
Download Machine Learning on Commodity Tiny Devices: Theory and Practice