Continual and Reinforcement Learning for Edge AI: Framework, Foundation, and Algorithm Design


Title: Continual and Reinforcement Learning for Edge AI: Framework, Foundation, and Algorithm Design
Author: Hang Wang, Sen Lin, Junshan Zhang
Publisher: Springer
Series: Synthesis Lectures on Learning, Networks, and Algorithms
Year: 2025
Pages: 269
Language: English
Format: pdf
Size: 10.5 MB

This book provides a comprehensive introduction to continual and reinforcement learning for Edge AI, investigating how to build an AI agent that can continuously solve new learning tasks and improve AI capabilities on resource-limited edge devices. The authors introduce readers to practical frameworks and in-depth algorithmic foundations, survey recent advances in the area from both academic researchers and industry professionals, and present their own research findings on continual and reinforcement learning for Edge AI. The book also covers practical applications of the topic and identifies exciting future research opportunities.

One of the key driving forces behind AI is the development of Deep Learning and deep neural networks (DNNs) since the 2010s, which have achieved astonishing successes in solving ML problems and demonstrated great superiority over classical ML approaches, e.g., decision trees and Bayesian networks. Notably, consisting of a series of layers, artificial neural networks (ANNs) can extract the underlying features from data in a hierarchical manner and provide a universal function approximator for ML problems. Multilayer Perceptrons (MLPs) are the most basic ANNs, with fully connected neurons and non-linear activation functions. To capture the spatial correlation in the input data, especially for images, Convolutional Neural Networks (CNNs) replace the basic linear operations in MLPs with convolution operations, making them very popular for computer vision tasks. Recurrent Neural Networks (RNNs) are another type of ANN that specializes in handling sequential data and hence is widely used for natural language processing (NLP) tasks, e.g., machine translation and question answering. Unlike feedforward neural networks such as MLPs and CNNs, which process data in a single pass, RNNs can process data across multiple time steps, using an internal memory that retains knowledge of previous inputs and applies it to learning at the current time step.
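
To make these architectural differences concrete, the following is a minimal PyTorch sketch, not taken from the book, showing an MLP, a CNN, and an RNN side by side; the layer sizes and input shapes are illustrative assumptions.

import torch
import torch.nn as nn

# MLP: fully connected layers with non-linear activations; the input is
# flattened into a feature vector with no spatial or temporal structure.
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# CNN: convolution layers exploit spatial locality in images by sliding
# small kernels over the input instead of connecting every pixel to
# every neuron.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),
)

# RNN: processes a sequence step by step, carrying a hidden state that
# summarizes previous inputs; the final hidden state feeds a classifier.
class SimpleRNN(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, num_classes=10):
        super().__init__()
        self.rnn = nn.RNN(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x):                    # x: (batch, seq_len, input_dim)
        _, h_n = self.rnn(x)                 # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))     # logits: (batch, num_classes)

# Quick shape check on random tensors.
print(mlp(torch.randn(4, 1, 28, 28)).shape)        # torch.Size([4, 10])
print(cnn(torch.randn(4, 1, 28, 28)).shape)        # torch.Size([4, 10])
print(SimpleRNN()(torch.randn(4, 20, 32)).shape)   # torch.Size([4, 10])

The MLP treats its input as a flat vector, the CNN exploits spatial locality through convolution and pooling, and the RNN carries a hidden state across time steps, which is exactly the distinction described above.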

Another special kind of ANN architecture is the generative model, which aims to solve generative tasks, e.g., image generation. Generative adversarial networks (GANs) fall into this category; they consist of two separate networks, namely a generator network and a discriminator network. The generator seeks to generate new data that mimics the real data in the training dataset, whereas the discriminator seeks to distinguish the fake data produced by the generator from the real data. Recently, another powerful type of generative model, known as the diffusion model, has attracted much attention: it gradually adds random noise to the input data and then learns to reconstruct the desired data samples from the noise by reversing the diffusion process. In 2017, a new ANN architecture, namely the Transformer, was proposed to address the limitations of RNN-based encoder-decoder architectures in solving sequence-to-sequence tasks, by leveraging the attention mechanism to capture long-range dependencies across data inputs in a highly parallel manner. Due to its superior performance and computational efficiency, the Transformer has become the mainstream architecture and is widely used in pretrained foundation models such as large language models.
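
As a concrete illustration of the attention mechanism mentioned above, here is a minimal sketch of scaled dot-product attention, the core operation inside the Transformer; the tensor shapes and dimensions are illustrative assumptions rather than values from the book.

import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Every position attends to every other position in parallel, so
    long-range dependencies are captured without the step-by-step
    recurrence of an RNN.

    q, k, v: (batch, seq_len, d_model)
    """
    d_model = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_model)  # (batch, seq, seq)
    weights = F.softmax(scores, dim=-1)                     # attention weights
    return weights @ v                                      # weighted sum of values

# Illustrative usage on random embeddings: self-attention over a batch of
# 2 sequences of length 5 with embedding dimension 64.
x = torch.randn(2, 5, 64)
out = scaled_dot_product_attention(x, x, x)
print(out.shape)   # torch.Size([2, 5, 64])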

Download Continual and Reinforcement Learning for Edge AI: Framework, Foundation, and Algorithm Design



