Author: Rachid Guerraoui, Nirupam Gupta, Rafael Pinot
Publisher: Springer
Series: Machine Learning: Foundations, Methodologies, and Applications
Year: 2024
Pages: 180
Language: English
Format: PDF, EPUB
Size: 10.1 MB
Today, Machine Learning algorithms are often distributed across multiple machines to leverage more computing power and more data. However, the use of a distributed framework entails a variety of security threats. In particular, some of the machines may misbehave and jeopardize the learning procedure. This could, for example, result from hardware and software bugs, data poisoning, or a malicious player controlling a subset of the machines. This book explains in simple terms what it means for a distributed Machine Learning scheme to be robust to these threats, and how to build provably robust Machine Learning algorithms. Studying the robustness of Machine Learning algorithms is a necessity given the ubiquity of these algorithms in both the private and public sectors. Accordingly, over the past few years, we have witnessed a rapid growth in the number of articles published on the robustness of distributed Machine Learning algorithms. We believe it is time to provide a clear foundation for this emerging and dynamic field. By gathering the existing knowledge and democratizing the concept of robustness, the book provides the basis for a new generation of reliable and safe Machine Learning schemes.
In addition to introducing the problem of robustness in modern Machine Learning algorithms, the book will equip readers with essential skills for designing distributed learning algorithms with enhanced robustness. Moreover, the book provides a foundation for future research in this area.
AI systems make mistakes. Sometimes the consequences are anecdotal, as in the context of games, but some mistakes can have a consequential impact on critical applications such as healthcare and banking. Although the media alert the public to the dangers of so-called “strong AI,” they should also highlight the weaknesses of existing AI systems and the serious damage that can result when these systems malfunction. We present a set of methods for building a new generation of robust AI systems. Before discussing the scope of our book and the nature of our robustness methods, we first discuss the very meaning of AI and the main sources of fragility in existing AI-based technologies.
Building Machine Learning schemes that are robust to these events is paramount to transitioning AI from a mere spectacle capable of momentary feats to a dependable tool with guaranteed safety. In this book, we cover effective techniques for achieving such robustness. Essentially, we present Machine Learning algorithms that do not trust any individual data source or computing unit. We assume that a majority of data sources and machines are trustworthy; otherwise, no correct learning can be ensured. However, we consider that a fraction of those data sources and machines can be faulty. The technical challenge stems from the fact that, a priori, these adversarial data sources and computing units are indistinguishable from correct ones. We contrast robust methods that can withstand various types of faults with classic methods that can be corrupted as soon as a single data source is poisoned or a single computing unit is hacked. From a high-level perspective, we show how to move incrementally from a traditional learning technique, which is inherently fragile, to new forms of learning based on robust methods.
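To make the contrast concrete, here is a minimal sketch (not taken from the book) of the fragility of classic averaging versus a simple robust alternative, coordinate-wise median aggregation. The worker gradients and the faulty value below are hypothetical; the point is only that one adversarial worker corrupts the average arbitrarily, while the median stays close to the honest values as long as a majority of workers are correct.

```python
import statistics

# Hypothetical gradients reported by five workers for one model parameter.
# Four workers are honest; the last one is faulty and reports a huge value.
gradients = [0.9, 1.1, 1.0, 0.8, 1000.0]

# Classic aggregation: plain averaging. A single faulty worker
# drags the result arbitrarily far from the honest gradients.
avg = sum(gradients) / len(gradients)

# Robust aggregation: the median discards extreme values as long as
# a strict majority of the reported gradients are honest.
med = statistics.median(gradients)

print(avg)  # ≈ 200.76 — corrupted by the single faulty worker
print(med)  # 1.0 — close to the honest gradients
```

In a real distributed setting the same idea is applied coordinate-wise to gradient vectors at every training step; the book develops such aggregation rules with provable guarantees.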
This book is intended for students, researchers, and practitioners interested in AI systems in general, and in Machine Learning schemes in particular. It assumes basic prerequisites in linear algebra, calculus, and probability; some understanding of computer architectures and networking infrastructures would be helpful. Nevertheless, the book is written with undergraduate students in mind and does not require advanced knowledge of mathematics or Computer Science.
Download Robust Machine Learning: Distributed Methods for Safe AI