Author: Osvaldo Simeone
Publisher: Cambridge University Press
Year: 2023
Pages: 602
Language: English
Format: PDF (true)
Size: 17.0 MB
This self-contained introduction to Machine Learning, designed from the start with engineers in mind, will equip students with everything they need to start applying Machine Learning principles and algorithms to real-world engineering problems. With a consistent emphasis on the connections between estimation, detection, information theory, and optimization, it includes: an accessible overview of the relationships between Machine Learning and signal processing, providing a solid foundation for further study; clear explanations of the differences between state-of-the-art techniques and more classical methods, equipping students with all the understanding they need to make informed technique choices; demonstration of the links between information-theoretical concepts and their practical engineering relevance; and reproducible examples using MATLAB, enabling hands-on student experimentation. Assuming only a basic understanding of probability and linear algebra, and accompanied by lecture slides and solutions for instructors, this is the ideal introduction to Machine Learning for engineering students of all disciplines.
Advances in Machine Learning and Artificial Intelligence (AI) have made available new tools that are revolutionizing science, engineering, and society at large. Modern Machine Learning techniques build on conceptual and mathematical ideas from stochastic optimization, linear algebra, signal processing, Bayesian inference, as well as information theory and statistical learning theory. Students and researchers working in different fields of engineering are now expected to have a general grasp of Machine Learning principles and algorithms, and to be able to assess the relative merits of available design solutions spanning the space between model- and data-based methodologies. This book is written with this audience in mind.
In approaching the field of Machine Learning, students of signal processing and information theory may at first be ill at ease reconciling the familiarity of the techniques used in Machine Learning – least squares, gradient descent, maximum likelihood – with the differences in terminology and emphasis (and hype?). Seasoned signal processing and information theory researchers may in turn find the resurgence of Machine Learning somewhat puzzling ("didn't we write off that technique three decades ago?"), while still being awed by the scale of current applications and by the efficiency of state-of-the-art methods. They may also pride themselves on seeing many of the ideas originating in their communities underpin Machine Learning solutions that have wide societal and economic repercussions.
Existing books on the subject of Machine Learning come in different flavors: some are compilations of algorithms mostly intended for computer scientists, while others focus on specific aspects, such as optimization, Bayesian reasoning, or theoretical principles. Books that have served as references for many years, while still relevant, appear to be partly outdated and superseded by more recent research papers.
In this context, what seems to be missing is a textbook aimed at engineering students and researchers that can be used for self-study, as well as for undergraduate and graduate courses alongside modules on statistical signal processing, information theory, and optimization. An ideal text should provide a principled introduction to Machine Learning that highlights connections with estimation, detection, information theory, and optimization, while offering concise but extensive coverage of state-of-the-art topics and simple, reproducible examples. Filling this gap on the bookshelves of engineering libraries is the ambition of this book.
Intended Audience:
This book is intended for a general audience of students, engineers, and researchers with a background in probability and signal processing. To offer a self-contained introduction to these intended readers, the text introduces supervised and unsupervised learning in a systematic fashion – including the necessary background on linear algebra, probability, and optimization – taking the reader from basic tools to state-of-the-art methods within a unified, coherent presentation.
Download Machine Learning for Engineers