Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, 2nd Edition

Posted by literator on 3-03-2023, 17:43 · Comments: 0


Title: Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, 2nd Edition
Author: Steven L. Brunton, J. Nathan Kutz
Publisher: Cambridge University Press
Year: 2022
Pages: 616
Language: English
Format: pdf (true)
Size: 29.1 MB

Data-driven discovery is revolutionizing how we model, predict, and control complex systems. Now with Python and MATLAB, this textbook trains mathematical scientists and engineers for the next generation of scientific discovery by offering a broad overview of the growing intersection of data-driven methods, machine learning, applied optimization, and the classical fields of engineering mathematics and mathematical physics. With a focus on integrating dynamical systems modeling and control with modern methods in applied machine learning, the text includes methods chosen for their relevance, simplicity, and generality. Topics range from introductory to research-level material, making the book accessible to advanced undergraduate and beginning graduate students in the engineering and physical sciences. The second edition features new chapters on reinforcement learning and physics-informed machine learning, significant new sections throughout, and chapter exercises. Online supplementary material, including lecture videos for each section, homework assignments, data, and code in MATLAB, Python, Julia, and R, is available on the book's website.

Python code has been added throughout, in parallel to the existing MATLAB code, and both sets of codes have been streamlined considerably. All extended codes are available in MATLAB and Python on the book's website and GitHub pages. Wherever possible, a minimal representation of the code is presented in the text to improve readability, and these code blocks are expressed equivalently in MATLAB and Python. In more advanced examples, it is often advantageous to use either MATLAB or Python, but not both; in such cases this is indicated and only a single code block is shown. The full code is available at the GitHub sites above as well as on the book's website, and extensive codes are also available online in R. We encourage readers to follow along with the code as they read, to reinforce the learning process.

Reinforcement learning (RL) is a major branch of machine learning concerned with learning, from experience, control laws and policies for interacting with a complex environment. Thus, RL sits at the growing intersection of control theory and machine learning, and it is among the most promising fields of research toward generalized artificial intelligence and autonomy. Both machine learning and control theory fundamentally rely on optimization, and, likewise, RL involves a set of optimization techniques within an experiential framework for learning how to interact with the environment. In reinforcement learning, an agent senses the state of its environment and learns to take appropriate actions to optimize future rewards. The ultimate goal in RL is to learn an effective control strategy or set of actions through positive or negative reinforcement. This search may involve trial-and-error learning, model-based optimization, or a combination of both. In this way, reinforcement learning is fundamentally biologically inspired, mimicking how animals learn to interact with their environment through positive and negative reward feedback from trial-and-error experience.
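The agent–environment loop described above can be sketched with tabular Q-learning, one of the simplest RL algorithms. The toy corridor environment, its reward of +1 at the goal, and all parameter values below are illustrative assumptions, not code from the book:

```python
import numpy as np

# Toy environment (an assumption for illustration): a 1-D corridor of 5 states.
# Action 0 moves left, action 1 moves right; reaching the rightmost state
# yields reward +1 and ends the episode.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic transition: returns (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1
    return s2, float(done), done

Q = np.zeros((n_states, n_actions))         # action-value estimates
alpha, gamma, eps = 0.5, 0.9, 0.1           # learning rate, discount, exploration

for episode in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy: explore with probability eps, else act greedily
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        # temporal-difference update: move Q[s, a] toward reward plus
        # discounted value of the best next action (no bootstrap at terminal)
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
        s = s2

# The greedy policy learned from trial and error: action 1 (move right)
# in every non-terminal state.
policy = np.argmax(Q, axis=1)
print(policy)
```

The update rule is the "positive or negative reinforcement" of the text in its simplest form: rewarding transitions raise the value of the action taken, and those values propagate backward through the state space over repeated episodes.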

Download Data-Driven Science and Engineering: Machine Learning, Dynamical Systems, and Control, 2nd Edition







