Author: Bin Shi, S.S. Iyengar
Publisher: Springer
Year: 2020
Pages: 138
Language: English
Format: pdf (true), epub
Size: 15.2 MB
This book studies mathematical theories of Machine Learning (ML). The first part of the book explores the optimality and adaptivity of choosing step sizes of gradient descent for escaping strict saddle points in non-convex optimization problems. In the second part, the authors propose algorithms to find local minima in non-convex optimization and to obtain global minima to some degree, based on Newton's second law without friction. In the third part, the authors study the problem of subspace clustering with noisy and missing data, a problem well motivated by practical applications: data subject to stochastic Gaussian noise and/or incomplete data with uniformly missing entries. In the last part, the authors introduce a novel VAR model with Elastic-Net regularization and its equivalent Bayesian model, allowing for both stable sparsity and group selection.
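For concreteness, the Elastic-Net-regularized VAR mentioned in the last part can be sketched as a penalized least-squares problem; the notation below (coefficient matrices \(A_i\), penalty weights \(\lambda_1, \lambda_2\)) is an illustrative choice on our side, not necessarily the authors' exact formulation:

```latex
\min_{A_1,\dots,A_p}\ \sum_{t=p+1}^{T} \Bigl\lVert x_t - \sum_{i=1}^{p} A_i\, x_{t-i} \Bigr\rVert_2^2
\ +\ \lambda_1 \sum_{i=1}^{p} \lVert A_i \rVert_1
\ +\ \lambda_2 \sum_{i=1}^{p} \lVert A_i \rVert_F^2
```

Here the \(\ell_1\) term promotes a stable sparsity pattern, while the squared Frobenius term handles correlated coefficients, together producing the group-selection behavior mentioned above.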
Machine learning and its associated techniques are among the most interesting topics of research, with the potential to change the way the world is headed. However, in the current state of the art, machine learning research does not have a solid theoretical framework that could form the basis for analysis and provide guidelines for experimental runs. This book is an attempt to identify and address existing issues in this field of great research interest, encompassing machine learning, artificial intelligence, deep neural networks, and related areas. For all the wonders these techniques can do, it is still a mystery how the basic concepts they so heavily depend on actually behave. Gradient descent is one of the most popular techniques and has been widely deployed in training neural networks. One of the challenges that arises when using gradient descent is the absence of guidelines on when it converges, be it to a local or a global minimum. In this book, we have attempted to address this crucial problem. The book offers readers novel theoretical frameworks that can be used to analyze convergence behavior.
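As a minimal, self-contained illustration of this convergence question (our own sketch, not code from the book), the following gradient descent on a simple one-dimensional non-convex function shows how the point it converges to depends on the initialization, with a fixed step size chosen small enough for the iterates to settle:

```python
def f(x):
    # Non-convex objective with two minima: a global one near x ≈ -1.30
    # and a shallower local one near x ≈ 1.13.
    return x**4 - 3 * x**2 + x

def grad_f(x):
    # Analytic derivative of f.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, step_size=0.01, max_iter=10_000, tol=1e-8):
    x = x0
    for _ in range(max_iter):
        g = grad_f(x)
        if abs(g) < tol:          # gradient (almost) vanished: stationary point reached
            break
        x = x - step_size * g     # plain gradient step with a fixed step size
    return x

print(gradient_descent(x0=-2.0))  # converges to the global minimum near x ≈ -1.30
print(gradient_descent(x0=2.0))   # converges to the shallower local minimum near x ≈ 1.13
```

With a much larger step size the iterates overshoot and diverge on this same function, which is the kind of behavior the step-size analysis in the first part of the book is concerned with.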
This book also represents a major contribution to the mathematical aspects of machine learning by the authors and their collaborators. Throughout the book, we have made sure the reader gains a good understanding of, and feel for, the theoretical frameworks that are and can be employed in the gradient descent technique, and of the ways of deploying them in the training of neural networks. To emphasize this, we have used results from some of our recent research along with a blend of what is being explored by other researchers. As readers work through the chapters, they are exposed to applications of great importance, such as subspace clustering and time series analysis. The book thus tries to strike a balance between the theory presented and the applications that go hand in hand with it. Through this book, we hope to make the reading more exciting and to have a strong impact on readers by providing them with the right tools in the machine learning domain.
- Provides a thorough look into the variety of mathematical theories of machine learning
- Presented in four parts, allowing readers to easily navigate the complex theories
- Includes extensive empirical studies on both synthetic and real-world application time series data
Download Mathematical Theories of Machine Learning - Theory and Applications