Variational Bayesian Learning Theory


Title: Variational Bayesian Learning Theory
Author: Shinichi Nakajima, Kazuho Watanabe
Publisher: Cambridge University Press
Year: 2019
Pages: 561
Language: English
Format: pdf (true)
Size: 10.1 MB

Variational Bayesian learning is one of the most popular methods in machine learning. Designed for researchers and graduate students in machine learning, this book summarizes recent developments in the non-asymptotic and asymptotic theory of variational Bayesian learning and suggests how this theory can be applied in practice. The authors begin by developing a basic framework with a focus on conjugacy, which enables the reader to derive tractable algorithms. They then summarize non-asymptotic theory, which, although limited in application to bilinear models, precisely describes the behavior of the variational Bayesian solution and reveals its sparsity-inducing mechanism. Finally, they summarize asymptotic theory, which reveals phase-transition phenomena depending on the prior setting, thus providing guidance on how to set hyperparameters for particular purposes. Detailed derivations allow readers to follow along without prior knowledge of the mathematical techniques specific to Bayesian learning.
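As a minimal illustration of the conjugacy that this framework relies on (a standard textbook example, not one drawn from the book itself): with Bernoulli observations and a Beta prior, the posterior stays in the Beta family, so Bayesian updating reduces to a closed-form parameter update.

```latex
\theta \sim \mathrm{Beta}(\alpha, \beta), \quad
x_1, \dots, x_N \sim \mathrm{Bernoulli}(\theta)
\;\Longrightarrow\;
\theta \mid x_{1:N} \sim \mathrm{Beta}\Big(\alpha + \sum_{n} x_n,\; \beta + N - \sum_{n} x_n\Big)
```

It is exactly this kind of closed-form update that conjugacy buys, and that variational Bayesian algorithms exploit factor by factor.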

Bayesian learning is a statistical inference method that provides estimators and other quantities computed from the posterior distribution, that is, the conditional distribution of unknown variables given observed variables (Bayes' rule for this posterior is recalled after the list below). Compared with point-estimation methods such as maximum likelihood (ML) estimation and maximum a posteriori (MAP) learning, Bayesian learning has the following advantages:

- Theoretically optimal.
- Uncertainty information is available.
- Model selection and hyperparameter estimation can be performed in a single framework.
- Less prone to overfitting.
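
For reference, the posterior behind all of these properties is given by Bayes' rule (written here in standard notation, with w for the unknown variables and D for the observed data; the symbols are chosen for illustration, not taken from the book):

```latex
p(w \mid D) = \frac{p(D \mid w)\, p(w)}{p(D)},
\qquad
p(D) = \int p(D \mid w)\, p(w)\, dw
```

The normalizing constant p(D), the marginal likelihood, is the integral that the next paragraph identifies as the computational bottleneck.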

On the other hand, Bayesian learning has a critical drawback: computing the posterior distribution is computationally hard in many practical models. This is because Bayesian learning requires expectation operations, i.e., integral computations, which cannot be performed analytically except in simple cases. Accordingly, various approximation methods, both deterministic and sampling-based, have been proposed. Variational Bayesian (VB) learning is one of the most popular deterministic approximations to Bayesian learning.
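To make the idea concrete, below is a minimal mean-field VB (coordinate-ascent) sketch for the classic univariate Gaussian with unknown mean and precision under a conjugate Normal-Gamma prior. This is a standard textbook example, not code from the book; the function name, hyperparameter defaults, and data are all illustrative.

```python
import numpy as np

# Mean-field VB for x_n ~ N(mu, 1/tau) with conjugate priors
#   mu | tau ~ N(mu0, 1/(lambda0 * tau)),   tau ~ Gamma(a0, b0)
# The posterior is approximated by a factorized q(mu, tau) = q(mu) q(tau),
# and the two factors are updated by coordinate ascent until convergence.

def vb_gaussian(x, mu0=0.0, lambda0=1.0, a0=1.0, b0=1.0, n_iter=50):
    N = len(x)
    xbar = np.mean(x)
    sum_x2 = np.sum(x ** 2)

    E_tau = a0 / b0                # initial guess for E_q[tau]
    a_N = a0 + (N + 1) / 2.0       # tau exponent collected from likelihood and prior

    for _ in range(n_iter):
        # Update q(mu) = N(mu_N, 1/lambda_N)
        mu_N = (lambda0 * mu0 + N * xbar) / (lambda0 + N)
        lambda_N = (lambda0 + N) * E_tau
        E_mu = mu_N
        E_mu2 = mu_N ** 2 + 1.0 / lambda_N   # second moment of q(mu)

        # Update q(tau) = Gamma(a_N, b_N) using the current moments of q(mu)
        b_N = (b0
               + 0.5 * (sum_x2 - 2 * N * xbar * E_mu + N * E_mu2)
               + 0.5 * lambda0 * (E_mu2 - 2 * mu0 * E_mu + mu0 ** 2))
        E_tau = a_N / b_N

    return mu_N, lambda_N, a_N, b_N

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=200)   # true mu = 2.0, true tau = 4.0
mu_N, lambda_N, a_N, b_N = vb_gaussian(x)
print(f"E[mu] ~ {mu_N:.3f}, E[tau] ~ {a_N / b_N:.3f}")
```

Each update has a closed form precisely because the priors are conjugate; for non-conjugate models these expectations generally have no analytic form, which is where the approximation framework and theory covered by the book become relevant.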

'This book presents a very thorough and useful explanation of classical (pre-deep-learning) mean-field variational Bayes. It covers basic algorithms, detailed derivations for various models (e.g. matrix factorization, GLMs, GMMs, HMMs), and advanced theory, including results on sparsity of the VB estimator, and asymptotic properties (generalization bounds).' Kevin Murphy, Research Scientist, Google Brain

Download Variational Bayesian Learning Theory
