Author: Vladimir Vovk, Alexander Gammerman, Glenn Shafer
Publisher: Springer
Year: 2022
Pages: 490
Language: English
Format: pdf (true), epub
Size: 29.3 MB
This book is about conformal prediction, an approach to prediction that originated in Machine Learning. The main feature of conformal prediction is the principled treatment of the reliability of predictions. The prediction algorithms described ― conformal predictors ― are provably valid in the sense that they evaluate the reliability of their own predictions in a way that is neither over-pessimistic nor over-optimistic (the latter being especially dangerous). The approach is still flexible enough to incorporate most of the existing powerful methods of Machine Learning. The book covers both key conformal predictors and the mathematical analysis of their properties.
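To give the flavor of the approach, here is a minimal sketch of split (inductive) conformal regression in Python. The toy data, the least-squares base predictor, and the absolute-residual nonconformity score are illustrative choices of ours, not the book's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data drawn i.i.d. (the randomness assumption): y = 2x + noise
X = rng.uniform(0, 10, size=200)
y = 2 * X + rng.normal(0, 1, size=200)

# Split into a proper training set and a calibration set
X_tr, y_tr, X_cal, y_cal = X[:100], y[:100], X[100:], y[100:]

# Base predictor: least-squares line fitted on the training part
slope, intercept = np.polyfit(X_tr, y_tr, 1)
predict = lambda x: slope * x + intercept

# Nonconformity scores on the calibration set: absolute residuals
scores = np.sort(np.abs(y_cal - predict(X_cal)))

# Quantile for significance level epsilon: the ceil((n+1)(1-eps))-th
# smallest score (if that rank exceeds n, the interval is unbounded)
epsilon = 0.1
n = len(scores)
rank = int(np.ceil((n + 1) * (1 - epsilon)))
q = scores[min(rank, n) - 1]

# Hedged prediction for a new object: an interval, not a bare number,
# covering the true label with probability at least 1 - epsilon
x_new = 5.0
print(f"90% interval: [{predict(x_new) - q:.2f}, {predict(x_new) + q:.2f}]")
```

The point is the hedging: instead of a single number, the predictor outputs an interval whose error rate is controlled by epsilon, however crude the underlying least-squares model may be.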
This book is about prediction algorithms that learn. The predictions these algorithms make are often imperfect, but they improve over time, and they are hedged: they incorporate a valid indication of their own accuracy and reliability. In most of the book we make the standard assumption of randomness: the examples the algorithm sees are drawn from some probability distribution, independently of one another. The main novelty of the book is that our algorithms learn and predict simultaneously, continually improving their performance as they make each new prediction and find out how accurate it is. It might seem surprising that this should be novel, but most existing algorithms for hedged prediction first learn from a training dataset and then predict without ever learning again. The few algorithms that do learn and predict simultaneously do not produce hedged predictions. In later chapters we relax the assumption of randomness to the assumption that the data come from an online compression model. We have written the book for researchers in and users of the theory of prediction under randomness, but it may also be useful to those in other disciplines who are interested in prediction under uncertainty.
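Concretely, the validity just mentioned is a coverage guarantee. In our notation (a paraphrase rather than a quotation from the book), a conformal predictor Γ^ε at significance level ε, fed exchangeable examples (x_1, y_1), ..., (x_n, y_n) and a new object x_{n+1}, outputs a prediction set satisfying

```latex
\[
  \mathbb{P}\bigl( y_{n+1} \in \Gamma^{\varepsilon}(x_1, y_1, \dots, x_n, y_n, x_{n+1}) \bigr)
  \;\ge\; 1 - \varepsilon .
\]
```

In the online protocol the guarantee is stronger still: for smoothed conformal predictors the errors at significance level ε are independent across trials and occur with probability exactly ε, so the long-run error frequency converges to ε.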
The rapid development of computer technology during the last several decades has made it possible to solve ever more difficult problems in a wide variety of fields. The development of software has been essential to this progress. The painstaking programming in machine code or assembly language that was once required to solve even simple problems has been replaced by programming in high-level object-oriented languages. We are concerned with the next natural step in this progression: the development of programs that can learn, i.e., automatically improve their performance with experience.
Recognition, diagnosis, and estimation can all be thought of as special cases of prediction: a person or a computer is given certain information and asked to predict the answer to a question. A broad discussion of learning would go beyond prediction to consider the problems faced by a robot, which needs to act as well as predict. The literature on Machine Learning, however, has emphasized prediction, and the research reported in this book is in that tradition. We are mostly interested in algorithms that learn to predict well.
Of course, not all work in Machine Learning is concerned with learning under randomness. In learning with expert advice, for example, randomness is replaced by a game-theoretic set-up; here a typical result is that the learner can predict almost as well as the best strategy in a pool of possible strategies. In reinforcement learning, which is concerned with rational decision-making in a dynamic environment, the standard assumption is Markovian. In this book, we will consider extensions of learning under randomness in Parts III and IV.
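For contrast, here is a minimal sketch of the expert-advice setting: the exponentially weighted average forecaster below (our illustration, with made-up experts and outcomes, not an algorithm from this book) incurs cumulative loss close to that of the best expert in hindsight, with no probabilistic assumption on the data:

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 1000, 5                          # rounds and number of experts
eta = np.sqrt(8 * np.log(N) / T)        # standard rate for losses in [0, 1]

weights = np.ones(N)
expert_loss = np.zeros(N)
learner_loss = 0.0

for t in range(T):
    advice = rng.uniform(0, 1, size=N)            # stand-in expert forecasts
    outcome = rng.uniform(0, 1)                   # stand-in outcome
    forecast = weights @ advice / weights.sum()   # weighted average forecast
    losses = (advice - outcome) ** 2              # square loss, bounded by 1
    learner_loss += (forecast - outcome) ** 2
    expert_loss += losses
    weights *= np.exp(-eta * losses)              # exponential weight update

print(f"learner: {learner_loss:.1f}  best expert: {expert_loss.min():.1f}")
```

The guarantee here is game-theoretic rather than statistical: the learner's regret against the best expert grows only like sqrt(T log N), whatever sequence of outcomes is played.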
Download Algorithmic Learning in a Random World, 2nd Edition