Author: Zhi-Hua Zhou, Yang Yu, Chao Qian
Publisher: Springer
Year: 2019
Pages: 361
Language: English
Format: pdf (true), djvu
Size: 10.1 MB
Many machine learning (ML) tasks involve solving complex optimization problems, such as those with non-differentiable, non-continuous, or non-unique objective functions; in some cases it can be difficult even to define an explicit objective function. Evolutionary learning applies evolutionary algorithms to such optimization problems in machine learning, and has yielded encouraging outcomes in many applications. However, due to the heuristic nature of evolutionary optimization, most outcomes to date have been empirical and lack theoretical support. This shortcoming has kept evolutionary learning from being well received in the machine learning community, which favors approaches with solid theoretical grounding.
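To make the kind of algorithm under discussion concrete, here is a minimal sketch of a (1+1) evolutionary algorithm maximizing OneMax (the number of 1-bits in a bit string), a standard benchmark in the theory of evolutionary computation. The blurb itself names no specific algorithm or problem; both are our illustrative choices.

```python
import random

def one_max(bits):
    """Fitness: number of 1-bits (to be maximized)."""
    return sum(bits)

def one_plus_one_ea(n, max_evals=10000, seed=0):
    """Toy (1+1)-EA on OneMax: one parent, bitwise mutation, elitist selection."""
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(max_evals):
        # Mutation: flip each bit independently with probability 1/n.
        child = [b ^ (rng.random() < 1.0 / n) for b in parent]
        # Selection: keep the child if it is at least as fit as the parent.
        if one_max(child) >= one_max(parent):
            parent = child
        if one_max(parent) == n:
            break  # global optimum (all ones) reached
    return parent
```

Note that the loop uses only fitness comparisons, never gradients, which is why such heuristics apply to the non-differentiable objectives mentioned above, and also why their analysis (e.g., expected running time until the optimum is found) requires the dedicated theoretical tools the book develops.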
Machine learning is a central topic of Artificial Intelligence (AI). It aims to learn generalizable models from data, with which system performance can be improved. As data can nowadays be continually acquired and accumulated in numerous applications such as image recognition and commodity recommendation, machine learning has played an increasingly important role in the success of these applications.
To learn a model, a set of instances (called a training set) is required. If each instance is given a target output, the task falls under supervised learning, which learns a model mapping from the instance space to the target space. According to the type of the target variable, supervised learning tasks can be roughly categorized into classification and regression: the target variable takes one of a finite number of categories in classification, and is continuous in regression. Typical supervised learning algorithms include decision trees, artificial neural networks, and support vector machines.
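The classification/regression distinction above can be sketched with the same instance space and two kinds of target. The 1-nearest-neighbor predictor below is a hypothetical toy learner chosen for brevity, not one named in the text; it returns the target of the closest training instance.

```python
def nearest_neighbor_predict(train, x):
    """train: list of (instance, target) pairs over a 1-D instance space.
    Returns the target of the training instance closest to x."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

# Classification: the target takes a finite number of categories.
clf_train = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
print(nearest_neighbor_predict(clf_train, 1.4))   # -> small

# Regression: the target is continuous.
reg_train = [(1.0, 1.1), (2.0, 3.9), (3.0, 9.2), (4.0, 15.8)]
print(nearest_neighbor_predict(reg_train, 2.2))   # -> 3.9
```

The same learner handles both tasks because only the type of the target changes, which is exactly the categorization criterion stated above.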
Recently there have been considerable efforts to address this lack of theoretical support. This book presents a range of those efforts, divided into four parts. Part I briefly introduces readers to evolutionary learning and provides some preliminaries, while Part II presents general theoretical tools for the analysis of running time and approximation performance in evolutionary algorithms. Based on these general tools, Part III presents a number of theoretical findings on major factors in evolutionary optimization, such as recombination, representation, inaccurate fitness evaluation, and population. In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, in which evolutionary learning offers excellent performance.
Download Evolutionary Learning: Advances in Theories and Algorithms