Demystifying Artificial Intelligence: Symbolic, Data-Driven, Statistical and Ethical AI

Posted by: literator, 13-07-2024, 17:00 · Comments: 0

Category: BOOKS » PROGRAMMING

Title: Demystifying Artificial Intelligence: Symbolic, Data-Driven, Statistical and Ethical AI
Author: Emmanuel Gillain
Publisher: De Gruyter
Year: 2024
Pages: 476
Language: English
Format: PDF (true), EPUB
Size: 34.1 MB

This book is intended for business professionals who want to understand the fundamental concepts of Artificial Intelligence (AI), their applications and their limitations. Built as a collaborative effort between academia and industry, it bridges the gap between theory and business application, demystifying AI through fundamental concepts and industry examples. The reader will find an overview of the different AI techniques used to search, plan, reason, learn, adapt, understand and interact. The book covers the two traditional paradigms in AI: the statistical and data-driven AI systems, which learn and perform by ingesting millions of data points into Machine Learning algorithms, and the consciously modelled AI systems, known as symbolic AI systems, which use explicit symbols to represent the world and draw conclusions. Rather than opposing these two paradigms, the book also shows how the different fields can complement each other.

Out of the different fields of AI, much of the attention in recent years has been focused on Machine Learning (ML), its subfield of artificial neural networks and Deep Learning, and Natural Language Processing (NLP).

Developing performant traditional Machine Learning systems requires a pipeline of tasks with multiple choices and fine-tuning decisions to reach an optimal result (the predictors, the model itself with its configuration and hyperparameters, etc.). The search space for finding the optimal parameters is sometimes complex, with many dimensions, so that mathematicians would classify the data scientist's task as a "high-dimensional combinatorial optimization" problem. Automated Machine Learning (AutoML) is an idea that emerged in the 1990s; its objective is essentially to automate the generation and selection of the best-performing algorithms and to optimize their performance without the help of data scientists. A data scientist applying AutoML techniques would typically need only a couple of lines of code to test multiple models with multiple hyperparameters in parallel and let the algorithm select the best model under some defined quality metric, as the sketch below illustrates. These methods have progressively been making their way into standard commercial products as a productivity tool that helps data scientists work faster and better. AutoML techniques clearly speed up the model development cycle, often yielding more performant models.
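To make the idea concrete, the following sketch (an illustration only, not taken from the book) uses scikit-learn's RandomizedSearchCV to automate one piece of this workflow: sampling many hyperparameter configurations, evaluating them in parallel, and keeping the best model under a chosen quality metric. Full AutoML libraries go further by also selecting among model families and preprocessing steps. The dataset and search space here are stand-ins chosen for brevity.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV, train_test_split

# Stand-in data: the classic Iris classification task.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The search space: a small slice of the "high-dimensional combinatorial
# optimization" problem described above.
param_distributions = {
    "n_estimators": [50, 100, 200],
    "max_depth": [None, 3, 5, 10],
    "min_samples_split": [2, 5, 10],
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions,
    n_iter=20,           # number of configurations to sample
    scoring="accuracy",  # the defined quality metric
    cv=5,                # 5-fold cross-validation per configuration
    n_jobs=-1,           # evaluate candidates in parallel
    random_state=0,
)
search.fit(X_train, y_train)

print("best hyperparameters:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))

The same pattern scales up: widen the search space, swap in other estimators, or hand the whole problem to a dedicated AutoML library that also chooses the model family automatically.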

Supported by an ever-increasing amount of data and processing power to train the algorithms, these research efforts have resulted in a series of breakthroughs, illustrated by impressive progress across very different fields: image classification, object detection and image generation in computer vision; machine translation; speech recognition and synthesis; text understanding and generation; complex board games; and more.

“Demystifying AI reveals its true power: not as a mysterious force, but as a tool for human progress, accessible to all who seek to understand it.” - Dr. Barak Chizi, Chief Data & Analytics Officer, KBC Group

Download Demystifying Artificial Intelligence: Symbolic, Data-Driven, Statistical and Ethical AI
