Authors: Mohamed Chetouani, Virginia Dignum, Paul Lukowicz, Carles Sierra
Publisher: Springer
Series: Lecture Notes in Artificial Intelligence
Year: 2023
Pages: 434
Language: English
Format: pdf (true)
Size: 34.6 MB
As a discipline, human-centered AI (HCAI) aims to create Artificial Intelligence (AI) systems that collaborate with humans, enhancing human capabilities and empowering humans to achieve their goals. That is, the focus is on amplifying and augmenting human abilities rather than displacing them. HCAI seeks to preserve human control in a way that ensures artificial intelligence meets our needs while also operating transparently, delivering equitable outcomes, and respecting human rights and ethical standards. Design methods that enable the representation of and adherence to values such as privacy protection, autonomy (human in control), and non-discrimination are core to HCAI. These themes are closely connected to some of the most fundamental challenges of AI.
Artificial neural networks provide a distributed computing technology that can be trained to approximate any computable function, and have enabled substantial advances in areas such as computer vision, robotics, speech recognition and natural language processing. This chapter provides an introduction to Artificial Neural Networks, with a review of the early history of perceptron learning. It presents a mathematical notation for multi-layer neural networks and shows how such networks can be iteratively trained by back-propagation of errors using labeled training data. It derives the back-propagation algorithm as a distributed form of gradient descent that can be scaled to train arbitrarily large networks given sufficient data and computing power.
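To make the idea concrete, the back-propagation procedure described above can be sketched in a few lines of NumPy. This is a minimal illustrative example, not the chapter's own code: a two-layer sigmoid network trained on the XOR problem, with the architecture, learning rate, and iteration count chosen purely for demonstration. The backward pass applies the chain rule layer by layer, which is exactly the "distributed form of gradient descent" the chapter derives.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases for a 2-4-1 network (sizes are illustrative).
W1 = rng.normal(size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))
b2 = np.zeros((1, 1))

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass: compute hidden activations and network output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))

    # Backward pass: propagate the error gradient layer by layer
    # (chain rule through the sigmoid nonlinearities).
    d_out = (out - y) * out * (1 - out)   # gradient at output pre-activation
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation

    # Gradient-descent updates on every weight and bias.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("loss before/after training:", losses[0], losses[-1])
```

The same update rule scales to arbitrarily deep networks: each layer receives the gradient from the layer above, multiplies by its local derivative, and passes the result down.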
Black-box Artificial Intelligence (AI) systems for automated decision making, often trained over (big) human data, map a user’s features into a class or a score without exposing why. This is problematic due to the lack of transparency and the possible biases the algorithms inherit from human prejudices and collection artefacts hidden in the training data, which can lead to unfair or wrong decisions. The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity, and understanding. Explainable AI (XAI) addresses such challenges, and for years different AI communities have studied these topics, leading to different definitions, evaluation protocols, motivations, and results. This chapter provides a reasoned introduction to the work on Explainable AI to date and surveys the literature, focusing on symbolic AI-related approaches. We motivate the need for XAI in real-world and large-scale applications while presenting state-of-the-art techniques and best practices and discussing the many open challenges.
Artificial intelligence, and in particular Machine Learning methods, is fast gaining ground. Models trained on large datasets and comprising numerous hidden layers, with up to a trillion parameters, are becoming common. Such models are difficult to explain to lay users with little understanding of the basis of machine learning, but they are also hard to interpret for those who designed and programmed them. The calculations carried out by the algorithm are not assigned an easily understandable meaning, and there are far too many of them to actually follow. The outputs of such models are, as a result, hard to predict and to explain. Why did the algorithm output that there is a cat in this picture? We don’t really know, certainly not without additional help in the form of explainability tools.
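One family of such explainability tools is model-agnostic: it probes a black-box model from the outside. The sketch below, under purely illustrative assumptions (the `black_box` function stands in for an opaque trained model), estimates each input feature's importance by permuting that feature and measuring how much the output changes, the idea behind permutation-importance and occlusion-style explanations.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    # Stand-in for an opaque model: the output depends strongly on
    # feature 0, weakly on feature 1, and not at all on feature 2.
    # In practice this would be a trained network we cannot inspect.
    return 3.0 * X[:, 0] + 0.1 * X[:, 1]

# Illustrative dataset: 200 samples, 3 features.
X = rng.normal(size=(200, 3))
baseline = black_box(X)

importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    # Shuffle column j, destroying its relationship to the output
    # while preserving its marginal distribution.
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(float(np.abs(black_box(Xp) - baseline).mean()))

print(importance)  # feature 0 should dominate; feature 2 should be ~0
```

The tool recovers which features the model actually relies on without ever looking inside it, which is precisely the kind of help the paragraph above refers to.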
The first section, Introduction to Human-centered AI, presents the main definitions and concepts covered in this volume.
The second section, Human-centered Machine Learning, includes several chapters on machine learning ranging from basic concepts of neural networks to interactive learning. This section also describes modern approaches such as transformers in natural language processing, speech processing, vision and multi-modal processing.
The third section, Explainable AI, deals with both technical and philosophical concepts. The section includes a conceptual overview of computational cognitive vision together with practical demonstrations.
The fourth section, Ethics, Law and Society in AI, introduces the main concepts of ethics and law. This section also discusses ethics in communication.
The fifth section, Argumentation, focuses on concepts of arguments and attacks. The concepts are illustrated with several concrete examples in cognitive technologies of learning and explainable inference or decision making.
The last section, Social Simulation, deals with agent-based social simulations that are used to investigate complex phenomena within social systems. The chapters show how they could be designed, evaluated and employed by decision makers.
Download Human-Centered Artificial Intelligence: Advanced Lectures