Author: Amina Adadi, Afaf Bouhoute
Publisher: CRC Press
Year: 2024
Pages: 286
Language: English
Format: PDF (true)
Size: 28.6 MB
Artificial Intelligence (AI) and Machine Learning (ML) are set to revolutionize all industries, and the Intelligent Transportation Systems (ITS) field is no exception. While ML models, especially Deep Learning models, achieve great accuracy, their outcomes are not amenable to human scrutiny and can hardly be explained. This is especially problematic for systems of a safety-critical nature such as transportation systems. Explainable AI (XAI) methods have been proposed to tackle this issue by producing human-interpretable representations of machine learning models while maintaining performance. These methods hold the potential to increase public acceptance of and trust in AI-based ITS.
Artificial Intelligence (AI), particularly Machine and Deep Learning, has significantly advanced Intelligent Transportation Systems (ITS) research and industry. Owing to their ability to recognize and classify patterns in large datasets, AI algorithms have been successfully applied to the major problems and challenges of traffic management and autonomous driving, e.g., sensing, perception, prediction, detection, and decision-making. However, in their current incarnation, AI models, especially Deep Neural Networks (DNNs), suffer from a lack of interpretability. Indeed, DNNs are not inherently structured to provide insight into their internal workings. This hinders the use and acceptance of these “black-box” models in systems of a safety-critical nature like ITS. Transportation often involves life-or-death decisions; entrusting such important decisions to a system that cannot explain or justify itself presents obvious dangers. Hence, explainability and ethical AI are coming under increasing scrutiny in the context of intelligent transportation.
Explainable Artificial Intelligence (XAI) is an emerging research field that aims to make AI models’ results more human-interpretable without sacrificing performance. XAI is regarded as a key enabler of ethical and sustainable AI adoption in transportation. In contrast with “black-box” systems, explainable and trustworthy intelligent transport systems lend themselves to easy assessment and control by system designers and regulators. This paves the way for easy and continual improvement, leading to enhanced performance and security as well as increased public trust.
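To illustrate the kind of post-hoc, model-agnostic explanation XAI methods produce, here is a minimal permutation-importance sketch. It is not taken from the book: the toy “braking” model, the feature names, and the data are all illustrative assumptions standing in for a trained black-box model on ITS sensor data.

```python
import random

# Toy "black-box" model: decide to brake (1) when speed is high relative
# to the distance to an obstacle. In practice this would be a trained DNN;
# this function is only an illustrative stand-in.
def model(speed, distance):
    return 1 if speed > distance * 0.5 else 0

# Hypothetical dataset of (speed, distance) samples with ground-truth labels.
random.seed(0)
data = [(random.uniform(0, 30), random.uniform(0, 60)) for _ in range(200)]
labels = [model(s, d) for s, d in data]  # model is perfect on this set

def accuracy(samples):
    return sum(model(s, d) == y for (s, d), y in zip(samples, labels)) / len(samples)

baseline = accuracy(data)

# Permutation importance: shuffle one feature across the dataset and measure
# the accuracy drop -- a simple, model-agnostic way to explain which inputs
# the black box actually relies on.
def permutation_importance(feature_idx):
    shuffled = [row[feature_idx] for row in data]
    random.shuffle(shuffled)
    perturbed = [
        (v, d) if feature_idx == 0 else (s, v)
        for (s, d), v in zip(data, shuffled)
    ]
    return baseline - accuracy(perturbed)

print("speed importance:   ", permutation_importance(0))
print("distance importance:", permutation_importance(1))
```

A regulator or system designer can read the resulting scores directly: a large accuracy drop when a feature is scrambled means the model depends heavily on it, without requiring any access to the model’s internals.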
Given its societal and technical implications, we believe that the field of XAI needs an in-depth investigation in the realm of ITS, especially in a post-pandemic era. This book aims to compile the state-of-the-art research and development of explainable models for ITS applications into a coherent structure.
Features:
Provides the necessary background for newcomers to the field (both academics and interested practitioners)
Presents a timely snapshot of explainable and interpretable models in ITS applications
Discusses ethical, societal, and legal implications of adopting XAI in the context of ITS
Identifies future research directions and open problems
Download Explainable Artificial Intelligence for Intelligent Transportation Systems