Author: Pethuru Raj, Utku Köse, Usha Sakthivel, Susila Nagarajan
Publisher: The Institution of Engineering and Technology
Year: 2023
Pages: 530
Language: English
Format: pdf (true)
Size: 30.0 MB
The world is keen to leverage multi-faceted AI techniques and tools to deploy and deliver the next generation of business and IT applications. Resource-intensive gadgets, machines, instruments, appliances, and equipment spread across a variety of environments are being empowered with AI competencies. Connected products are collectively or individually enabled to be intelligent in their operations, offerings, and outputs.
AI is being touted as the next-generation technology for visualizing and realizing a bevy of intelligent systems, networks, and environments. However, there are challenges associated with the widespread adoption of AI methods. As we hand more control to AI systems, we need to know how these AI models reach their decisions. Trust and transparency of AI systems are seen as a critical challenge. Building knowledge graphs and linking them with AI systems is recommended as a viable way to overcome this trust issue and as the way forward to fulfil the ideals of explainable AI.
The authors focus on explainable AI concepts, tools, frameworks, and techniques. To make the working of AI more transparent, they introduce knowledge graphs (KGs) as a means of bringing trust and transparency to the functioning of AI systems. They show how these technologies can be used to explain data fabric solutions and how intelligent applications can be applied to greater effect in finance and healthcare.
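As an illustration of the kind of linkage described here (the following sketch is not from the book; the model, prediction, and feature names are hypothetical), a model's prediction can be recorded as nodes and edges in a small RDF knowledge graph so that its provenance can later be queried:

# Minimal sketch, assuming the rdflib package is installed.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Record a single prediction together with the model that produced it
# and the input features it relied on.
pred = EX["prediction42"]
g.add((pred, RDF.type, EX.Prediction))
g.add((pred, EX.producedBy, EX["creditRiskModel"]))
g.add((pred, EX.hasLabel, Literal("high_risk")))
g.add((pred, EX.basedOnFeature, EX["debtToIncomeRatio"]))
g.add((pred, EX.basedOnFeature, EX["paymentHistory"]))

# Query the graph to answer "what was this decision based on?"
for _, _, feature in g.triples((pred, EX.basedOnFeature, None)):
    print(f"prediction42 was based on {feature}")

Because the provenance lives in the graph rather than inside the model, the same query works regardless of which AI model produced the prediction.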
Explainable Artificial Intelligence (XAI) is a set of promising algorithms and approaches that empower users to comprehend why and how AI models reach a particular decision. These AI models must be penetrative, pervasive, persuasive, trustworthy, and transparent. XAI plays a significant role in ensuring heightened confidence in the recommendations made by AI systems.
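A minimal, model-agnostic sketch of what such an explanation can look like in practice (this example is ours, not the book's; the dataset and model are chosen purely for illustration): permutation importance shuffles each input feature in turn and measures the resulting drop in accuracy, revealing which features the model actually relies on.

# Sketch using scikit-learn's permutation importance on a public dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and record the drop in accuracy;
# large drops indicate features the model depends on for its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, score in top[:5]:
    print(f"{name}: {score:.3f}")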
A growing number of AI models are being built and transitioned into microservices, which are front-ended with flexible application programming interfaces (APIs). They are increasingly used to automate a range of essential activities (personal, social, and professional). They are exposed as services so that they are publicly discoverable, network-accessible, and able to be accessed, assessed, and leveraged by a bevy of service clients running across a variety of input and output devices. They provide scores of next-generation services, including prediction, recommendation, detection, recognition, translation, summarization, and classification. Besides empowering software products and services to exhibit intelligent behaviors, AI is also becoming essential for IT infrastructure modules, platform software, middleware solutions, database systems, electronic devices, machinery on manufacturing floors, medical instruments, defense equipment, wares, and merchandise to be cognitive in their operations, outputs, and offerings.
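A hypothetical sketch of such a model-as-a-service setup (the endpoint name, payload format, and model file are assumptions, not taken from the book): a previously trained and serialized model is loaded behind a small Flask API and served over HTTP.

# Minimal sketch, assuming flask and joblib are installed and a trained
# model has been saved to model.joblib beforehand.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained model file

@app.route("/predict", methods=["POST"])
def predict():
    # Expected payload (assumed): {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features).tolist()
    return jsonify({"prediction": prediction})

if __name__ == "__main__":
    app.run(port=8080)

Any HTTP-capable client (a browser app, a mobile device, another microservice) can then consume the model without knowing anything about how it was trained.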
Explainable Artificial Intelligence (XAI): Concepts, enabling tools, technologies and applications is aimed primarily at industry and academic researchers, scientists, engineers, lecturers, and advanced students in the fields of IT and computer science, soft computing, AI/ML/DL, data science, the semantic web, knowledge engineering, and IoT. It will also prove a useful resource for software, product, and project managers and developers in these fields.
Download Explainable Artificial Intelligence (XAI): Concepts, enabling tools, technologies and applications