Explainable AI for Practitioners: Designing and Implementing Explainable ML Solutions

Posted by literator on 1-04-2023, 11:35


Title: Explainable AI for Practitioners: Designing and Implementing Explainable ML Solutions
Author: Michael Munn, David Pitman
Publisher: O’Reilly Media, Inc.
Year: 2023
Pages: 279
Language: English
Format: True PDF, True EPUB (Retail Copy)
Size: 39.0 MB

Most intermediate-level machine learning books focus on how to optimize models by increasing accuracy or decreasing prediction error. But this approach often overlooks the importance of understanding why and how your ML model makes the predictions that it does.

Explainability methods provide an essential toolkit for better understanding model behavior, and this practical guide brings together best-in-class techniques for model explainability. Experienced machine learning engineers and data scientists will learn hands-on how these techniques work, so they can apply these tools more easily in their daily workflow.

When developing Machine Learning (ML) models, I am sure all of you have asked the questions: “Oh, how did it get that right?” or “That’s weird, why would it predict that?” As software engineers, our first instinct is to trace through the code to find the answers. Unfortunately, this does not get us very far with ML models because their “code” is automatically generated, not human-readable, and may span a vast number (sometimes billions!) of parameters. One needs a special set of tools to understand ML models. Explainable AI (XAI) is a field of Machine Learning focused on developing and analyzing such tools.

Model explanations are not just a nice-to-have feature to satisfy our curiosity about how a model works. For practitioners, explanations are a must-have to ensure they are not flying blind. Machine learning models are notorious for being right for the wrong reason. A classic example of this, discussed in this book, is that of a medical imaging model where explanations revealed that the model relied on “pen marks” on X-ray images to make disease predictions.
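To make the idea concrete, here is a minimal, illustrative sketch (not code from the book) of how a feature-attribution method could surface such a spurious cue. It uses Integrated Gradients from the Captum library on a PyTorch image classifier; the untrained ResNet-18 and random tensor are placeholders for a trained X-ray model and a preprocessed image.

```python
# Illustrative sketch only: attributing an image classifier's prediction to
# input pixels with Integrated Gradients (Captum). The model and image below
# are placeholders, not the medical-imaging example from the book.
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients

model = models.resnet18(weights=None)  # stand-in for a trained X-ray classifier
model.eval()

image = torch.rand(1, 3, 224, 224)     # stand-in for a preprocessed X-ray image

ig = IntegratedGradients(model)
target_class = model(image).argmax(dim=1).item()

# Per-pixel attributions for the predicted class. If the strongest attributions
# consistently landed on pen-mark regions rather than anatomy, that would be
# exactly the "right for the wrong reason" failure described above.
attributions = ig.attribute(image, target=target_class, n_steps=50)
print(attributions.shape)  # same shape as the input: (1, 3, 224, 224)
```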

The rise of ML models in high-stakes decision-making has sparked a surge in the field of XAI with a plethora of techniques proposed across a variety of data modalities. The vast number of available techniques has been both a blessing and a curse for practitioners. At the heart of this issue is that there is no such thing as a perfect explanation. A good explanation must balance faithfulness to the model with human intelligibility and must offer meaningful insights. Achieving this is nontrivial. For instance, an explanation that translates the model into a giant mathematical formula is faithful but not intelligible, and hence not useful. Different explanation methods strike a different trade-off between faithfulness, human intelligibility, and computational efficiency. Furthermore, for any ML-based decision-making system, there are several stakeholders interested in explanations from different perspectives. For instance, end users may seek explanations to understand the factors behind the decisions they receive, while regulators may seek explanations to assess whether the model’s reasoning is sound and unbiased. All these nuances leave practitioners struggling to set up the appropriate explanation framework for their system. This book fills that gap.

This essential book provides:

A detailed look at some of the most useful and commonly used explainability techniques, highlighting pros and cons to help you choose the best tool for your needs
Tips and best practices for implementing these techniques
A guide to interacting with explainability and how to avoid common pitfalls
The knowledge you need to incorporate explainability in your ML workflow to help build more robust ML systems
Advice about explainable AI techniques, including how to apply techniques to models that consume tabular, image, or text data
Example implementation code in Python using well-known explainability libraries for models built in Keras and TensorFlow 2.0, PyTorch, and HuggingFace; a brief flavor of this kind of code is sketched below
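To give a flavor of what that last bullet means in practice, here is a short, hedged sketch using the SHAP library to explain a simple tabular scikit-learn model. The dataset and regressor are illustrative stand-ins chosen for brevity, not an example taken from the book.

```python
# Illustrative sketch only: explaining a tabular model with SHAP values.
# The dataset and regressor below are stand-ins, not code from the book.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes per-feature contributions (SHAP values) for each
# individual prediction, relative to the model's expected output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])   # shape: (100, n_features)

# Global summary: which features drive predictions, and in which direction.
shap.summary_plot(shap_values, X[:100], feature_names=data.feature_names)
```

The book goes considerably further than this, covering comparable techniques for image and text models built with Keras/TensorFlow, PyTorch, and HuggingFace, as listed above.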

Download Explainable AI for Practitioners: Designing and Implementing Explainable ML Solutions