Responsible AI in Practice: A Practical Guide to Safe and Human AI

Posted by literator on 26-01-2025, 15:37, Comments: 0

Category: BOOKS » PROGRAMMING

Title: Responsible AI in Practice: A Practical Guide to Safe and Human AI
Author: Toju Duke, Paolo Giudici
Publisher: Apress
Year: 2025
Pages: 216
Language: English
Format: pdf, epub (true), mobi
Size: 10.1 MB

This is the first practical book on AI risk assessment and management, enabling you to evaluate and implement safe and accurate AI models and applications. It features risk assessment frameworks, statistical metrics and code, a risk taxonomy curated from real-world case studies, and insights into AI regulation and policy. It is an essential tool for AI governance teams, AI auditors, AI ethicists, Machine Learning (ML) practitioners, Responsible AI practitioners, and Computer Science and Data Science students building safe and trustworthy AI systems across businesses, organizations, and universities.

The centerpiece of this book is a risk management and assessment framework titled “Safe Human-centered AI (SAFE-HAI),” which highlights AI risks across the following Responsible AI principles: accuracy, sustainability and robustness, explainability, transparency and accountability, fairness, privacy and human rights, human-centered AI, and AI governance. Using statistical metrics such as the Area Under the Curve (AUC), Rank Graduation Accuracy, and Shapley values, you will learn to apply Lorenz curves to measure risk and inequality across the different principles, and you will be equipped with a taxonomy/scoring rubric to identify and mitigate the risks you find.
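
As a rough illustration of how such metrics can be computed, here is a minimal Python sketch, assuming scikit-learn and NumPy; the classifier, the dataset, and the per-principle risk scores are hypothetical stand-ins, not the book's case-study data. It computes the AUC of a toy model and the points of a Lorenz curve over illustrative risk scores.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy classifier to illustrate AUC.
X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))

def lorenz_curve(values):
    """Cumulative share of total risk held by the lowest-scoring units."""
    v = np.sort(np.asarray(values, dtype=float))
    cum = np.cumsum(v) / v.sum()
    return np.insert(cum, 0, 0.0)  # the curve starts at the origin

# Hypothetical per-principle risk scores (fairness, privacy, robustness, ...).
risk_scores = [0.10, 0.15, 0.20, 0.25, 0.30]
print("Lorenz curve points:", lorenz_curve(risk_scores).round(3))
```

The further the Lorenz curve sags below the diagonal, the more unevenly risk is concentrated across the principles, which is the kind of inequality the book's scoring rubric is meant to surface.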

Robustness is another fundamental principle of Responsible AI and of AI performance in general. It refers to the ability of an ML model to withstand uncertainties, such as perturbations, adversarial conditions, and cyber attacks, and to perform accurately in different contexts. Given the current concerns and ongoing systemic risks identified in AI systems, particularly Generative AI, building robust ML models is not only advisable but critical. An AI application is deemed robust if its average Rank Graduation Robustness (RGR) is significantly higher than a given bound, determined by a set risk appetite. We can also evaluate whether a perturbation significantly deteriorates the predictions obtained with the original data using a similar resampling test, implemented in the Python code referenced in the case study and provided in the appendix of the book.
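
To make the resampling idea concrete, the following is a hedged sketch, not the book's appendix code: it simulates a perturbation with Gaussian noise and bootstraps the resulting drop in accuracy, which stands in here as a simple proxy for the RGR statistic; the model, noise scale, and replication count are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Simulated perturbation: additive Gaussian noise on the test inputs.
X_pert = X_te + rng.normal(scale=0.5, size=X_te.shape)

# Bootstrap the drop in accuracy caused by the perturbation.
n, drops = len(y_te), []
for _ in range(200):
    idx = rng.integers(0, n, n)  # resample test rows with replacement
    base = accuracy_score(y_te[idx], model.predict(X_te[idx]))
    pert = accuracy_score(y_te[idx], model.predict(X_pert[idx]))
    drops.append(base - pert)
drops = np.array(drops)

lo, hi = np.quantile(drops, [0.025, 0.975])
print(f"mean accuracy drop: {drops.mean():.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
# If the interval exceeds the bound set by the risk appetite, flag the model.
```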

In the Machine Learning research community, the growing importance of the explainability requirement has led to an increase in publications and workshops, including conferences dedicated to Explainable Artificial Intelligence (XAI), among them the World Conference on Explainable AI. Indeed, some Machine Learning models are explainable by design and need no further refinement. For example, linear and logistic regression have seen recent advances thanks to regularization methods such as Lasso, which also performs feature selection, and Ridge. Other models, such as tree models, random forests, bagging, and boosting models, are much less explainable. However, these models have a built-in mechanism, the feature importance plot, which calculates the importance of each variable in terms of the splits (or average splits) it generates in the tree (or forest) model.
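
As a quick sketch of that built-in mechanism, the snippet below fits a scikit-learn random forest and ranks its impurity-based feature importances, which are derived from the splits each variable generates across the forest; the dataset and hyperparameters are arbitrary example choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(data.data, data.target)

# Rank features by mean decrease in impurity across the trees of the forest.
order = np.argsort(forest.feature_importances_)[::-1]
for i in order[:5]:
    print(f"{data.feature_names[i]:<25} {forest.feature_importances_[i]:.3f}")
```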

Download Responsible AI in Practice: A Practical Guide to Safe and Human AI



