Building Responsible AI Algorithms: A Framework for Transparency, Fairness, Safety, Privacy, and Robustness


Title: Building Responsible AI Algorithms: A Framework for Transparency, Fairness, Safety, Privacy, and Robustness
Author: Toju Duke
Publisher: Apress
Year: 2023
Pages: 193
Language: English
Format: pdf, epub, mobi
Size: 10.1 MB

This book introduces a Responsible AI framework and guides you through processes to apply at each stage of the machine learning (ML) life cycle, from problem definition to deployment, to reduce and mitigate the risks and harms found in artificial intelligence (AI) technologies. AI can solve many of today's problems if implemented correctly and responsibly. This book helps you avoid negative impacts, some of which have caused loss of life, and develop models that are fair, transparent, safe, secure, and robust.

The approach in this book raises your awareness of the missteps that can lead to negative outcomes in AI technologies and provides a Responsible AI framework for delivering responsible and ethical results in ML. It begins with an examination of the foundational elements of responsibility, principles, and data. Next comes guidance on implementation, addressing issues such as fairness, transparency, safety, privacy, and robustness. The book helps you think responsibly while building AI and ML models and guides you through practical steps aimed at delivering responsible ML models, datasets, and products for your end users and customers.

Although AI, led by deep learning and neural networks, has introduced several amazing, mind-blowing inventions to the world, it has also raised another problem: unexplainable systems. This is where the algorithm's calculation process turns it into what is commonly referred to as a "black box." These black boxes make the ML model so opaque that it becomes impossible to interpret. Even the engineers, developers, or research/data scientists who created the algorithm can't understand or explain the reasons behind its decisions or output. Certain use cases demand some form of interpretability of the model's decision-making process, for example a financial services tool that supports a loan approval process. Such a tool requires proper vetting for bias, which needs an auditable and explainable system in order to pass regulatory inspections and tests and to provide human control over the agent's decisions. European Union regulation gives consumers the "right to explanation of the decision reached after such assessment and to challenge the decision" if it was affected by AI algorithms.

For those reasons and more, it's crucial to understand how an AI system arrives at a specific output. You must be able to ensure the system is working as intended and that it meets regulatory requirements, and you must be able to provide an explanation to the end user affected by the decision. Explainable AI (XAI) is a set of tools and frameworks that provides explanations and interpretations of an ML model's predictions and output. XAI helps debug and improve model performance and helps promote user trust, auditability, and the overall adoption of AI.
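To make the idea concrete, here is a minimal, illustrative sketch of one common model-agnostic explanation technique, permutation importance, using scikit-learn. The synthetic dataset, model choice, and feature indices are placeholders for this sketch, not examples taken from the book.

# A minimal sketch of model explanation via permutation importance.
# The data and model here are synthetic placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for, e.g., a loan-approval dataset.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the model's score
# drops, giving a model-agnostic view of which inputs drive the output.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature_{i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

Because permutation importance treats the model as a black box, it is a reasonable first check when an opaque model must be audited or explained to a regulator or end user.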

What You Will Learn:
Build AI/ML models using Responsible AI frameworks and processes
Document information on your datasets and improve data quality
Measure fairness metrics in ML models (a minimal example follows this list)
Identify harms and risks per task and run safety evaluations on ML models
Create transparent AI/ML models
Develop Responsible AI principles and organizational guidelines
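
As referenced in the list above, here is a minimal sketch of one widely used fairness check, the demographic parity difference, computed by hand with NumPy. The predictions and group labels are synthetic assumptions for illustration; the book's own framework may use different metrics and tooling.

# A minimal sketch of one fairness metric: demographic parity difference.
# Predictions and group membership are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
y_pred = rng.integers(0, 2, size=1000)      # model's binary decisions
group = rng.choice(["A", "B"], size=1000)   # sensitive attribute, e.g. demographic group

# Selection rate = fraction of positive decisions within each group.
rate_a = y_pred[group == "A"].mean()
rate_b = y_pred[group == "B"].mean()

# A difference of 0 means both groups receive positive decisions at the
# same rate; larger values indicate a disparity worth investigating.
print(f"selection rate A: {rate_a:.3f}, B: {rate_b:.3f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.3f}")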

Who This Book Is For:
AI and ML practitioners looking for guidance on building models that are fair, transparent, and ethical
Those seeking awareness of the missteps that can lead to unintentional bias and harm from their AI algorithms
Policy makers planning to craft laws, policies, and regulations that promote fairness and equity in automated algorithms
