Explainable AI (XAI)

Build Ethical and Transparent AI Systems. Master skills in explainability techniques and ethical AI development to create trustworthy and transparent machine learning solutions.

Instructor: Brinnae Bent, PhD

Intermediate Level • 1 month at 10 hours a week • Flexible Schedule

What You'll Learn

  • Implement XAI approaches to enhance transparency, trust, robustness, and ethics in decision-making processes.
  • Build interpretable models in Python, including decision trees, regression models, and neural networks.
  • Apply advanced techniques like LIME, SHAP, and explore explainability for LLMs and computer vision models.

Skills You'll Gain

Image Analysis
Machine Learning
Machine Learning Methods
Artificial Intelligence
Predictive Modeling
Visualization (Computer Graphics)
Predictive Analytics
Applied Machine Learning
Generative AI
Large Language Modeling
Decision Tree Learning
Artificial Neural Networks

Shareable Certificate

Earn a shareable certificate to add to your LinkedIn profile

Outcomes

  • Learn in-demand skills from university and industry experts
  • Master a subject or tool with hands-on projects
  • Develop a deep understanding of key concepts
  • Earn a career certificate from Duke University

3 course series

As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course provides a comprehensive introduction to Explainable AI (XAI), empowering you to develop AI solutions that are aligned with responsible AI principles. Through discussions, case studies, and real-world examples, you will gain the following skills:

  1. Define key XAI terminology and concepts, including interpretability, explainability, and transparency.
  2. Evaluate different interpretable and explainable approaches, understanding their trade-offs and applications.
  3. Integrate XAI explanations into decision-making processes for enhanced transparency and trust.
  4. Assess XAI systems for robustness, privacy, and ethical considerations, ensuring responsible AI development.
  5. Apply XAI techniques to cutting-edge areas like Generative AI, staying ahead of emerging trends.

This course is ideal for AI professionals, data scientists, machine learning engineers, product managers, and anyone involved in developing or deploying AI systems. By mastering XAI, you'll be equipped to create AI solutions that are not only powerful but also interpretable, ethical, and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have experience building AI products and a basic understanding of machine learning concepts like supervised learning and neural networks. The course covers explainable AI techniques and applications without deep technical details.

As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Interpretable Machine Learning, empowering you to develop AI solutions that are aligned with responsible AI principles. You will also gain an understanding of the emerging field of Mechanistic Interpretability and its use in understanding large language models. Through discussions, case studies, programming labs, and real-world examples, you will gain the following skills:

  1. Describe interpretable machine learning and differentiate between interpretability and explainability.
  2. Explain and implement regression models in Python.
  3. Demonstrate knowledge of generalized models in Python.
  4. Explain and implement decision trees in Python.
  5. Demonstrate knowledge of decision rules in Python.
  6. Define and explain neural network interpretable model approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks.
  7. Explain foundational Mechanistic Interpretability concepts, including features and circuits.
  8. Describe the Superposition Hypothesis.
  9. Define Representation Learning and be able to analyze current research on scaling Representation Learning to LLMs.

This course is ideal for data scientists or machine learning engineers who have a firm grasp of machine learning but have had little exposure to interpretability concepts. By mastering Interpretable Machine Learning approaches, you'll be equipped to create AI solutions that are not only powerful but also ethical and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.
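To give a flavor of the interpretable models covered above: regression models are considered interpretable because their fitted coefficients read directly as effect sizes. A minimal sketch of simple linear regression fit in closed form (the data and function name here are illustrative, not course material):

```python
# Illustrative sketch of an interpretable model: ordinary least squares
# simple linear regression, fit in closed form from summary statistics.

def fit_simple_ols(xs, ys):
    """Return (intercept, slope) minimizing squared error for y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    slope = cov_xy / var_x          # reads as "change in y per unit of x"
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Toy data: the slope below is directly human-readable, which is exactly
# what makes regression an interpretable model.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.0, 9.2]
intercept, slope = fit_simple_ols(xs, ys)
print(intercept, slope)  # prints roughly 0.95 and 2.04
```

Contrast this with a deep neural network, where no single parameter carries a human-readable meaning; that gap is what motivates the interpretable-by-design approaches in this course.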

As Artificial Intelligence (AI) becomes integrated into high-risk domains like healthcare, finance, and criminal justice, it is critical that those responsible for building these systems think outside the black box and develop systems that are not only accurate but also transparent and trustworthy. This course is a comprehensive, hands-on guide to Explainable Machine Learning (XAI), empowering you to develop AI solutions that are aligned with responsible AI principles. Through discussions, case studies, programming labs, and real-world examples, you will gain the following skills:

  1. Implement local explainable techniques like LIME, SHAP, and ICE plots using Python.
  2. Implement global explainable techniques such as Partial Dependence Plots (PDP) and Accumulated Local Effects (ALE) plots in Python.
  3. Apply example-based explanation techniques to explain machine learning models using Python.
  4. Visualize and explain neural network models using SOTA techniques in Python.
  5. Critically evaluate interpretable attention and saliency methods for transformer model explanations.
  6. Explore emerging approaches to explainability for large language models (LLMs) and generative computer vision models.

This course is ideal for data scientists or machine learning engineers who have a firm grasp of machine learning but have had little exposure to XAI concepts. By mastering XAI approaches, you'll be equipped to create AI solutions that are not only powerful but also interpretable, ethical, and trustworthy, solving critical challenges in domains like healthcare, finance, and criminal justice. To succeed in this course, you should have an intermediate understanding of machine learning concepts like supervised learning and neural networks.
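As a taste of the global techniques named above: a Partial Dependence Plot averages a model's predictions over the dataset while pinning one feature to each value on a grid. A minimal from-scratch sketch (the toy model and data below are hypothetical, not course material; the course labs use established Python libraries):

```python
# Illustrative sketch of Partial Dependence: for each grid value v, pin the
# chosen feature to v in every row, predict, and average the predictions.

def partial_dependence(model, X, feature_idx, grid):
    """Return the mean prediction over X with feature `feature_idx` pinned to each grid value."""
    pd_values = []
    for v in grid:
        total = 0.0
        for row in X:
            modified = list(row)
            modified[feature_idx] = v  # pin the feature of interest
            total += model(modified)
        pd_values.append(total / len(X))
    return pd_values

# Toy black-box model: linear in feature 0, quadratic in feature 1.
def model(x):
    return 2.0 * x[0] + x[1] ** 2

X = [[1.0, 0.0], [2.0, 1.0], [3.0, -1.0]]
grid = [0.0, 1.0, 2.0]
print(partial_dependence(model, X, 0, grid))  # rises linearly in feature 0
```

Plotting `grid` against the returned averages is the PDP itself; local variants such as ICE plots keep one curve per row instead of averaging them away.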

Learner Testimonials

Felipe M. • Learner since 2018

To be able to take courses at my own pace and rhythm has been an amazing experience. I can learn whenever it fits my schedule and mood.

Jennifer J. • Learner since 2020

I directly applied the concepts and skills I learned from my courses to an exciting new project at work.

Larry W. • Learner since 2021

When I need courses on topics that my university doesn't offer, Coursera is one of the best places to go.

Chaitanya A. • Learner since 2021

Learning isn't just about being better at your job: it's so much more than that. Coursera allows me to learn without limits.