Explain and implement model-agnostic explainability methods.
Visualize and explain neural network models using state-of-the-art (SOTA) techniques.
Describe emerging approaches to explainability in large language models (LLMs) and generative computer vision.
Learn new concepts from industry experts
Gain a foundational understanding of a subject or tool
Develop job-relevant skills with hands-on projects
Earn a shareable career certificate
In this module, you will be introduced to the concept of model-agnostic explainability and explore techniques and approaches for local and global explanations. You will learn how to explain and implement local explainability techniques (LIME, SHAP, and ICE plots), global explainability techniques (functional decomposition, PDP, and ALE plots), and example-based explanations in Python. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
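To give a flavor of the local and global techniques above, here is a minimal NumPy sketch of ICE curves and a partial dependence plot (PDP). The `model` function is a hypothetical stand-in for any black-box predictor; real labs would use a trained model and a library such as scikit-learn or SHAP.

```python
import numpy as np

# Hypothetical black-box model: any callable mapping an (n, d) array of
# inputs to (n,) predictions works here. This toy function is an assumption.
def model(X):
    return 2.0 * X[:, 0] + np.sin(X[:, 1])

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))

def ice_curves(model, X, feature, grid):
    """One curve per instance: vary `feature` over `grid`, hold the rest fixed."""
    curves = np.empty((len(X), len(grid)))
    for j, v in enumerate(grid):
        Xv = X.copy()
        Xv[:, feature] = v            # intervene on the feature of interest
        curves[:, j] = model(Xv)
    return curves

grid = np.linspace(-1, 1, 11)
ice = ice_curves(model, X, feature=0, grid=grid)
pdp = ice.mean(axis=0)                # the PDP is the average of the ICE curves
```

Because the toy model is linear in feature 0, the resulting PDP is a straight line with slope 2; on a real model the curve reveals the feature's average marginal effect.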
In this module, you will be introduced to the concept of explainable deep learning and will explore techniques and approaches for explaining neural networks. You will learn how to explain and implement neural network visualization techniques, demonstrate knowledge of activation vectors in Python, and recognize and critique interpretable attention and saliency methods. You will apply these learnings through discussions, guided programming labs and case studies, and a quiz assessment.
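As a taste of the saliency methods covered here, the sketch below computes a gradient-based saliency map for a tiny two-layer ReLU network by hand. The random weights are hypothetical stand-ins for a trained model; in practice you would use autograd in a framework like PyTorch rather than manual backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: random weights stand in for a trained model.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward(x):
    h = np.maximum(0.0, x @ W1)       # hidden ReLU layer
    return h, (h @ W2).item()         # scalar output score

def saliency(x):
    """|d score / d input|, computed by manual backprop through the net."""
    h, _ = forward(x)
    mask = (h > 0).astype(float)      # ReLU gradient: 1 where active, else 0
    grad_h = W2[:, 0] * mask          # d score / d hidden activations
    return np.abs(W1 @ grad_h)        # magnitude of input-feature gradients

x = rng.normal(size=4)
s = saliency(x)                       # one importance value per input feature
```

Larger values in `s` mark input features whose small perturbations most change the score, which is the core intuition behind gradient saliency maps for images.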
In this module, you will be introduced to the concept of explainable generative AI. You will learn how to explain emerging approaches to explainability in LLMs, generative computer vision, and multimodal models. You will apply these learnings through discussions, guided programming labs, and a quiz assessment.
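One emerging (and debated) explanation signal for LLMs is inspecting attention weights. The sketch below computes scaled dot-product attention weights for a hypothetical five-token input; the random query/key vectors are assumptions standing in for a real model's hidden states.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical token embeddings; a real LLM would supply these hidden states.
tokens = ["Explainability", "matters", "for", "language", "models"]
Q = rng.normal(size=(5, 8))                 # query vectors, one per token
K = rng.normal(size=(5, 8))                 # key vectors, one per token

# Scaled dot-product attention: row i shows how much token i attends
# to every other token, a common signal for attention-based explanations.
weights = softmax(Q @ K.T / np.sqrt(8))

# Which token does the final token attend to most strongly?
top = tokens[int(weights[-1].argmax())]
```

Each row of `weights` sums to 1, so it can be read as a distribution over the input tokens; whether such weights are faithful explanations is itself an open question explored in this module.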