Describe and implement regression and generalized interpretable models
Demonstrate knowledge of decision trees, rules, and interpretable neural networks
Explain foundational Mechanistic Interpretability concepts, hypotheses, and experiments
In this module, you will be introduced to the concepts of regression and generalized models for interpretability. You will learn how to describe interpretable machine learning, differentiate between interpretability and explainability, explain and implement regression models in Python, and demonstrate knowledge of generalized models in Python. You will apply what you learn through discussions, guided programming labs, and a quiz assessment.
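To give a feel for what the guided labs involve, here is a minimal, illustrative sketch of an interpretable regression model in Python. It is not taken from the course materials; the synthetic data, feature names, and use of scikit-learn are assumptions made for brevity.

```python
# Minimal sketch: fit a linear regression and read its coefficients directly,
# which is the basic sense in which linear models are interpretable by design.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # three synthetic features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 + rng.normal(scale=0.1, size=200)

model = LinearRegression().fit(X, y)

# Each coefficient is the expected change in the target per unit change
# in that feature, holding the other features fixed.
for name, coef in zip(["x0", "x1", "x2"], model.coef_):
    print(f"{name}: {coef:+.3f}")
print(f"intercept: {model.intercept_:+.3f}")
```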
In this module, you will be introduced to the concepts of decision trees, decision rules, and interpretability in neural networks. You will learn how to explain and implement decision trees and decision rules in Python, and how to define and explain interpretable neural network approaches, including prototype-based networks, monotonic networks, and Kolmogorov-Arnold networks. You will apply what you learn through discussions, guided programming labs, and a quiz assessment.
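As an illustration of the decision-tree portion of the labs, the following sketch fits a small tree and prints its learned rules as text. It is an assumed example rather than course code; the Iris dataset, the depth limit, and scikit-learn's export_text helper are choices made for illustration.

```python
# Minimal sketch: a shallow decision tree whose learned rules can be printed
# and read as a chain of if/else conditions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# export_text renders the fitted tree as human-readable decision rules.
print(export_text(tree, feature_names=list(iris.feature_names)))
```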
In this module, you will be introduced to the concept of Mechanistic Interpretability. You will learn how to explain foundational Mechanistic Interpretability concepts, including features and circuits; describe the Superposition Hypothesis; and define Representation Learning so that you can analyze current research on scaling it to LLMs. You will apply what you learn through discussions, guided programming labs, and a quiz assessment.
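For a rough intuition about the Superposition Hypothesis covered in this module, the sketch below packs more sparse features than neurons into a single linear layer and reads them back. It is an added illustration, not course material; the array sizes, random feature directions, and use of NumPy are assumptions.

```python
# Minimal sketch: with sparse activations, many feature directions can share a
# low-dimensional space ("superposition") and still be read back with only
# modest interference from the other directions.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_neurons = 100, 32                # more features than dimensions

# Assign each feature a random unit direction in neuron space.
W = rng.normal(size=(n_features, n_neurons))
W /= np.linalg.norm(W, axis=1, keepdims=True)

x = np.zeros(n_features)
active = [2, 11]
x[active] = 1.0                                # sparse input: only two features on

h = x @ W                                      # features packed into 32 dimensions
x_hat = h @ W.T                                # project back onto every feature direction

print("recovered at active features:", np.round(x_hat[active], 2))
print("largest interference elsewhere:",
      np.round(np.max(np.abs(np.delete(x_hat, active))), 2))
```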