- Interpretable Machine Learning - Christoph Molnar
The book has also been the foundation of my own career; first, it inspired me to do a PhD on interpretable machine learning, and later it encouraged me to become a self-employed writer, educator, and consultant.
- 2 Interpretability – Interpretable Machine Learning
What it means for interpretable machine learning: The explanation should predict the event as truthfully as possible, which in machine learning is sometimes called fidelity.
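As a minimal illustration of fidelity (my own sketch, not code from the book): train an interpretable surrogate on a black box's predictions and measure how often the two agree. The models and variable names below are placeholders; only scikit-learn and NumPy are assumed.

```python
# Hedged sketch of fidelity: how often an interpretable surrogate
# reproduces the black box's predictions. Models and data are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: fraction of instances where surrogate and black box agree.
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(f"fidelity = {fidelity:.3f}")
```

A fidelity near 1.0 means the surrogate's explanation can be trusted as a description of the black box on this data.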
- 1 Introduction – Interpretable Machine Learning
Interpretable Machine Learning, or Explainable AI, really exploded as a field around 2015 (Molnar, Casalicchio, and Bischl 2020). In particular, the subfield of model-agnostic interpretability, which offers methods that work for any model, gained a lot of attention.
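Permutation feature importance is a handy example of what "model-agnostic" means in practice. The sketch below (my own, with illustrative names) touches only a predict function and data, never the model's internals; scikit-learn ships a production version as sklearn.inspection.permutation_importance.

```python
# Hedged sketch of a model-agnostic method: permutation feature importance.
# It needs only a predict function and data, so it works for any model.
import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = accuracy_score(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # Shuffling one column breaks its association with the target.
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - accuracy_score(y, predict(X_perm)))
        importances.append(np.mean(drops))
    return np.array(importances)

# Works unchanged for a random forest, a neural network, or anything else:
# importances = permutation_importance(model.predict, X_test, y_test)
```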
- 4 Methods Overview – Interpretable Machine Learning
Figure 4.4: The big picture of (model-agnostic) interpretable machine learning. The real world goes through many layers before it reaches the human in the form of explanations.
- 18 SHAP – Interpretable Machine Learning - Christoph Molnar
SHAP connects LIME and Shapley values. This is very useful for understanding both methods better, and it also helps to unify the field of interpretable machine learning. SHAP has a fast implementation for tree-based models; I believe this was key to SHAP's popularity, because the biggest barrier to adoption of Shapley values is the slow computation.
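A minimal usage sketch of that fast tree path via the shap package (assuming shap and scikit-learn are installed; the model and data here are placeholders):

```python
# Hedged sketch of TreeSHAP via the shap package (assumes shap and
# scikit-learn are installed; model and data are placeholders).
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)    # fast, exact path for tree ensembles
shap_values = explainer.shap_values(X)   # one attribution per feature per instance

# Local accuracy: base value plus attributions recovers each prediction.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X[:1])[0])
```

TreeExplainer computes exact Shapley values for tree ensembles in polynomial time, instead of the exponential cost of the naive computation, which is what made the method practical at scale.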
- Appendix A — Machine Learning Terms – Interpretable Machine Learning - Christoph Molnar
Interpretable Machine Learning refers to methods and models that make the behavior and predictions of machine learning systems understandable to humans. A Dataset is a table with the data from which the machine learns.
- 16 Scoped Rules (Anchors) – Interpretable Machine Learning
Authors: Tobias Goerke and Magdalena Lang (with later edits from Christoph Molnar). The anchors method explains individual predictions of any black-box classification model by finding a decision rule that “anchors” the prediction sufficiently. A rule anchors a prediction if changes in other feature values do not affect the prediction. Anchors utilizes reinforcement learning techniques in …
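The search over candidate rules is the bandit-style, reinforcement-learning part of the method; the precision check that search optimizes is simple enough to sketch. The function below is my own toy illustration with assumed names, not the authors' implementation:

```python
# Toy sketch of the precision check behind anchors (my own illustration,
# not the authors' implementation; names are assumed). A candidate rule
# fixes some features at the instance's values; we perturb the rest and
# measure how often the prediction survives.
import numpy as np

def anchor_precision(predict, x, anchored, X_background, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    target = predict(x.reshape(1, -1))[0]
    # Draw perturbations from background data, then pin the anchored features.
    rows = rng.integers(0, len(X_background), size=n_samples)
    perturbed = X_background[rows].copy()
    perturbed[:, anchored] = x[anchored]
    return np.mean(predict(perturbed) == target)

# precision = anchor_precision(model.predict, X[0], anchored=[2, 5], X_background=X)
# A precision near 1.0 means the rule "anchors" the prediction.
```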
- 3 Goals of Interpretability – Interpretable Machine Learning
Interpretable machine learning is useful not only for learning about the data, but also for learning about the model. For example, if you want to learn about how convolutional neural networks work, you can use interpretability to study what concepts individual neurons react to.