Lab 2: Inherently interpretable models
Resources
Colab:
Colab
Additional reading:
Interpretable ML (Molnar): Chapter 5
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead (Rudin, 2019)
Additional watching:
Nice video explaining how to obtain standard errors of LR estimates
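Since only the video is linked here, the sketch below illustrates the idea, assuming "LR" refers to logistic regression: the standard errors of the coefficient estimates are the square roots of the diagonal of the inverse Fisher information (X^T W X)^{-1}, which statsmodels also reports directly as `.bse`. The simulated data and variable names are only illustrative.

```python
# Minimal sketch (not from the video): standard errors of logistic-regression
# coefficients, once via statsmodels and once by hand from (X^T W X)^{-1}.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))          # intercept + 2 features
beta_true = np.array([-0.5, 1.0, 2.0])                  # toy coefficients
y = rng.binomial(1, 1 / (1 + np.exp(-X @ beta_true)))   # simulated 0/1 labels

model = sm.Logit(y, X).fit(disp=0)
print(model.params)   # point estimates beta_hat
print(model.bse)      # standard errors reported by statsmodels

# Manual version: SE_j = sqrt of the j-th diagonal entry of (X^T W X)^{-1},
# where W = diag(p_i * (1 - p_i)) at the fitted probabilities.
p = model.predict(X)
cov = np.linalg.inv(X.T @ np.diag(p * (1 - p)) @ X)
print(np.sqrt(np.diag(cov)))  # matches model.bse up to numerical precision
```

For ordinary linear regression the same recipe applies with the covariance estimate sigma_hat^2 (X^T X)^{-1}.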
Tools:
Pyro
PGMPy
CausalNex
ProbLog
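The listed tools all build models whose structure is itself readable (probabilistic programs, Bayesian networks, logic programs). As an illustrative taste, not part of the official lab materials, here is a minimal Bayesian-network sketch with pgmpy; it assumes a recent pgmpy release exposing BayesianNetwork, TabularCPD and VariableElimination, and the rain/sprinkler probabilities are made-up toy numbers.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Structure: Rain -> WetGrass <- Sprinkler
model = BayesianNetwork([("Rain", "WetGrass"), ("Sprinkler", "WetGrass")])

cpd_rain = TabularCPD("Rain", 2, [[0.8], [0.2]])             # P(Rain)
cpd_sprinkler = TabularCPD("Sprinkler", 2, [[0.6], [0.4]])   # P(Sprinkler)
cpd_wet = TabularCPD(
    "WetGrass", 2,
    # columns: (Rain=0,Spr=0), (Rain=0,Spr=1), (Rain=1,Spr=0), (Rain=1,Spr=1)
    [[1.0, 0.1, 0.2, 0.01],   # P(WetGrass=0 | parents)
     [0.0, 0.9, 0.8, 0.99]],  # P(WetGrass=1 | parents)
    evidence=["Rain", "Sprinkler"], evidence_card=[2, 2],
)
model.add_cpds(cpd_rain, cpd_sprinkler, cpd_wet)
assert model.check_model()

# Exact inference: probability of rain given that the grass is wet
infer = VariableElimination(model)
print(infer.query(["Rain"], evidence={"WetGrass": 1}))
```

Because every conditional probability table can be printed and inspected, the model's reasoning is transparent by construction, which is the sense in which such models are inherently interpretable.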
→ Interested? More on that at:
KnAIS