====== Lab 2: Inherently interpretable models ======

===== Resources =====

  * Colab: [[https://colab.research.google.com/drive/1c2xaFJZwbXo74kPXLO6-0iUX-t-EOQCL?usp=sharing|Colab]]
  * Additional reading:
    * [[https://christophm.github.io/interpretable-ml-book/simple.html|Interpretable ML: Chapter 5]]
    * [[https://www.nature.com/articles/s42256-019-0048-x|Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead]]
  * Additional watching:
    * [[https://www.youtube.com/watch?v=ZFtnvXlWo0E|Nice video explaining how to obtain standard errors of logistic regression estimates]]
  * Tools:
    * [[https://pyro.ai/|Pyro]]
    * [[https://pgmpy.org/|PGMPy]]
    * [[https://causalnex.readthedocs.io/en/latest/|CausalNex]]
    * [[https://dtai.cs.kuleuven.be/problog/|ProbLog]] -> Interested? More on that at: [[https://wiki.iis.uj.edu.pl/courses:knais:start|KnAIS]]
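
The standard errors mentioned in the linked video can be sketched in a few lines of NumPy: fit a logistic regression by Newton-Raphson, then take the square roots of the diagonal of the inverse Fisher information evaluated at the estimate. The toy data below (feature values, coefficients, sample size) is purely illustrative, not part of the lab materials.

```python
import numpy as np

# Illustrative toy data: an intercept column plus one random feature.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
true_beta = np.array([-0.5, 1.2])          # arbitrary "true" coefficients
p = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p)

# Newton-Raphson iterations for the logistic-regression MLE.
beta = np.zeros(2)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
    W = mu * (1.0 - mu)                    # diagonal of the IRLS weight matrix
    H = X.T @ (X * W[:, None])             # Fisher information X^T W X
    beta += np.linalg.solve(H, X.T @ (y - mu))

# Standard errors: sqrt of the diagonal of the inverse Fisher information.
se = np.sqrt(np.diag(np.linalg.inv(H)))
print("estimates:", beta)
print("standard errors:", se)
```

With a moderate sample like this, the estimates land close to the generating coefficients and the standard errors give a sense of that uncertainty; `statsmodels`' `Logit` reports the same quantities as `bse`.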