AI Model Interpretability Measure

From GM-RKB

An AI Model Interpretability Measure is an interpretability measure for a system model that assesses how easily a human can understand the model's internal mechanics or logic and its predictions. An Interpretable Predictive Model is a predictive model with a relatively high model interpretability measure value, meaning it is designed to be comprehensible to humans.
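To make the distinction concrete, the sketch below fits a one-variable linear model, a standard example of an interpretable predictive model: the fitted slope and intercept can be read off directly as the model's logic ("each unit of x adds `slope` to the prediction"). The data and function name are illustrative assumptions, not taken from the article.

```python
def fit_simple_linear(xs, ys):
    """Ordinary least squares for y = slope * x + intercept (one feature).

    The two returned numbers ARE the model: a human can inspect them
    directly, which is what makes this model interpretable.
    """
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        / sum((x - mean_x) ** 2 for x in xs)
    )
    intercept = mean_y - slope * mean_x
    return slope, intercept


# Illustrative data: y is exactly 2x + 1.
slope, intercept = fit_simple_linear([1, 2, 3, 4], [3, 5, 7, 9])
print(slope, intercept)  # 2.0 1.0 -- readable as "each unit of x adds 2"
```

A deep network fit to the same data would make identical predictions, but its millions of weights admit no such one-line reading; that gap is what an interpretability measure tries to quantify.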



References

2024

  • [1] https://datascience.aero/explainability-interpretability-what-model-need/
  • [2] https://www.kdnuggets.com/2018/12/machine-learning-explainability-interpretability-ai.html
  • [3] https://datascience.stackexchange.com/questions/99808/an-example-of-explainable-but-not-interpretable-ml-model
  • [4] https://blogs.sas.com/content/hiddeninsights/2022/08/10/interpretability-vs-explainability-the-black-box-of-machine-learning/
  • [5] https://christophm.github.io/interpretable-ml-book/
  • [6] https://link.springer.com/chapter/10.1007/978-3-031-04083-2_2
  • [7] https://docs.aws.amazon.com/whitepapers/latest/model-explainability-aws-ai-ml/interpretability-versus-explainability.html
  • [8] https://www.ibm.com/topics/explainable-ai
  • [9] https://quiq.com/blog/explainability-vs-interpretability/
  • [10] https://www.datacamp.com/tutorial/explainable-ai-understanding-and-trusting-machine-learning-models
  • [11] https://datascience.stackexchange.com/questions/70164/what-is-the-difference-between-explainable-and-interpretable-machine-learning

2020

2017

  • (Lundberg & Lee, 2017) ⇒ Scott M. Lundberg, and Su-In Lee. (2017). “A Unified Approach to Interpreting Model Predictions.” In: Proceedings of the 31st International Conference on Neural Information Processing Systems.
    • QUOTE: ... The ability to correctly interpret a prediction model’s output is extremely important. It engenders appropriate user trust, provides insight into how a model may be improved, and supports understanding of the process being modeled. In some applications, simple models (e.g., linear models) are often preferred for their ease of interpretation, even if they may be less accurate than complex ones. However, the growing availability of big data has increased the benefits of using complex models, so bringing to the forefront the trade-off between accuracy and interpretability of a model’s output. ...
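The unified approach Lundberg & Lee describe is built on Shapley values, which attribute a prediction to individual features. The sketch below computes exact Shapley values for a tiny model by enumerating feature coalitions; it is a minimal from-scratch illustration of the idea, not the paper's SHAP implementation, and the model, inputs, and baseline are illustrative assumptions.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for prediction f(x) versus f(baseline).

    For each feature i, average f's gain from adding feature i over
    every coalition S of the other features, with the standard
    Shapley weight |S|! * (n - |S| - 1)! / n!.
    Exponential in the number of features -- fine for a toy example.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Features in S (plus optionally i) take their real values;
                # all other features are set to the baseline.
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi += weight * (f(with_i) - f(without_i))
        phis.append(phi)
    return phis


# Illustrative additive model: f(x) = 2*x0 + 3*x1.
f = lambda v: 2 * v[0] + 3 * v[1]
phis = shapley_values(f, x=[1, 2], baseline=[0, 0])
print(phis)  # [2.0, 6.0] -- attributions sum to f(x) - f(baseline) = 8
```

For a model with no feature interactions, each attribution reduces to the feature's coefficient times its deviation from the baseline, which is why simple linear models are the paper's benchmark for ease of interpretation.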

2015

2014

2012