SHAP (SHapley Additive exPlanations)
A SHAP (SHapley Additive exPlanations) is a supervised ML model explanation algorithm that assigns each input feature an additive importance value for a particular prediction, using Shapley values from cooperative game theory to allocate credit among the features.
- See: Shapley Value, LIME.
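A minimal usage sketch in Python, assuming the shap and xgboost packages are installed (the dataset, model, and plot choices here are illustrative, following the pattern in the SHAP documentation cited below):

```python
import xgboost
import shap

# Train a model to explain (any scikit-learn-style model can be used;
# tree ensembles such as XGBoost have a fast exact explainer backend).
# shap.datasets.california() is a convenience loader bundled with shap.
X, y = shap.datasets.california()
model = xgboost.XGBRegressor().fit(X, y)

# Build an explainer and compute SHAP values: one additive importance
# value per feature, per prediction.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Visualize how each feature pushed the first prediction away from
# the dataset's expected (base) value.
shap.plots.waterfall(shap_values[0])
```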
References
2024
- Perplexity: “Interpretability vs. Explainability”:
- QUOTE: ... Explainability methods like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) work by approximating the behavior of the complex model locally around a specific prediction, using interpretable surrogate models or feature importance measures. These methods aim to explain why a particular input instance received the output it did, rather than fully elucidating the global logic of the entire model.
- ...
- Layer-wise Relevance Propagation (LRP): This technique propagates the prediction backward through the network to determine the contribution of each input feature to the final decision[15].
- Counterfactual Explanations: These provide insights by showing how the model's output would change if certain inputs were different[15].
- LIME: This method explains individual predictions by approximating the model locally with an interpretable model[15][19].
- SHAP: This approach uses game theory to explain the output of any machine learning model by assigning each feature an importance value for a particular prediction[15][19].
- Visualization Techniques: Data visualization can help in understanding the patterns and relationships in the data that the model has learned[20].
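The game-theoretic attribution that the SHAP item above describes can be made concrete with a toy, brute-force computation of exact Shapley values. This sketch and its two-feature payoff table are hypothetical illustrations, not part of the quoted source; practical SHAP implementations approximate this exponential sum efficiently:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, features):
    """Exact Shapley values by enumerating all coalitions.

    value_fn maps a frozenset of feature names to a payoff (e.g., a
    model's expected output with only those features "present").
    Exponential in len(features); feasible only for toy examples.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for size in range(n):
            for S in combinations(others, size):
                S = frozenset(S)
                # Weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                # Marginal contribution of feature i to coalition S.
                total += weight * (value_fn(S | {i}) - value_fn(S))
        phi[i] = total
    return phi

# Hypothetical toy game: an additive "model" over two features.
payoffs = {frozenset(): 0.0, frozenset({"a"}): 10.0,
           frozenset({"b"}): 20.0, frozenset({"a", "b"}): 30.0}
print(shapley_values(payoffs.__getitem__, ["a", "b"]))
# {'a': 10.0, 'b': 20.0} — each feature is credited its marginal contribution.
```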
2020
- https://shap.readthedocs.io/en/latest/
- QUOTE: SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions (see papers for details and citations).
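The classic Shapley value invoked here credits feature i with the weighted average of its marginal contributions across all feature subsets; in the notation of Lundberg & Lee (2017), with F the set of all features and f_S the model restricted to (e.g., retrained on or marginalized over) subset S:

```latex
\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}}
\frac{|S|!\,\left(|F| - |S| - 1\right)!}{|F|!}
\left[\, f_{S \cup \{i\}}\!\left(x_{S \cup \{i\}}\right) - f_{S}\!\left(x_{S}\right) \right]
```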
2017
- (Lundberg & Lee, 2017) ⇒ Scott M. Lundberg, and Su-In Lee. (2017). “A Unified Approach to Interpreting Model Predictions.” In: Advances in Neural Information Processing Systems 30 (NIPS 2017).