2018 Rule Induction for Global Explanation of Trained Models
- (Sushil et al., 2018) ⇒ Madhumita Sushil, Simon Suster, and Walter Daelemans. (2018). “Rule Induction for Global Explanation of Trained Models.” In: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. doi:10.18653/v1/W18-5411
Subject Headings: Rule Induction Algorithm; Neural Network Rule Induction Algorithm.
Cited By
- Google Scholar: ~3 citations (Retrieved: 2019-11-03).
- Semantic Scholar: ~1 citation (Retrieved: 2019-11-03).
- MS Academic: ~1 citation (Retrieved: 2019-11-03).
Quotes
Abstract
Understanding the behavior of a trained network and finding explanations for its outputs is important for improving the network's performance and generalization ability, and for ensuring trust in automated systems. Several approaches have previously been proposed to identify and visualize the most important features by analyzing a trained network. However, the relations between different features and classes are lost in most cases. We propose a technique to induce sets of if-then-else rules that capture these relations to globally explain the predictions of a network. We first calculate the importance of the features in the trained network. We then weigh the original inputs with these feature importance scores, simplify the transformed input space, and finally fit a rule induction model to explain the model predictions. We find that the output rule-sets can explain the predictions of a neural network trained for 4-class text classification from the 20 newsgroups dataset to a macro-averaged F-score of 0.80. We make the code available at https://github.com/clips/interpret_with_rules.
1 Introduction
2 Related Work
3 Methodology
(...)
Our proposed technique comprises these main steps:
- 1. Input saliency computation (§ 3.1)
- 2. Input transformation and selection (§ 3.2)
- 3. Rule induction (§ 3.3)
The entire pipeline is depicted in Figure 1; a minimal code sketch of the three steps follows below.
[[File:2018_RuleInductionforGlobalExplanati_Fig1.png|350px|thumb|center|Figure 1: Pipeline for rule induction for global model interpretability.]]
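Concretely, the three steps compose as in the sketch below. This is an illustrative outline, not the authors' released code: `compute_saliency`, `transform_and_select`, and `induce_rules` are hypothetical helpers, expanded under §§ 3.1–3.3 below, and the rule inducer is fit to the network's own predictions rather than the gold labels, as the abstract describes.

```python
import numpy as np
import torch

def explain_globally(net, X, feature_names):
    """Glue code for the three steps; the helpers are hypothetical
    stand-ins sketched under sections 3.1-3.3 below."""
    S = compute_saliency(net, X)                # 1. input saliency (section 3.1)
    Xs, kept = transform_and_select(X, S)       # 2. weight inputs, prune features (section 3.2)
    with torch.no_grad():                       # rules should mimic the *model's* predictions
        preds = net(torch.tensor(X, dtype=torch.float32)).argmax(dim=1).numpy()
    # 3. induce if-then rules over the simplified input space (section 3.3)
    return induce_rules(Xs, preds, np.asarray(feature_names)[kept])
```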
3.1 Input Saliency Computation
(...)
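The quoted details of this step are elided above. As one common way to realize it (an assumption for illustration, not necessarily the authors' exact method), gradient-times-input saliency for a differentiable classifier can be computed as follows:

```python
import torch

def compute_saliency(net, X):
    """Assumed variant: gradient-times-input saliency with respect to the
    class the network itself predicts for each example."""
    x = torch.tensor(X, dtype=torch.float32, requires_grad=True)
    scores = net(x)                                          # (n_samples, n_classes)
    predicted = scores.gather(1, scores.argmax(dim=1, keepdim=True))
    predicted.sum().backward()                               # d(predicted-class score)/dx
    return (x.grad * x.detach()).numpy()                     # gradient x input
```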
3.2 Input Transformation And Selection
(...)
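With the quoted details again elided, a minimal sketch of this step, assuming the transformation is an elementwise weighting of the original inputs by their saliency scores and that selection keeps a fixed number of high-importance features (the criterion and the cutoff `k` are assumptions here):

```python
import numpy as np

def transform_and_select(X, S, k=200):
    """Weight raw inputs by their saliency scores, then keep the k features
    with the highest mean absolute weighted value (assumed criterion)."""
    Xw = X * S                                       # saliency-weighted input space
    kept = np.argsort(np.abs(Xw).mean(axis=0))[-k:]  # indices of top-k features
    return Xw[:, kept], kept
```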
3.3 Rule Induction
(...)
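As a stand-in for a dedicated rule learner (the paper's exact inducer is elided above), a shallow decision tree fit to the network's predictions yields readable if-then rules from its root-to-leaf paths:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

def induce_rules(Xs, model_preds, feature_names, max_depth=5):
    """Fit a rule inducer to the network's predictions (not the gold labels),
    so the rules describe the model's behavior. A shallow decision tree
    stands in for a rule learner; each root-to-leaf path is an if-then rule."""
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(Xs, model_preds)
    return export_text(tree, feature_names=list(feature_names))
```

Fidelity of the induced rule-set can then be scored as the macro-averaged F-score of the rules' outputs against the network's predictions; the abstract reports 0.80 on the 4-class 20 newsgroups task.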
4 Experimental Details
4.1 Data
4.2 Model To Be Explained
4.3 Metrics
4.4 Hyperparameter Optimization
5 Results And Discussion
5.1 Rules As Explanations
5.2 Consistency Of The Induced Rule-Sets
6 Limitations
7 Conclusions And Future Work
A Appendix
References
Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year
---|---|---|---|---|---|---|---|---|---
Madhumita Sushil; Simon Suster; Walter Daelemans | | 2018 | Rule Induction for Global Explanation of Trained Models | | Proceedings of the 2018 EMNLP Workshop BlackboxNLP | | 10.18653/v1/W18-5411 | | 2018