RULEX Algorithm
A RULEX Algorithm is a Rule Induction Algorithm that extracts if-then rules from a trained CEBP Network.
- AKA: Andrews-Geva Rulex Algorithm, Andrews-Geva Rule Extraction Algorithm.
- Context:
- It was developed by Andrews & Geva (1995).
- It can be implemented by a RULEX System to solve a RULEX Task.
- It can also be the basis for an Automated Rule Refinement System.
- Example(s):
- Counter-Example(s):
- See: Pattern Mining Algorithm, Decision Tree Induction Algorithm, Inductive Logic Programming, If-Then Rule, First-Order Logic Rule.
References
2005
- (Nayak, 2005) ⇒ Richi Nayak (2005, July). "Generating Predicate Rules from Neural Networks". In: Proceedings of International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2005). DOI:10.1007/11508069_31.
- QUOTE: Tables 1 and 2 report the relative overall performance of predicate rule sets utilising different algorithms. The average performance is determined by separately measuring the performance on each data set, and then calculating the average performance across all data sets, for each rule set. Several neural network learning techniques such as cascade correlation (CC), BpTower (BT) and constrained error back propagation (CEBP) are utilised to build networks. This is to show the applicability of predicate (or restricted first-order) rule extraction to a variety of ANN architectures. The included results are after the application of a pruning algorithm (P) to reduce the input space. The proposed rule extraction techniques LAP[1] and RulVI[2] are applied on the cascade and BpTower ANNs. The Rulex[3] technique is applied to extract rules from the trained CEBPNs.
| Predicate rules using | | Accuracy Training (%) | Accuracy Testing (%) | Fidelity to the network (%) |
|---|---|---|---|---|
| LAP | PCC | 98.28 | 95.05 | 99.04 |
| | PBT | 98.21 | 95.15 | 98.88 |
| RuleVI | PCC | 97.65 | 89.57 | 98.27 |
| | PBT | 97.59 | 84.71 | 96.87 |
| Rulex | CEBPN | 96.41 | 89.51 | 93.23 |
| C4.5 | | 96.99 | 94.05 | |
| Foil | | 97.1 | 83.98 | |
| | | No of Conjunctive expressions | No of Predicate rules |
|---|---|---|---|
| LAP | PCC | 64 | 28 |
| | PBT | 63 | 21 |
| RuleVI | PCC | 39 | 18 |
| | PBT | 48 | 24 |
| Rulex | CEBPN | 4.6 | 4 |
| C4.5 | | 10 | |
| Foil | | 8 | |
2003
- (Andrews, 2003) ⇒ Robert Andrews (2003). "Automated Rule Refinement System". Doctoral dissertation, Queensland University of Technology.
- QUOTE: RULEX is a decompositional technique that extracts rectangular, propositional rules of the form:
IF [math]\displaystyle{ \quad \displaystyle \forall \;1 \leq i \leq n : x_i \in \left[ x_{i\;lower} , x_{i\;upper}\right] }[/math]
THEN pattern belongs to the target class
- from the hyper-ellipsoid basis functions of the restricted local cluster network (...)
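Such a rectangular rule fires exactly when every attribute of a pattern falls inside its interval. A minimal sketch of that membership test (the function and variable names here are illustrative, not from the original):

```python
def rule_fires(x, bounds):
    """Return True if pattern x satisfies a rectangular, RULEX-style rule.

    bounds is a list of (lower, upper) intervals, one per input dimension;
    the rule fires only if every attribute lies within its interval.
    """
    return all(lo <= xi <= hi for xi, (lo, hi) in zip(x, bounds))

# A rule covering the unit square in two dimensions:
bounds = [(0.0, 1.0), (0.0, 1.0)]
print(rule_fires([0.3, 0.7], bounds))  # True: both attributes in range
print(rule_fires([0.3, 1.2], bounds))  # False: second attribute out of range
```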
Table 3.5 below gives an outline of the RULEX algorithm (...)
rulex() {
    create_data_structures();
    create_domain_description();
    for each local cluster
        for each ridge function
            calculate_ridge_limits();
    while redundancies remain
        remove_redundant_rules();
        remove_redundant_antecedents();
        merge_antecedents();
    endwhile;
    feed_forward_test_set();
    display_rule_set();
} end rulex
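The remove_redundant_rules step of the outline can be illustrated for rectangular rules: a rule is redundant when another rule for the same class subsumes it, i.e., each of its intervals is contained in the corresponding interval of the other rule. A hedged sketch under that interpretation (the rule representation and helper names are assumptions for illustration):

```python
def subsumes(outer, inner):
    """True if every interval of `outer` contains the matching interval of `inner`."""
    return all(o_lo <= i_lo and i_hi <= o_hi
               for (o_lo, o_hi), (i_lo, i_hi) in zip(outer, inner))

def remove_redundant_rules(rules):
    """Drop any rule whose intervals are subsumed by another rule in the set."""
    kept = []
    for r in rules:
        # Skip r if an already-kept rule subsumes it (this also keeps
        # exactly one copy of any duplicated rule).
        if any(subsumes(k, r) for k in kept):
            continue
        # Drop previously kept rules that r itself subsumes.
        kept = [k for k in kept if not subsumes(r, k)]
        kept.append(r)
    return kept

rules = [
    [(0.0, 1.0), (0.0, 1.0)],   # broad rule
    [(0.2, 0.8), (0.1, 0.9)],   # subsumed by the broad rule
    [(1.5, 2.0), (0.0, 1.0)],   # distinct region, kept
]
print(remove_redundant_rules(rules))  # the subsumed rule is dropped
```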
1996
- (Andrews & Geva, 1996) ⇒ Robert Andrews, and Shlomo Geva (1996). "Rules and Local Function Networks". In: Proceedings of the Workshop on Rule Extraction From Trained Artificial Neural Networks (AISB96).
- QUOTE: RULEX is suitable for both continuous data and discrete data. RULEX also has facilities for reducing the size of the extracted rule set to a minimum number of propositional rules. This is achieved by removing redundant antecedent conditions, use of negations in antecedents, and by removing redundant rules.
RULEX and the RBP network can also be used for rule refinement. In turning a propositional if-then rule into the parameters that define a local response unit it is necessary to determine from the rule the active range of each ridge in the unit to be configured.
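The refinement direction described above can be sketched concretely. Assuming each ridge is parameterised by a centre and a breadth, so that its active range is [centre − breadth, centre + breadth], a rule antecedent interval maps onto those parameters as follows (this parameterisation is an assumption for illustration; the actual RBP/CEBP ridge parameters in Andrews & Geva may differ):

```python
def ridge_from_interval(lower, upper):
    """Map a rule antecedent interval [lower, upper] onto illustrative
    ridge parameters (centre, breadth), assuming the ridge's active range
    is [centre - breadth, centre + breadth]."""
    centre = (lower + upper) / 2.0
    breadth = (upper - lower) / 2.0
    return centre, breadth

# Preconfigure one ridge for the antecedent x_1 in [1.0, 3.0]:
print(ridge_from_interval(1.0, 3.0))  # (2.0, 1.0)
```

Applying this per antecedent yields one configured ridge per input dimension, which is how an existing propositional rule could seed a local response unit before training.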
1995
- (Andrews & Geva, 1995) ⇒ Robert Andrews, and Shlomo Geva (1995). "RULEX & CEBP Networks As the Basis for a Rule Refinement System". Hybrid problems, hybrid solutions, 27, 1.
- QUOTE: We also describe RULEX, an automated procedure for extracting accurate symbolic rules from the local basins of attraction produced by CEBP networks. RULEX extracts propositional if-then rules by direct interpretation of the parameters which describe the CEBP local functions, thus making it very computationally efficient. We also describe how RULEX can be used to preconfigure the CEBP network to encapsulate existing domain knowledge. This ability to encode existing knowledge into the network, train, and then extract accurate rules makes the CEBP network and RULEX the basis for a good knowledge refinement system. Further, the degree of accuracy and computational efficiency of the knowledge insertion, training, and rule extraction process gives this method significant advantages over existing ANN rule refinement techniques.
- ↑ Ross Hayward, Alan Tickle, and Joachim Diederich (1996). "Extracting rules for grammar recognition from Cascade-2 networks". In: Wermter S., Riloff E., Scheler G. (eds) Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing. IJCAI 1995. DOI:10.1007/3-540-60925-3_37
- ↑ R. Hayward, C. Ho-Stuart, and J. Diederich (1997). “Neural networks as oracles for rule extraction". In: Connectionist System for Knowledge Representation and Deduction
- ↑ Robert Andrews, and Shlomo Geva (1994). “Rule extraction from a constrained error back propagation MLP". In: Proc. of 5th Australian Conference on Neural Networks.