RIPPER Algorithm
A RIPPER Algorithm is a Rule Induction Algorithm that ...
- AKA: Repeated Incremental Pruning to Produce Error Reduction.
- …
- Counter-Example(s):
- See: Rule Learning Algorithm.
References
2012
- http://en.wikibooks.org/wiki/Data_Mining_Algorithms_In_R/Classification/JRip#Synopsis
- QUOTE: This class implements a propositional rule learner, Repeated Incremental Pruning to Produce Error Reduction (RIPPER), which was proposed by William W. Cohen as an optimized version of IREP. It is based on association rules with reduced error pruning (REP), a very common and effective technique found in decision tree algorithms.
In REP for rules algorithms, the training data is split into a growing set and a pruning set. First, an initial rule set is formed that overfits the growing set, using some heuristic method. This overlarge rule set is then repeatedly simplified by applying one of a set of pruning operators; typical pruning operators would be to delete any single condition or any single rule. At each stage of simplification, the pruning operator chosen is the one that yields the greatest reduction of error on the pruning set. Simplification ends when applying any pruning operator would increase error on the pruning set.
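The grow/prune procedure quoted above can be sketched in a few lines. This is an illustrative toy, not Cohen's exact IREP/RIPPER formulation: it assumes examples are dicts of categorical features, rules are conjunctions of equality conditions, and growing uses a simple greedy accuracy heuristic rather than RIPPER's information-gain criterion.

```python
def covers(rule, x):
    # a rule is a list of (feature, value) equality conditions;
    # an empty rule covers everything
    return all(x[f] == v for f, v in rule)

def rule_error(rule, data):
    # fraction of examples misclassified if the rule predicts
    # positive for covered examples and negative otherwise
    wrong = sum(1 for x, y in data if covers(rule, x) != y)
    return wrong / len(data)

def grow_rule(grow_set, features):
    # greedily add the single condition that most reduces error
    # on the growing set, stopping when no condition helps
    rule = []
    while True:
        best, best_err = None, rule_error(rule, grow_set)
        for f in features:
            for v in {x[f] for x, _ in grow_set}:
                cand = rule + [(f, v)]
                err = rule_error(cand, grow_set)
                if err < best_err:
                    best, best_err = cand, err
        if best is None:
            return rule
        rule = best

def prune_rule(rule, prune_set):
    # REP step: delete the final condition while doing so does not
    # increase error on the held-out pruning set
    while len(rule) > 1:
        shorter = rule[:-1]
        if rule_error(shorter, prune_set) <= rule_error(rule, prune_set):
            rule = shorter
        else:
            break
    return rule
```

The key point the quote makes is the data split: `grow_rule` only ever sees the growing set, while `prune_rule` measures error on the separate pruning set, which is what lets pruning undo overfitting.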
2001
- http://www.fsl.cs.sunysb.edu/docs/binaryeval/node5.html#SECTION00052000000000000000
- QUOTE: The next algorithm we used, RIPPER [3], is an inductive rule learner. This algorithm generated a detection model composed of resource rules that was built to detect future examples of malicious executables. This algorithm used libBFD information as features.
RIPPER is a rule-based learner that builds a set of rules that identify the classes while minimizing the amount of error. The error is defined by the number of training examples misclassified by the rules.
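The error measure described above (the count of training examples misclassified by the rule set) can be made concrete with a small sketch. The representation is an assumption for illustration: a rule set classifies an example as positive if any rule covers it.

```python
def predict(rules, x):
    # a rule set predicts positive if any rule's conditions all match;
    # an empty rule set predicts negative for every example
    return any(all(x[f] == v for f, v in rule) for rule in rules)

def training_error(rules, data):
    # the error RIPPER minimizes, per the description above:
    # the number of training examples the rule set misclassifies
    return sum(1 for x, y in data if predict(rules, x) != y)
```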
1996
- (Cohen, 1996) ⇒ William W. Cohen. (1996). “Learning Trees and Rules with Set-Valued Features.” In: Proceedings of the thirteenth national conference on Artificial intelligence (AAAI 1996).