Model Pruning Task
A Model Pruning Task is a model learning task that aims to avoid overfitting and improve performance by automatically reducing the size of a decision tree, a neural network, or a training dataset.
- Context:
- It can be solved by a Pruning System that implements a Pruning Algorithm.
- It can range from being a Pre-Pruning Task to being a Post-Pruning Task.
- …
- Example(s):
- Counter-Example(s):
- See: Pruning Set, Decision Tree, Regularization, Rule Learning, Deep Learning Neural Network, Regularization Task.
References
2017
- (Fürnkranz, 2017) ⇒ Johannes Fürnkranz (2017). "Pruning". In: (Sammut & Webb, 2017). DOI:10.1007/978-1-4899-7687-1_687.
- QUOTE: Pruning describes the idea of avoiding Overfitting by simplifying a learned concept, typically after the actual induction phase.
Method: The term originates from decision tree learning, where the idea of improving the decision tree by cutting some of its branches may be viewed as an analogy to the concept of pruning in gardening.
Commonly, one distinguishes two types of pruning:
Pre-pruning monitors the learning process and prevents further refinements if the current hypothesis becomes too complex.
Post-pruning first learns a possibly overfitting hypothesis and then tries to simplify it in a separate learning phase.
Pruning techniques are particularly important for state-of-the-art Decision Tree and Rule Learning algorithms (see there for more details). The key idea of pruning is essentially the same as Regularization in statistical learning, with the key difference that regularization incorporates a complexity penalty directly into the learning heuristic, whereas pruning uses a separate pruning criterion or pruning algorithm.
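The pre-pruning/post-pruning distinction quoted above can be illustrated with decision trees. The following is a minimal sketch, assuming scikit-learn is available; the dataset, `max_depth=3`, and `ccp_alpha=0.01` are illustrative choices, not part of the source.

```python
# Sketch: pre-pruning vs. post-pruning of a decision tree (scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unpruned baseline: grow the tree until all leaves are pure (may overfit).
full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Pre-pruning: monitor growth and stop refining once a depth limit is hit.
pre = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Post-pruning: grow the possibly overfitting tree, then simplify it in a
# separate phase via minimal cost-complexity pruning (ccp_alpha penalizes size).
post = DecisionTreeClassifier(ccp_alpha=0.01, random_state=0).fit(X_tr, y_tr)

for name, clf in [("full", full), ("pre", pre), ("post", post)]:
    print(name, clf.get_n_leaves(), round(clf.score(X_te, y_te), 3))
```

Both pruned trees end up with fewer leaves than the unpruned baseline; which variant generalizes better depends on the data and the chosen pruning parameters.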