Lazy Learning Algorithm
A Lazy Learning Algorithm is a learning algorithm that can be applied by a lazy learning system (to solve a lazy learning task).
- AKA: Lazy Learner.
- Context:
- It lazily postpones generalization work until a Testing Record is received, and then performs only the work necessary to Predict that record's Target Value.
- It can range from being a (typically) Supervised Lazy Learning Algorithm to being an Unsupervised Lazy Learning Algorithm.
- It can range from being a Lazy Model-based Learning Algorithm to being a Lazy Instance-based Learning Algorithm.
- It can range from being a Lazy Classification Algorithm to being a Lazy Regression Algorithm/Lazy Estimation Algorithm.
- Example(s):
- a k-Nearest Neighbor Algorithm.
- a Locally Weighted Regression Algorithm.
- a Case-based Reasoning Algorithm.
- Counter-Example(s):
- an Eager Learning Algorithm, such as a decision tree learning algorithm, or a neural network learning algorithm.
- See: Partial Function; Locally Weighted Regression for Control; Online Learning.
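The defining behavior above, storing the training data at "fit" time and deferring all generalization to query time, can be sketched with a minimal 1-nearest-neighbor classifier. This is an illustrative sketch, not code from any of the cited works; the class name and the 2-D sample points are made up for the example.

```python
# A minimal sketch of a lazy learner: a 1-nearest-neighbor classifier.
# "Training" only memorizes the records; all work is deferred to query time.
import math

class NearestNeighborClassifier:
    def fit(self, X, y):
        # Lazy step: store the training records; no model is built here.
        self.X, self.y = list(X), list(y)
        return self

    def predict(self, query):
        # All generalization happens now, locally around the query instance.
        distances = [math.dist(query, x) for x in self.X]
        return self.y[distances.index(min(distances))]

# Illustrative usage with made-up 2-D points and labels.
clf = NearestNeighborClassifier().fit(
    [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)],
    ["a", "a", "b"],
)
print(clf.predict((4.5, 5.2)))  # nearest stored point is (5.0, 5.0) -> "b"
```

An eager learner would instead commit to a single global approximation during `fit`; here `fit` does no generalization at all, which is what makes the algorithm lazy.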
References
2011
- (Webb, 2011d) ⇒ Geoffrey I. Webb. (2011). “Lazy Learning.” In: (Sammut & Webb, 2011).
1999
- (Melli, 1999a) ⇒ Gabor Melli. (1999). “A Lazy Model-based Algorithm for On-Line Classification.” In: Proceedings of PKDD-1999.
1998
- (Melli, 1998) ⇒ Gabor Melli. (1998). “Lazy Model-based Approach to On-Line Classification.” Master's Thesis, Simon Fraser University.
1997
- (Mitchell, 1997) ⇒ Tom M. Mitchell. (1997). “Machine Learning.” McGraw-Hill. ISBN:0070428077
- QUOTE: Section 8.6 Remarks on Lazy and Eager Learning: In this chapter we considered three lazy learning methods: the k-Nearest Neighbor algorithm, locally weighted regression, and case-based reasoning. We call these methods lazy because they defer the decision of how to generalize beyond the training data until each new query instance is encountered. We also discussed one eager learning method, the method for learning radial basis function networks. We call this method eager because it generalizes beyond the training data before observing the new query, committing at training time to the network structure and weights that define its approximation to the target function. In this same sense, every other algorithm discussed elsewhere in this book (e.g., Backpropagation, C4.5) is an eager learning algorithm. … Lazy methods may consider the query instance xq when deciding how to generalize beyond the training data D. … Eager methods cannot. By the time they observe the query instance xq they have already chosen their (global) approximation to the target function. … The key point in the above paragraph is that a lazy learner has the option of (implicitly) representing the target function by a combination of many local approximations, whereas an eager learner must commit at training time to a single global approximation. The distinction between eager and lazy learning is thus related to the distinction between global and local approximations to the target function.
1991
- (Aha et al., 1991) ⇒ D. W. Aha, D. Kibler, and M. K. Albert. (1991). “Instance-based Learning Algorithms.” Machine Learning, 6(1).