Locally Weighted Regression
See: Locally Weighted Regression Algorithm, Locally Weighted Regression for Control.
References
2017a
- (Ting et al., 2017) ⇒ Jo-Anne Ting, Franziska Meier, Sethu Vijayakumar, and Stefan Schaal. (2017). "Locally Weighted Regression for Control". In: Sammut & Webb (2017).
- QUOTE: Locally weighted regression refers to supervised learning of continuous functions (otherwise known as function approximation or regression) by means of spatially localized algorithms, which are often discussed in the context of kernel regression, nearest neighbor methods, or lazy learning (Atkeson et al. 1997).
1997a
- (Atkeson et al., 1997a) ⇒ Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. (1997). "Locally Weighted Learning for Control". In: Aha, D.W. (ed.) Lazy Learning. Springer, Dordrecht.
- ABSTRACT: Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
1997b
- (Atkeson et al., 1997b) ⇒ Christopher G. Atkeson, Andrew W. Moore, and Stefan Schaal. (1997). "Locally Weighted Learning". In: Artificial Intelligence Review, 11.
- QUOTE: In locally weighted regression (LWR) local models are fit to nearby data. As described later in this section, this can be derived by either weighting the training criterion for the local model (in the general case) or by directly weighting the data (in the case that the local model is linear in the unknown parameters). LWR is derived from standard regression procedures for global models.
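The weighted-fit formulation quoted above can be made concrete with a short example. Below is a minimal NumPy sketch of locally weighted linear regression for a one-dimensional input, assuming a Gaussian kernel; the function name, bandwidth value, and toy data are illustrative and not taken from the cited papers. Each prediction solves the weighted least-squares normal equations beta = (X^T W X)^{-1} X^T W y with weights centered on the query point.

```python
import numpy as np

def locally_weighted_regression(x_query, X, y, bandwidth=0.5):
    """Predict y at x_query by fitting a linear model weighted toward nearby data."""
    # Augment inputs with a bias term: each row becomes [1, x].
    X_aug = np.column_stack([np.ones(len(X)), X])
    q_aug = np.array([1.0, x_query])

    # Gaussian kernel weights centered on the query point.
    w = np.exp(-((X - x_query) ** 2) / (2.0 * bandwidth ** 2))
    W = np.diag(w)

    # Weighted least squares: beta = (X^T W X)^{-1} X^T W y.
    beta = np.linalg.solve(X_aug.T @ W @ X_aug, X_aug.T @ W @ y)
    return q_aug @ beta

# Toy usage: noisy samples of sin(x), predicted pointwise at x = pi/2.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 2.0 * np.pi, 100)
y = np.sin(X) + 0.1 * rng.standard_normal(100)
print(locally_weighted_regression(np.pi / 2, X, y))  # close to 1.0
```

Because the local model here is linear in its unknown parameters, weighting the data in this way is equivalent to weighting the training criterion, as noted in the quote above.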