RLScore System
An RLScore System is a regularized least-squares (RLS)-based function-fitting system.
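As background, the definition refers to the standard regularized least-squares objective; the following is a generic sketch of that formulation (textbook RLS, not text quoted from the package):

```latex
\min_{w \in \mathbb{R}^d} \; \sum_{i=1}^{n} \left( y_i - w^\top x_i \right)^2 + \lambda \lVert w \rVert_2^2,
\qquad
w^\star = \left( X^\top X + \lambda I \right)^{-1} X^\top y
```

The closed-form expression for $w^\star$ is what enables the computational shortcuts cited in the references below.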
- Example(s):
- Version 0.8 (2017.08.17)
- Version 0.7 (2016.09.19)
- Version 0.6 (2016.02.18)
- Version 0.5 (2012.06.19)
- Version 0.4 (2010.04.14)
- Version 0.3 (2009.12.03)
- Version 0.2 (2009.03.13)
- …
- Counter-Example(s):
- See: Maximum Margin Clustering.
References
2017
- https://github.com/aatapa/RLScore
- QUOTE: RLScore is a machine learning software package for regularized kernel methods, focusing especially on Regularized Least-Squares (RLS) based methods. The main advantage of the RLS family of methods is that they admit a closed form solution, expressed as a system of linear equations. This allows deriving highly efficient algorithms for RLS methods, based on matrix algebraic optimization. Classical results include computational short-cuts for multi-target learning, fast regularization path and leave-one-out cross-validation. RLScore takes these results further by implementing a wide variety of additional computational shortcuts for different types of cross-validation strategies, single- and multi-target feature selection, multi-task and zero-shot learning with Kronecker kernels, ranking, stochastic hill climbing based clustering etc. The majority of the implemented methods are such that are not available in any other software package.
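The closed-form solution and the leave-one-out shortcut mentioned in the quote can be illustrated in plain NumPy. The following is a minimal sketch of the standard kernel RLS identities (the hat-matrix leave-one-out trick); it is not RLScore's own implementation, and all names in it are illustrative:

```python
import numpy as np

def rls_fit_with_loo(K, y, regparam=1.0):
    """Kernel RLS: solve (K + lambda*I) a = y, then obtain
    leave-one-out predictions via the classical hat-matrix shortcut
    instead of retraining n times."""
    n = K.shape[0]
    G = np.linalg.inv(K + regparam * np.eye(n))  # (K + lambda*I)^{-1}
    a = G @ y            # dual coefficients
    f = K @ a            # in-sample predictions
    h = np.diag(K @ G)   # diagonal of the hat matrix H = K (K + lambda*I)^{-1}
    f_loo = (f - h * y) / (1.0 - h)  # LOO prediction for each training point
    return a, f_loo

# Toy usage with a linear kernel
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(50)
K = X @ X.T
a, f_loo = rls_fit_with_loo(K, y, regparam=1.0)
print("LOO MSE:", np.mean((y - f_loo) ** 2))
```

A single matrix factorization here replaces n separate retrainings, which is the kind of matrix-algebraic shortcut the package generalizes to other cross-validation schemes.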
2012
- https://github.com/aatapa/RLScore#overview
- QUOTE: RLScore is a Regularized Least-Squares (RLS) based algorithm package. It contains implementations of the RLS and RankRLS learners allowing the optimization of performance measures for the tasks of regression, ranking and classification. In addition, the package contains linear time greedy forward feature selection with leave-one-out criterion for RLS (greedy RLS). Finally, the package contains an implementation of a maximum margin clustering method based on RLS and stochastic hill climbing. Implementations of efficient cross-validation algorithms are integrated to the package, combined together with functionality for fast parallel learning of multiple outputs.
Reduced set approximation for large-scale learning with kernels is included. In this setting approximation is introduced also to the cross-validation methods. For learning linear models from large but sparse data sets, RLS and RankRLS can be trained using conjugate gradient optimization techniques.
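The conjugate-gradient route for large sparse data mentioned above can be sketched with SciPy; this is an illustration of solving the standard primal RLS linear system, not code from the package, and the function name is hypothetical:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

def linear_rls_cg(X, y, regparam=1.0):
    """Solve (X^T X + lambda*I) w = X^T y with conjugate gradient,
    touching X only through matrix-vector products so sparsity is preserved."""
    n, d = X.shape
    matvec = lambda w: X.T @ (X @ w) + regparam * w  # applies (X^T X + lambda*I)
    A = LinearOperator((d, d), matvec=matvec)        # never forms X^T X explicitly
    w, info = cg(A, X.T @ y)
    assert info == 0, "CG did not converge"
    return w

# Toy usage on a large, sparse design matrix
rng = np.random.default_rng(0)
X = sp.random(1000, 200, density=0.01, format="csr", random_state=0)
y = rng.standard_normal(1000)
w = linear_rls_cg(X, y, regparam=1.0)
print(w.shape)  # (200,)
```

Because X^T X + lambda*I is symmetric positive definite, conjugate gradient converges without ever densifying the data, which is what makes this approach viable for large sparse training sets.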