Reproducing Kernel Hilbert Space
A Reproducing Kernel Hilbert Space is a Hilbert space of functions in which pointwise evaluation is a bounded (equivalently, continuous) linear functional.
- AKA: RKHS.
- See: Regularized Supervised Classification Algorithm, Tikhonov Regularization Algorithm, Continuous Kernel, Maximum Mean Discrepancy.
References
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Reproducing_kernel_Hilbert_space Retrieved:2014-8-3.
- In functional analysis (a branch of mathematics), a reproducing kernel Hilbert space (RKHS) is a Hilbert space of functions in which pointwise evaluation is a continuous linear functional. Equivalently, they are spaces that can be defined by reproducing kernels. The subject was originally and simultaneously developed by Nachman Aronszajn (1907–1980) and Stefan Bergman (1895–1977) in 1950.
In this article we assume that Hilbert spaces are complex. The main reason for this is that many of the examples of reproducing kernel Hilbert spaces are spaces of analytic functions, although some real Hilbert spaces also have reproducing kernels. A key motivation for reproducing kernel Hilbert spaces in machine learning is the Representer Theorem, which says that any function in an RKHS that classifies a set of sample points can be written as a linear combination of the canonical feature maps of those points.
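The Representer Theorem can be sketched concretely with kernel ridge regression, whose RKHS minimizer has exactly the form f(x) = Σᵢ αᵢ k(xᵢ, x) over the training points. The following is an illustrative sketch (not taken from the article); the Gaussian kernel, the regularization strength `lam`, and the toy data are all assumptions chosen for the demonstration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ridge_fit(X, y, lam=1e-3, gamma=1.0):
    """Solve (K + lam*I) alpha = y; by the Representer Theorem the
    fitted RKHS function is determined by these coefficients alone."""
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def kernel_ridge_predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i * k(x_i, x) at new points."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy data: the fitted function is a finite kernel expansion over 30 points.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 1))
y = np.sin(3 * X[:, 0])
alpha = kernel_ridge_fit(X, y)
pred = kernel_ridge_predict(X, alpha, X)
```

Note that although the RKHS may be infinite-dimensional, the solution is parameterized by only as many coefficients as there are sample points.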
An important subset of the reproducing kernel Hilbert spaces are the reproducing kernel Hilbert spaces associated to a continuous kernel. These spaces have wide applications, including complex analysis, harmonic analysis, quantum mechanics, statistics and machine learning.
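The claim that RKHSs "can be defined by reproducing kernels" rests on the Moore–Aronszajn theorem: a symmetric positive-semidefinite kernel determines a unique RKHS. As a small illustration of the positive-semidefiniteness side (my own sketch, not from the article), one can check numerically that a Gaussian kernel matrix on arbitrary points is symmetric and PSD:

```python
import numpy as np

def gaussian_kernel_matrix(X, gamma=0.5):
    """Gaussian kernel Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * sq)

rng = np.random.default_rng(42)
K = gaussian_kernel_matrix(rng.normal(size=(50, 3)))

# Symmetric, and all eigenvalues nonnegative (up to floating-point roundoff).
symmetric = bool(np.allclose(K, K.T))
psd = bool(np.linalg.eigvalsh(K).min() > -1e-8)
```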
2009
- (Chen et al., 2009) ⇒ Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. (2009). “Extracting Discriminative Concepts for Domain Adaptation in Text Mining.” In: Proceedings of ACM SIGKDD Conference (KDD-2009). doi:10.1145/1557019.1557045
- Recently, Gretton et al. [5] introduced the Maximum Mean Discrepancy (MMD) for comparing distributions based on the Reproducing Kernel Hilbert Space (RKHS) distance. ... Therefore, the distance between two distributions of two samples is simply the distance between the two mean elements in the RKHS.
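The MMD described above reduces to kernel evaluations: the squared RKHS distance between the two mean elements expands to mean(Kxx) + mean(Kyy) − 2·mean(Kxy). Below is a minimal sketch of the (biased) two-sample estimator, assuming a Gaussian kernel and synthetic data of my own choosing:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def mmd2_biased(X, Y, gamma=1.0):
    """Biased estimate of squared MMD: the squared RKHS distance between
    the empirical mean embeddings of the two samples."""
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

rng = np.random.default_rng(1)
# Same distribution: MMD^2 should be close to zero.
same = mmd2_biased(rng.normal(0, 1, (200, 1)), rng.normal(0, 1, (200, 1)))
# Shifted distribution: MMD^2 should be clearly positive.
diff = mmd2_biased(rng.normal(0, 1, (200, 1)), rng.normal(3, 1, (200, 1)))
```

The quantity is large when the two samples come from different distributions and near zero when they match, which is what makes it usable as a distribution-comparison statistic in domain adaptation.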
2004
- (Rifkin & Klautau, 2004) ⇒ Ryan Rifkin, and Aldebaro Klautau. (2004). “In Defense of One-Vs-All Classification.” In: The Journal of Machine Learning Research, 5.