2012 SPFGMKLGeneralizedMultipleKerne
- (Jain et al., 2012) ⇒ Ashesh Jain, S.V.N. Vishwanathan, and Manik Varma. (2012). “SPG-GMKL: Generalized Multiple Kernel Learning with a Million Kernels.” In: Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2012). ISBN: 978-1-4503-1462-6. DOI: 10.1145/2339530.2339648.
Subject Headings: Multiple Kernel Learning (MKL), Generalized Multiple Kernel Learning (GMKL), Spectral Projected Gradient (SPG) Descent, SVM.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222012%22+SPF-GMKL%3A+Generalized+Multiple+Kernel+Learning+with+a+Million+Kernels
- http://dl.acm.org/citation.cfm?id=2339530.2339648&preflayout=flat#citedby
Quotes
Author Keywords
Abstract
Multiple Kernel Learning (MKL) aims to learn the kernel in an SVM from training data. Many MKL formulations have been proposed, and some have proved effective in certain applications. Nevertheless, as MKL is a nascent field, many more formulations need to be developed to generalize across domains and meet the challenges of real-world applications. However, each MKL formulation typically necessitates the development of a specialized optimization algorithm. The lack of an efficient, general-purpose optimizer capable of handling a wide range of formulations presents a significant challenge to those looking to take MKL out of the lab and into the real world.
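For concreteness, a common sum-of-kernels MKL setup (illustrative notation only; the GMKL setting discussed below admits more general kernel parameterizations and regularizers) learns nonnegative weights over a fixed dictionary of base kernels inside the SVM objective:

```latex
% Illustrative sum-of-kernels MKL formulation (not necessarily the
% paper's exact GMKL objective): nonnegative weights d over M base
% kernels are learned by wrapping the SVM dual in an outer
% minimization, with r(d) a regularizer on the weights.
\[
  k_{\mathbf{d}}(\mathbf{x}_i, \mathbf{x}_j)
    = \sum_{m=1}^{M} d_m \, k_m(\mathbf{x}_i, \mathbf{x}_j),
  \qquad d_m \ge 0
\]
\[
  \min_{\mathbf{d} \ge 0} \; r(\mathbf{d})
    + \max_{\substack{0 \le \boldsymbol{\alpha} \le C \\ \mathbf{y}^{\top}\boldsymbol{\alpha} = 0}}
      \Big[ \mathbf{1}^{\top}\boldsymbol{\alpha}
      - \tfrac{1}{2}\, \boldsymbol{\alpha}^{\top}
        \operatorname{diag}(\mathbf{y})\, K_{\mathbf{d}}\,
        \operatorname{diag}(\mathbf{y})\, \boldsymbol{\alpha} \Big]
\]
```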
This problem was somewhat alleviated by the development of the Generalized Multiple Kernel Learning (GMKL) formulation, which admits fairly general kernel parameterizations and regularizers subject to mild constraints. However, the projected gradient descent GMKL optimizer is inefficient, as computing the step size and a reasonably accurate objective function value or gradient direction is expensive. We overcome these limitations by developing a Spectral Projected Gradient (SPG) descent optimizer which: a) takes second-order information into account when selecting step sizes; b) employs a non-monotone step-size selection criterion requiring fewer function evaluations; c) is robust to gradient noise; and d) can take quick steps when far away from the optimum.
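As an illustration of how such an optimizer operates, here is a minimal sketch of a generic SPG loop in the Birgin-Martínez-Raydan style; it is not the authors' released implementation, and `f`, `grad`, and `project` are placeholder callables supplied by the caller (for GMKL, `x` would play the role of the kernel-parameter vector, `f` the regularized SVM objective, and `project` the map onto the feasible set, e.g. clipping to nonnegative weights):

```python
import numpy as np

def spg(f, grad, project, x0, alpha0=1.0, alpha_min=1e-10, alpha_max=1e10,
        gamma=1e-4, memory=10, max_iter=500, tol=1e-6):
    """Spectral Projected Gradient descent, sketched after Birgin,
    Martinez, and Raydan (2000). f, grad, and project are placeholders
    that the caller must supply."""
    x = project(np.asarray(x0, dtype=float))
    g = grad(x)
    alpha = alpha0
    history = [f(x)]                          # recent objective values

    for _ in range(max_iter):
        # Projected spectral step defines the search direction.
        d = project(x - alpha * g) - x
        if np.linalg.norm(d, np.inf) < tol:   # projected-gradient stationarity
            break

        # Non-monotone Armijo: compare against the worst of the last
        # `memory` values, so occasional increases are tolerated.
        f_ref = max(history[-memory:])
        slope = g @ d                         # directional derivative (< 0)
        lam = 1.0
        while f(x + lam * d) > f_ref + gamma * lam * slope and lam > 1e-12:
            lam *= 0.5                        # simple backtracking

        x_new = x + lam * d
        g_new = grad(x_new)

        # Barzilai-Borwein (spectral) step length for the next iterate:
        # a cheap curvature estimate along the last step taken.
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = alpha_max if sy <= 0 else float(np.clip(s @ s / sy,
                                                        alpha_min, alpha_max))

        x, g = x_new, g_new
        history.append(f(x))

    return x

if __name__ == "__main__":
    # Toy usage: nonnegative least squares, min 0.5*||Ax - b||^2 s.t. x >= 0.
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, -1.0])
    print(spg(f=lambda x: 0.5 * np.sum((A @ x - b) ** 2),
              grad=lambda x: A.T @ (A @ x - b),
              project=lambda x: np.maximum(x, 0.0),
              x0=np.ones(2)))
```

The Barzilai-Borwein ratio `s·s / s·y` supplies the cheap second-order information of point (a), while accepting any step that beats the worst of the last `memory` objective values implements the non-monotone criterion of points (b) through (d).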
We show that our proposed SPG-GMKL optimizer can be an order of magnitude faster than projected gradient descent even on small and medium-sized datasets. In some cases, SPG-GMKL can even outperform state-of-the-art specialized optimization algorithms developed for a single MKL formulation. Furthermore, we demonstrate that SPG-GMKL can scale well beyond gradient descent to large problems involving a million kernels or half a million data points. Our code and implementation are publicly available.
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2012 SPFGMKLGeneralizedMultipleKerne | S.V.N. Vishwanathan, Ashesh Jain, Manik Varma | | 2012 | SPG-GMKL: Generalized Multiple Kernel Learning with a Million Kernels | | | | 10.1145/2339530.2339648 | | 2012 |