2010 LearningtoCombineDiscriminative
- (Lee, 2010) ⇒ Chi-Hoon Lee. (2010). “Learning to Combine Discriminative Classifiers: Confidence Based.” In: Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2010). doi:10.1145/1835804.1835899
Subject Headings: Ensemble Learning.
Notes
- Categories and Subject Descriptors: I.5.2 Pattern Recognition: Design Methodology - Classifier design and evaluation; I.2 Artificial Intelligence: Learning.
- General Terms: Algorithms, Performance
Cited By
- http://scholar.google.com/scholar?q=%22Learning+to+combine+discriminative+classifiers%3A+confidence+based%22+2010
- http://portal.acm.org/citation.cfm?id=1835899&preflayout=flat#citedby
Quotes
Author Keywords
Classification, Ensemble, Discriminative classifier, Logistic Regression
Abstract
Much research in data mining and machine learning has led to numerous practical applications. Spam filtering, fraud detection, and user query-intent analysis have relied heavily on machine-learned classifiers, resulting in improvements in robust classification accuracy. Combining multiple classifiers (a.k.a. Ensemble Learning) is a well-studied approach known to improve the effectiveness of a classifier. To address two key challenges in Ensemble Learning -- (1) learning the weights of individual classifiers and (2) the rule for combining their weighted responses -- this paper proposes a novel Ensemble classifier, EnLR, that computes weights for the responses of discriminative classifiers and combines their weighted responses to produce a single response for a test instance. The combination rule aggregates weighted responses, where the weight of an individual classifier is inversely proportional to the variance around its response. Here, variance quantifies the uncertainty of a discriminative classifier's parameters, which in turn depends on the training samples. As opposed to other ensemble methods, where the weight of each individual classifier is learned as part of parameter learning and the same weight is therefore applied to all test instances, our model adjusts its weights actively as individual classifiers become confident in their decisions for a test instance. Our empirical experiments on various data sets demonstrate that our combined classifier produces “effective” results when compared with a single classifier. Our novel classifier shows statistically significantly better accuracy than well-known Ensemble methods -- Bagging and AdaBoost. In addition to robust accuracy, our model is extremely efficient at handling high volumes of training samples due to the independent learning paradigm among its multiple classifiers, and it is simple to implement in a distributed computing environment such as Hadoop.
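The core aggregation rule described in the abstract -- weighting each classifier's response inversely by the variance around that response, so that more confident classifiers dominate for a given test instance -- can be sketched as follows. This is a minimal illustration of inverse-variance weighting only, not the paper's full EnLR method: the per-instance scores and variances below are hypothetical inputs (in EnLR they would come from each logistic regression's response and its parameter uncertainty on that instance).

```python
import math

def combine_inverse_variance(scores, variances):
    """Combine per-instance classifier responses, weighting each
    inversely by its variance: low variance (high confidence) means
    a larger weight in the aggregate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * s for w, s in zip(weights, scores)) / total

def sigmoid(z):
    """Map a combined log-odds response to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical log-odds responses from three discriminative classifiers
# for one test instance, and hypothetical variances around each response.
scores = [2.0, 1.5, -0.5]
variances = [0.1, 0.5, 2.0]

z = combine_inverse_variance(scores, variances)
p = sigmoid(z)  # combined positive-class probability
```

Note that, unlike a fixed ensemble weight learned at training time, the weights here are recomputed per test instance from the classifiers' per-instance uncertainty, which is the behavior the abstract contrasts with Bagging and AdaBoost.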
References
Author | Title | Type | DOI | Year
---|---|---|---|---
Chi-Hoon Lee | Learning to Combine Discriminative Classifiers: Confidence Based | KDD-2010 Proceedings | 10.1145/1835804.1835899 | 2010