Co-Training Learning Algorithm
A Co-Training Learning Algorithm is a semi-supervised learning algorithm for learning tasks in which the predictor features separate naturally into two disjoint sets (views), each assumed sufficient to learn the target concept; two classifiers, one per view, are trained on the labeled data and iteratively label the most confidently predicted unlabeled examples for each other (a sketch of the basic loop follows the list below).
- AKA: Semi-Supervised Cotraining.
- …
- Example(s): Blum & Mitchell (1998)'s web page classification task, where one view is the text on the page itself and the other is the anchor text of hyperlinks pointing to the page.
- Counter-Example(s): a Self-Training Algorithm, which bootstraps a single classifier on a single feature view (Mihalcea, 2004).
- See: Weakly-Supervised Algorithm, EM Algorithm.
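The following is a minimal sketch of the basic co-training loop, in the spirit of Blum & Mitchell (1998). The GaussianNB base learners, the function name co_train, and the values of the growth parameters p, n, and k are illustrative assumptions, not part of the original formulation.

```python
# A minimal sketch of the Blum & Mitchell (1998) co-training loop.
# Assumptions for illustration: scikit-learn Gaussian Naive Bayes base
# learners, binary {0, 1} labels, and a labeled seed set that contains
# both classes.
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X1, X2, y, U1, U2, p=1, n=3, k=30):
    """(X1, X2, y): the two views of the labeled data; (U1, U2): the
    two views of the unlabeled pool. Returns one classifier per view."""
    pool = np.arange(len(U1))
    for _ in range(k):
        if len(pool) == 0:
            break
        h1 = GaussianNB().fit(X1, y)  # view-1 classifier
        h2 = GaussianNB().fit(X2, y)  # view-2 classifier
        chosen = {}  # unlabeled-pool index -> pseudo-label
        for h, U in ((h1, U1), (h2, U2)):
            proba = h.predict_proba(U[pool])
            # each classifier labels its p most confident positives and
            # its n most confident negatives from the shared pool;
            # if both pick the same example, the later label wins
            # (a simplification for this sketch)
            for i in pool[np.argsort(proba[:, 1])[-p:]]:
                chosen[i] = 1
            for i in pool[np.argsort(proba[:, 0])[-n:]]:
                chosen[i] = 0
        idx = np.fromiter(chosen.keys(), dtype=int)
        lab = np.fromiter(chosen.values(), dtype=int)
        # move the newly pseudo-labeled examples into the labeled set
        X1 = np.vstack([X1, U1[idx]])
        X2 = np.vstack([X2, U2[idx]])
        y = np.concatenate([y, lab])
        pool = np.setdiff1d(pool, idx)
    return GaussianNB().fit(X1, y), GaussianNB().fit(X2, y)
```

At prediction time the two view-specific classifiers are typically combined, e.g. by multiplying their per-class probabilities, which exploits the assumption that the two views are conditionally independent given the label.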
References
2009
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Co-training
- QUOTE: Co-training is a machine learning algorithm used when there are only small amounts of labeled data and large amounts of unlabeled data. One of its uses is in text mining for search engines. It was introduced by Avrim Blum and Tom M. Mitchell in 1998.
2004
- (Mihalcea, 2004) ⇒ Rada Mihalcea. (2004). “Co-training and Self-training for Word Sense Disambiguation.” In: Proceedings of the NAACL Conference (NAACL 2004).
2003
- (Strehl & Ghosh, 2003) ⇒ Alexander Strehl, and Joydeep Ghosh. (2003). “Cluster Ensembles -- A Knowledge Reuse Framework for Combining Multiple Partitions.” In: The Journal of Machine Learning Research, 3. doi:10.1162/153244303321897735
2000
- (Nigam & Ghani, 2000) ⇒ Kamal Nigam, and Rayid Ghani. (2000). “Analyzing the Effectiveness and Applicability of Co-training.” In: Proceedings of the Ninth International Conference on Information and Knowledge Management (CIKM 2000). doi:10.1145/354756.354805
- QUOTE: The co-training setting applies to datasets that have a natural separation of their features into two disjoint sets.
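One standard formalization of this two-view setting (a paraphrase, not a verbatim quotation from the paper) writes each instance as a pair of views, with the target label recoverable from either view alone:

```latex
% Two-view (co-training) setting, following Blum & Mitchell (1998):
% each instance x factors into two views x_1 and x_2, and the target
% function f is compatible with the views, i.e. the label can be
% computed from either view alone via view-specific functions f_1, f_2.
x = (x_1, x_2) \in X_1 \times X_2,
\qquad
f(x) = f_1(x_1) = f_2(x_2)
```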
1998
- (Blum & Mitchell, 1998) ⇒ Avrim Blum, and Tom M. Mitchell. (1998). “Combining Labeled and Unlabeled Data with Co-training.” In: Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT 1998). doi:10.1145/279943.279962