Self-Supervised Learning Task
A self-supervised learning task is a semi-supervised learning task in which a labeling heuristic automatically assigns labels to some otherwise unlabeled examples.
- Context:
- Input: a Labeling Heuristic.
- It can be solved by a Self-Supervised Learning System (that implements a Self-Supervised Learning algorithm).
- ...
- Example(s):
- Counter-Example(s):
- See: Semi-Supervised Learning Algorithm, Self-Supervised Learning Algorithm, ResNet SSL.
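The labeling-heuristic input named in the Context above can be sketched as follows: a rotation-prediction heuristic that turns unlabeled images into pseudo-labeled examples. This is a minimal illustrative sketch; the function name and toy data are assumptions, not from any cited source.

```python
# A labeling heuristic for a self-supervised task: rotation prediction.
# Each unlabeled image is rotated by a known multiple of 90 degrees,
# and the rotation index serves as the pseudo-label.
import numpy as np

def rotation_labeling_heuristic(unlabeled_images):
    """Turn unlabeled images into (rotated_image, pseudo_label) pairs."""
    examples = []
    for img in unlabeled_images:
        for k in range(4):  # 0, 90, 180, 270 degrees
            examples.append((np.rot90(img, k), k))  # pseudo-label = k
    return examples

# Two unlabeled 8x8 "images" yield eight pseudo-labeled examples.
batch = [np.zeros((8, 8)), np.ones((8, 8))]
pairs = rotation_labeling_heuristic(batch)
```

No human annotation is involved: the pseudo-labels are generated mechanically from the data transformation itself.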
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Self-supervised_learning Retrieved:2023-10-16.
- Self-supervised learning (SSL) is a paradigm in machine learning for processing data of lower quality, rather than improving ultimate outcomes. Self-supervised learning more closely imitates the way humans learn to classify objects.
The typical SSL method is based on an artificial neural network or other model such as a decision list. The model learns in two steps. First, the task is solved based on an auxiliary or pretext classification task using pseudo-labels which help to initialize the model parameters. Second, the actual task is performed with supervised or unsupervised learning. Other auxiliary tasks involve pattern completion from masked input patterns (silent pauses in speech or image portions masked in black). Self-supervised learning has produced promising results in recent years and has found practical application in audio processing and is being used by Facebook and others for speech recognition.
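The two-step procedure described above (pretext task with pseudo-labels, then the actual task) can be sketched with a toy linear model. The pretext heuristic here (sign of the feature sum) and all names are illustrative assumptions, not drawn from the cited sources.

```python
# A minimal sketch of two-step self-supervised learning, using
# logistic regression trained by gradient descent (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

def train_linear(X, y, w=None, epochs=200, lr=0.1):
    """Logistic regression via gradient descent; returns learned weights."""
    if w is None:
        w = np.zeros(X.shape[1])  # step 1 starts from scratch
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Step 1 (pretext task): pseudo-labels are derived from the data itself
# by a heuristic -- here, the sign of the feature sum -- so no human
# annotation is needed. This initializes the model parameters.
X_unlabeled = rng.normal(size=(200, 8))
pseudo_y = (X_unlabeled.sum(axis=1) > 0).astype(float)
w_pretrained = train_linear(X_unlabeled, pseudo_y)

# Step 2 (actual task): fine-tune from the pretrained weights on a
# small labeled set instead of starting from zero.
X_labeled = rng.normal(size=(20, 8))
y_labeled = (X_labeled[:, 0] > 0).astype(float)
w_finetuned = train_linear(X_labeled, y_labeled,
                           w=w_pretrained.copy(), epochs=100)
```

The design point is that step 1 consumes only unlabeled data, so the (typically scarce) labeled set is needed only for the shorter fine-tuning in step 2.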
2008
- (Banko & Etzioni, 2008) ⇒ Michele Banko, and Oren Etzioni. (2008). “The Tradeoffs Between Open and Traditional Relation Extraction.” In: Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL 2008).
- As with O-NB, O-CRF’s training process is self-supervised. O-CRF applies a handful of relation-independent heuristics to the Penn Treebank and obtains a set of labeled examples in the form of relational tuples. The heuristics were designed to capture dependencies typically obtained via syntactic parsing and semantic role labeling.
2001
- (Wu et al., 2001) ⇒ Y. Wu, T. S. Huang, and K. Toyama. (2001). “Self-Supervised Learning for Object Recognition Based on Kernel Discriminant-EM Algorithm.” In: Proceedings of the IEEE International Conference on Computer Vision.
1995
- (Dayan et al., 1995) ⇒ P. Dayan, Geoffrey E. Hinton, R. M. Neal, and R. S. Zemel. (1995). “The Helmholtz Machine.” In: Neural Computation, 7(5).