Supervised Learning Task
A supervised learning task is a learning task with a labeled training set.
- AKA: Learning from Labeled Data.
- Context:
- Input: one or more target attributes and a Labeled Training Record Set (a minimal sketch follows the lists below).
- optional input: a Testing Record Set; an Unlabeled Training Record Set; and/or a Learning Metamodel (such as a Decision Tree Metamodel, a Regression Metamodel, or a Statistical Metamodel).
- output: one (or more) Predicted Value(s) for each Testing Record's target attribute.
- evaluation: a Supervised Learning Evaluation.
- It can (typically) be followed by a Predictive Task.
- It can (typically) be instantiated as a Supervised Learning Act.
- It can range from being a Human-based Supervised Learning Task to being a Supervised Machine Learning Task.
- It can range from being a Single-Value Supervised Learning Task to being a Multi-Value Supervised Learning Task.
- It can range from being a Fully-Supervised Learning Task to being a Semi-Supervised Learning Task, depending on the availability of unlabeled training records.
- It can range from being a Supervised Classification Task to being a Supervised Ordinal Learning Task to being a Supervised Estimation Task, depending on the domain of the target variable.
- It can range from being a Model-based Supervised Learning Task to being a Model-Free Supervised Learning Task, depending on the availability of a metamodel.
- It can range from being a Passive Learning Task to being an Active Learning Task.
- It can be an approach to a Predictive Analytics Task.
- It can be solved by a Supervised Learning System (that implements a supervised learning algorithm).
- It can be affected by Background Knowledge, e.g. fundamental factors that underlie a complex domain (see: Netflix Prize Winner).
- It can range from being a Minimally Supervised Learning Task to being a Heavily Supervised Learning Task.
- ...
- Example(s):
- a Multiple-Instance Learning Task.
- a Supervised Classification Task.
- a Supervised Regression Task.
- a Regression Tree Learning Task.
- a Concept Learning Task.
- a Supervised Learning System Evaluation Task, such as regression system evaluation.
- an Alphabet Learning Task.
- a Model-based Supervised Machine Learning Task.
- …
- Counter-Example(s):
- See: Meta-Learning Task, Multistrategy Learning Task, Deep Learning Task, Dynamic Programming, Educational Data Mining, Numeric Prediction Task, Heuristic Classifier, Active Learning Theory, Adaptive Resonance Theory, Generative Learning Task, Discriminative Learning Task, Divide-and-Conquer Learning Task, Rule Learning Task.
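The task structure described in the Context above (a Labeled Training Record Set and a target attribute as input, Predicted Values for a Testing Record Set as output, followed by a Supervised Learning Evaluation) can be made concrete with a small sketch. The snippet below is illustrative only: it assumes scikit-learn is available, uses a Decision Tree Metamodel, and invents the record values and their meanings.

```python
# Minimal sketch of a supervised learning task (assumes scikit-learn is installed;
# the records, labels, and their meanings are invented for illustration).
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Input: a Labeled Training Record Set with one target attribute.
train_records = [[25, 40000], [47, 85000], [33, 62000], [52, 30000]]
train_labels  = ["no", "yes", "yes", "no"]        # target attribute values

# Optional input: a Testing Record Set (its labels are withheld from the learner).
test_records = [[29, 58000], [61, 95000]]
test_labels  = ["yes", "yes"]

# A Supervised Learning System fits a Decision Tree Metamodel to the labeled data.
model = DecisionTreeClassifier(random_state=0)
model.fit(train_records, train_labels)

# Output: one Predicted Value for each Testing Record's target attribute.
predictions = model.predict(test_records)

# Evaluation: a simple Supervised Learning Evaluation (here, accuracy).
print(list(predictions), accuracy_score(test_labels, predictions))
```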
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/supervised_learning Retrieved:2017-8-8.
- Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias).
The parallel task in human and animal psychology is often referred to as concept learning.
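One way to make the quoted description concrete is a toy sketch in which the "inferred function" is a 1-nearest-neighbour rule learned from (input object, output value) pairs; the example pairs, names, and distance measure below are assumptions made purely for illustration.

```python
# Toy "inferred function": a 1-nearest-neighbour rule over labeled training pairs.
# (Illustrative only; the example pairs and squared-distance measure are assumptions.)

def infer_function(training_pairs):
    """Return a function mapping a new input object to a predicted output value."""
    def predicted_output(x):
        # Generalize to an unseen instance by copying the output value of the
        # closest training input (the inductive bias: nearby inputs share outputs).
        _, nearest_output = min(
            training_pairs,
            key=lambda pair: sum((a - b) ** 2 for a, b in zip(pair[0], x)),
        )
        return nearest_output
    return predicted_output

# Each training example is a pair: an input object (a vector) and a desired output value.
examples = [((0.0, 0.0), "negative"), ((1.0, 1.0), "positive")]
f = infer_function(examples)
print(f((0.9, 0.8)))   # an unseen instance; prints "positive"
```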
2012
- (Liu & Wu, 2012) ⇒ Qiong Liu, and Ying Wu (2012). "Supervised Learning". In: "Encyclopedia of the Sciences of Learning" (Editor: Norbert M. Seel), pp. 3243-3245.
- QUOTE: Supervised Learning is a machine learning paradigm for acquiring the input-output relationship information of a system based on a given set of paired input-output training samples. As the output is regarded as the label of the input data or the supervision, an input-output training sample is also called labeled training data, or supervised data. Occasionally, it is also referred to as Learning with a Teacher (Haykin 1998), Learning from Labeled Data, or Inductive Machine Learning (Kotsiantis 2007). The goal of supervised learning is to build an artificial system that can learn the mapping between the input and the output, and can predict the output of the system given new inputs. If the output takes a finite set of discrete values that indicate the class labels of the input, (...)
2011A
- (Sammut & Webb, 2011) ⇒ Claude Sammut (editor), and Geoffrey I. Webb (editor). (2011). "Supervised Learning". In: "Encyclopedia of Machine Learning" (Sammut & Webb, 2011, pp. 1213-1214). Springer US.
- QUOTE: Supervised learning refers to any machine learning process that learns a function from an input type to an output type using data comprising examples that have both input and output values. Two typical examples of supervised learning are classification learning and regression. In these cases, the output types are respectively categorical (the classes) and numeric. Supervised learning stands in contrast to unsupervised learning, which seeks to learn structure in data, and to reinforcement learning in which sequential decision-making policies are learned from reward with no examples of “correct” behavior.
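The classification-versus-regression distinction drawn in this entry can be illustrated with a brief sketch; it assumes scikit-learn and invents the toy inputs and outputs.

```python
# Sketch of the two typical supervised learning settings: classification and regression.
# (Assumes scikit-learn; the inputs and outputs are invented toy values.)
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LinearRegression

X = [[1.0], [2.0], [3.0], [4.0]]            # input values

# Classification learning: the output type is categorical (the classes).
y_categorical = ["low", "low", "high", "high"]
clf = DecisionTreeClassifier(random_state=0).fit(X, y_categorical)
print(clf.predict([[3.5]]))                 # -> ['high']

# Regression: the output type is numeric.
y_numeric = [1.1, 2.0, 2.9, 4.2]
reg = LinearRegression().fit(X, y_numeric)
print(reg.predict([[3.5]]))                 # -> a numeric estimate near 3.5
```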
2011B
- (Sebag, 2011) ⇒ Michele Sebag. (2011). "Nonstandard Criteria in Evolutionary Learning". In: "Encyclopedia of Machine Learning" (Sammut & Webb, 2011, pp. 722-731). Springer US.
- QUOTE: Machine learning (ML), primarily concerned with extracting models or hypotheses from data, comes into three main flavors: supervised learning also known as classification or regression (Bishop 2006; Duda et al. 2001; Han and Kamber 2000), unsupervised learning also known as clustering (Ben-David et al. 2005), and reinforcement learning (Sutton and Barto 1998).
2009
- (ClopiNet, 2009) ⇒ http://clopinet.com/isabelle/Projects/ETH/Exam_Questions.html
- Supervised learning refers to learning in the presence of a teacher. When trying to learn to classify objects, the teaching signal is the class label. In this class, data objects are represented as vectors "x" of variables or "features". We seek to predict an attribute "y" of these data objects, that is another variable. A continuous variable is a real number. Both categorical and ordinal variables take values from a finite set of choices. For categorical inputs the list is not ordered (e.g. the country of origin) while for ordinal inputs it is ordered (e.g. three clinical stages in the advancement of a disease). Regardless of the type of inputs, if the output is continuous, the problem is a regression problem; if the output is categorical, the problem is a classification problem. "Ordinal regression" problems have ordinal outputs.
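Read operationally, the passage keys the problem type off the output variable alone. A crude, hedged helper along those lines (the function name, the `ordered` flag, and the numeric test are illustrative assumptions, not part of the source) might look like:

```python
# Toy rule of thumb from the passage above: the output variable's type names the task.
# (The helper name, the ordered flag, and the crude numeric test are illustrative assumptions.)

def problem_type(output_values, ordered=False):
    """Name a supervised learning problem by inspecting its output variable."""
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in output_values):
        # A continuous (real-valued) output -> regression problem.
        return "regression"
    if ordered:
        # A finite, ordered set of output choices -> ordinal regression problem.
        return "ordinal regression"
    # A finite, unordered set of output choices -> classification problem.
    return "classification"

print(problem_type([3.2, 1.7, 2.9]))                                      # regression
print(problem_type(["Finland", "France", "Japan"]))                       # classification
print(problem_type(["stage I", "stage II", "stage III"], ordered=True))   # ordinal regression
```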
2000a
- (Valpola, 2000) ⇒ Harri Valpola. (2000). “Bayesian Ensemble Learning for Nonlinear Factor Analysis." PhD Dissertation, Helsinki University of Technology.
- QUOTE: supervised learning: Aims at building a model which can mimic the responses of a “teacher” who provides two sets of observations: inputs and the corresponding desired outputs.
2000b
- (Witten & Frank, 2000) ⇒ Ian H. Witten, and Eibe Frank. (2000). “Data Mining: Practical Machine Learning Tools and Techniques with Java implementations." Morgan Kaufmann.