Automated Predictive Modeling (ML) Task
An Automated Predictive Modeling (ML) Task is a supervised learning task that is also an automated learning task.
- AKA: Supervised Learning with Labeled Data.
- Context:
- Input: one or more Training Datasets (X, y).
- optional: a Validation Dataset and/or a Test Dataset.
- optional: a Learning Metamodel, such as a Decision Tree Metamodel or a Regression Metamodel (a Statistical Metamodel).
- Task Output: one (or more) Predicted Value(s) for each Testing Record's target class.
- Task Requirements:
- A model class, y = f(x, W), in which f() uses model parameters W to map each input vector x to a predicted output y_predicted (a minimal sketch follows the outline below).
- A learning rule that adjusts W so that the predicted outputs fit the observed outputs.
- It can (typically) be supported by a Predictive Modeling Data Preparation Task.
- It can be solved by a Supervised Learning System that applies a Supervised Learning Algorithm.
- It can range from being a Single-Value Supervised Learning Task to a Multi-Value Supervised Learning Task.
- It can range, depending on the availability of unlabeled training records, from being a Fully-Supervised Learning Task to being a Semi-Supervised Learning Task.
- It can range, depending on the domain of the target variable, from being a Supervised Classification Task to being a Supervised Ordinal Estimation Task to being a Supervised Estimation Task.
- It can range, depending on the availability of a metamodel, from being a Model-based Supervised Learning Task to being a Model-free Supervised Learning Task.
- It can range from being a Passive Learning Task to being an Active Learning Task.
- It can be an approach to a Predictive Analytics Task.
- It can range from being a Partially-Automated Supervised Learning Task to being a Fully-Automated Supervised Learning Task.
- It can be affected by Background Knowledge, e.g., fundamental factors that underlie a complex domain (see: Netflix Prize Winner).
- It can be instantiated as a Supervised Learning Act.
- It can range from being a Minimally Supervised Learning Task to being a Heavily Supervised Learning Task.
- …
- Example(s):
- a Supervised Machine Learning Classification Task,
- a Supervised Machine Learning Regression Task,
- a Supervised Artificial Neural Network Training Task,
- a Supervised Learning Benchmark Task, such as from the UCI KDD Archive or KDDCup,
- a Model-constrained Supervised Learning Task, such as a CRF Training Task, an HMM Training Task, or a Decision Tree Learning Task.
- a Predictive Behavioral Targeting Task.
- an Automated Predictive Modeling Task.
- Counter-Example(s):
- an Unsupervised Learning Task.
- a Reinforcement Learning Task.
- See: Predictive Modeling, Cross-Validation Task, Deep Learning Task, Reinforcement Learning.
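The model class and learning rule named in the Task Requirements above can be illustrated with a short sketch. The following is a hypothetical, minimal example (not drawn from any cited source): it assumes a linear model class y = f(x, W) = x·W, synthetic training data, and a squared-error gradient-descent learning rule; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training dataset (X, y): 100 labeled records with 3 input features.
X = rng.normal(size=(100, 3))
true_W = np.array([1.5, -2.0, 0.5])          # unknown to the learner
y = X @ true_W + rng.normal(scale=0.1, size=100)

# Model class: f(x, W) maps each input vector x to a predicted output.
def f(X, W):
    return X @ W

# Learning rule: adjust W by gradient descent so predictions fit the outputs.
W = np.zeros(3)
learning_rate = 0.05
for _ in range(500):
    y_predicted = f(X, W)
    gradient = X.T @ (y_predicted - y) / len(y)
    W -= learning_rate * gradient

print("learned W:", np.round(W, 2))  # close to true_W
```

Any parametric model family and loss-driven update rule could stand in for the linear model and gradient step used here.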
References
2018
- (GeeksforGeeks, 2018) ⇒ https://www.geeksforgeeks.org/regression-classification-supervised-machine-learning/ Retrieved:2018-03-25.
- QUOTE: Supervised Machine Learning: The majority of practical machine learning uses supervised learning. Supervised learning is where you have input variables (x) and an output variable (Y) and you use an algorithm to learn the mapping function from the input to the output Y = f(X) . The goal is to approximate the mapping function so well that when you have new input data (x) that you can predict the output variables (Y) for that data.
Techniques of Supervised Machine Learning algorithms include linear and logistic regression, multi-class classification, Decision Trees and support vector machines. Supervised learning requires that the data used to train the algorithm is already labeled with correct answers. For example, a classification algorithm will learn to identify animals after being trained on a dataset of images that are properly labeled with the species of the animal and some identifying characteristics.
Supervised learning problems can be further grouped into Regression and Classification problems. Both problems have as goal the construction of a succinct model that can predict the value of the dependent attribute from the attribute variables. The difference between the two tasks is the fact that the dependent attribute is numerical for regression and categorical for classification.
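The regression-vs-classification distinction in the quote above can be sketched with scikit-learn, assuming its standard LogisticRegression and LinearRegression estimators and an invented toy dataset; this is an illustrative sketch, not part of the quoted source.

```python
from sklearn.linear_model import LinearRegression, LogisticRegression

# Labeled training data: each row of X is an input vector (x), y_* is the output (Y).
X = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
y_categorical = ["cat", "dog", "cat", "dog"]   # classification: categorical dependent attribute
y_numeric = [1.2, 2.1, 3.3, 4.0]               # regression: numerical dependent attribute

# Classification task: learn a mapping to a class label.
clf = LogisticRegression().fit(X, y_categorical)
print(clf.predict([[2.5, 2.5]]))

# Regression task: learn a mapping to a numeric value.
reg = LinearRegression().fit(X, y_numeric)
print(reg.predict([[2.5, 2.5]]))
```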
2013
- (Wikipedia - Supervised Learning, 2013) ⇒ http://en.wikipedia.org/wiki/Supervised_learning
- Supervised learning is the machine learning task of inferring a function from labeled training data.[1] The training data consist of a set of training examples. In supervised learning, each example is a pair consisting of an input object (typically a vector) and a desired output value (also called the supervisory signal). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow for the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias).
The parallel task in human and animal psychology is often referred to as concept learning.
- ↑ Mehryar Mohri, Afshin Rostamizadeh, Ameet Talwalkar (2012) Foundations of Machine Learning, The MIT Press ISBN 9780262018258.
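As a rough illustration of inferring a function from labeled training data and checking how well it generalizes to unseen instances, the following sketch assumes scikit-learn's Iris dataset, a decision-tree classifier, and a held-out test split; these specific choices are assumptions for illustration, not prescribed by the quoted source.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Labeled training examples: (input vector, desired output value) pairs.
X, y = load_iris(return_X_y=True)

# Hold out unseen instances to check generalization of the inferred function.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The learning algorithm analyzes the training data and produces an inferred function.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Apply the inferred function to unseen instances and measure how well it generalizes.
print(accuracy_score(y_test, model.predict(X_test)))
```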
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). “Supervised Learning.” In: (Sammut & Webb, 2011) p.941
- QUOTE: Supervised learning refers to any machine learning process that learns a function from an input type to an output type using data comprising examples that have both input and output values. Two typical examples of supervised learning are classification learning and regression. In these cases, the output types are respectively categorical (the classes) and numeric. Supervised learning stands in contrast to unsupervised learning, which seeks to learn structure in data, and to reinforcement learning in which sequential decision-making policies are learned from reward with no examples of “correct” behavior.
2009
- http://fourier.eng.hmc.edu/e161/lectures/classification/node1.html
- QUOTE: Supervised methods assume the availability of some a priori knowledge, and they are carried out in two stages:
- Training (learning): The computer is trained by some training data pairs with input and the corresponding desired output, typically provided by experts in the field.
- Testing: Fed with input data, the trained computer carries out classification or recognition.
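The two-stage procedure described above (training on expert-labeled input/output pairs, then testing on new input data) might be sketched as follows; the nearest-neighbor classifier and the toy data are illustrative assumptions, not part of the quoted source.

```python
from sklearn.neighbors import KNeighborsClassifier

# Stage 1 - Training (learning): input vectors paired with desired outputs, as labeled by experts.
X_train = [[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]]
y_train = ["background", "background", "object", "object"]
classifier = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# Stage 2 - Testing: fed with new input data, the trained classifier carries out recognition.
print(classifier.predict([[0.95, 0.9], [0.05, 0.1]]))
```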
1998
- (Kohavi & Provost, 1998) ⇒ Ron Kohavi, and Foster Provost. (1998). “Glossary of Terms.” In: Machine Leanring 30(2-3).
- QUOTE: Supervised learning: Techniques used to learn the relationship between independent attributes and a designated dependent attribute (the label). Most induction algorithms fall into the supervised learning category.