Labeled Training Dataset
A Labeled Training Dataset is a structured dataset composed of predictor data columns and a label column, i.e. a labeled dataset used as training data.
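The sketch below (plain Python; the column names and values are hypothetical, loosely echoing a UCI iris-style dataset) illustrates this structure: each record pairs predictor columns with a single label column, yielding the (x_n, y_n) pairing assumed in the references below.

```python
# A minimal sketch of a labeled training dataset: predictor columns plus a
# label column. Column names and values here are hypothetical.
labeled_training_dataset = [
    {"sepal_length": 5.1, "sepal_width": 3.5, "label": "setosa"},
    {"sepal_length": 7.0, "sepal_width": 3.2, "label": "versicolor"},
    {"sepal_length": 6.3, "sepal_width": 3.3, "label": "virginica"},
]

# Split each record into its predictor vector x and its label y.
X = [[r["sepal_length"], r["sepal_width"]] for r in labeled_training_dataset]
y = [r["label"] for r in labeled_training_dataset]
```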
- Context:
- It can (typically) be a Supervised Learning Task Input.
- It can (typically) be in the role of being a Training Dataset.
- It can range from being a Binary Labeled Training Record Set to being a Class Labeled Training Record Set to being a Rank Labeled Training Record Set to being a Number Labeled Training Record Set.
- It can range from being a Fully-Labeled Training Record Set to being a Semi-Labeled Training Record Set.
- It can range from being a Small Labeled Training Dataset to being a Large Labeled Training Dataset.
- It can range from being a Manually-Labeled Training Dataset to being a Heuristically-Labeled Training Dataset.
- Example(s):
- … a UCI Dataset during a training phase.
- …
- Counter-Example(s):
- See: Fully-Supervised Learning Task.
References
2011
- (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Training_set
- A training set is a set of data used in various areas of information science to discover potentially predictive relationships. Training sets are used in artificial intelligence, machine learning, genetic programming, intelligent systems, and statistics. In all these fields, a training set has much the same role and is often used in conjunction with a test set.
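- The following minimal sketch illustrates the training-set/test-set pairing described above; scikit-learn's train_test_split and the toy data are assumptions for illustration, not part of the quoted passage.

```python
# Sketch: holding out a test set from a labeled dataset.
# Assumes scikit-learn is installed; the data here is synthetic.
from sklearn.model_selection import train_test_split

X = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]  # predictor values
y = [0, 0, 0, 1, 1, 1]                          # labels

# The training set is used to fit a model; the held-out test set estimates
# how well the discovered relationship generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0
)
```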
2009
- (Sun & Wu, 2009) ⇒ Yijun Sun, and Dapeng Wu. (2009). “Feature Extraction Through Local Learning.” In: Statistical Analysis and Data Mining, 2(1). doi:10.1002/sam.10028
- Suppose we are given a training dataset [math]\displaystyle{ D = \{(\mathbf{x}_n, y_n)\}_{n=1}^N \in X \times Y }[/math], where [math]\displaystyle{ X \in \mathbb{R}^I }[/math] is the pattern space, [math]\displaystyle{ I }[/math] is the feature dimensionality, and [math]\displaystyle{ Y }[/math] is the label space.
2004
- (Hastie et al., 2004) ⇒ Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. (2004). “The Entire Regularization Path for the Support Vector Machine.” In: The Journal of Machine Learning Research, 5.
- In this paper we study the support vector machine (SVM) (Vapnik, 1996; Schölkopf and Smola, 2001) for two-class classification. We have a set of [math]\displaystyle{ n }[/math] training pairs [math]\displaystyle{ (x_i, y_i) }[/math], where [math]\displaystyle{ x_i \in \mathbb{R}^p }[/math] is a [math]\displaystyle{ p }[/math]-vector of real-valued predictors (attributes) for the [math]\displaystyle{ i }[/math]th observation, and [math]\displaystyle{ y_i \in \{-1, +1\} }[/math] codes its binary response. …
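- The sketch below shows this two-class setup in code, with [math]\displaystyle{ x_i \in \mathbb{R}^p }[/math] and [math]\displaystyle{ y_i \in \{-1, +1\} }[/math]; scikit-learn's SVC is used as a stand-in SVM (an assumption; the paper studies the regularization path itself, not this library).

```python
# Sketch of the two-class setup quoted above: x_i in R^p, y_i in {-1, +1}.
# SVC is a stand-in SVM implementation; the data is synthetic.
import numpy as np
from sklearn.svm import SVC

X = np.array([[-2.0, -1.0], [-1.5, -0.5], [1.0, 1.5], [2.0, 1.0]])  # n x p predictors
y = np.array([-1, -1, 1, 1])  # binary responses coded as {-1, +1}

clf = SVC(kernel="linear", C=1.0)  # C plays the role of the regularization parameter
clf.fit(X, y)
print(clf.predict([[0.5, 0.5]]))
```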
2000
- (Nigam et al., 2000) ⇒ Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom M. Mitchell. (2000). “Text Classification from Labeled and Unlabeled Documents Using EM.” In: Machine Learning, 39(2/3). doi:10.1023/A:1007692713085
- Existing statistical text learning algorithms can be trained to approximately classify documents, given a sufficient set of labeled training examples. … One key difficulty with these current algorithms … is that they require a large … number of labeled training examples to learn accurately.
- … estimating the parameters of the generative model by using a set of labeled training data, [math]\displaystyle{ D = \{d_1, \dots, d_{|D|}\} }[/math].
- … is selected by optimizing accuracy as measured by leave-one-out cross-validation on the labeled training set.
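- A minimal sketch of leave-one-out cross-validation on a labeled training set follows; the Gaussian naive-Bayes classifier and toy data are stand-ins (the paper itself uses a multinomial naive-Bayes text model).

```python
# Sketch of leave-one-out cross-validation on a small labeled training set:
# each example is held out once, and the model is fit on the rest.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.naive_bayes import GaussianNB

X = np.array([[0.0], [0.2], [0.9], [1.1], [1.8], [2.0]])
y = np.array([0, 0, 1, 1, 1, 1])

correct = 0
for train_idx, test_idx in LeaveOneOut().split(X):
    model = GaussianNB().fit(X[train_idx], y[train_idx])
    correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])

print("leave-one-out accuracy:", correct / len(X))
```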
1996a
- (Breiman, 1996) ⇒ Leo Breiman. (1996). “Bagging Predictors.” In: Machine Learning, 24(2). doi:10.1023/A:1018054314350
- QUOTE: A learning set of [math]\displaystyle{ L }[/math] consists of data [math]\displaystyle{ \{(y_n, \mathbf{x}_n), n = 1, \dots, N\} }[/math] where the [math]\displaystyle{ y }[/math]'s are either class labels or a numerical response. We have a procedure for using this learning set to form a predictor [math]\displaystyle{ \varphi (\mathbf{x}, L) }[/math]: if the input is [math]\displaystyle{ \mathbf{x} }[/math] we predict [math]\displaystyle{ y }[/math] by [math]\displaystyle{ \varphi (\mathbf{x}, L) }[/math]. Now, suppose we are given a sequence of learning sets [math]\displaystyle{ \{L_k\} }[/math] each consisting of [math]\displaystyle{ N }[/math] independent observations from the same underlying distribution as [math]\displaystyle{ L }[/math].
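- A minimal sketch of the bagging procedure described above, with bootstrap resampling standing in for the sequence of learning sets [math]\displaystyle{ \{L_k\} }[/math] (as Breiman proposes when only one learning set is available); the decision-tree base learner and toy data are illustrative assumptions.

```python
# Sketch of bagging: form bootstrap learning sets L_k from L, fit a
# predictor phi(x, L_k) on each, and aggregate the predictions by voting.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = np.array([[0.0], [0.5], [1.0], [1.5], [2.0], [2.5]])
y = np.array([0, 0, 0, 1, 1, 1])

predictors = []
for _ in range(25):  # K bootstrap replicates of the learning set
    idx = rng.integers(0, len(X), size=len(X))  # sample N cases with replacement
    predictors.append(DecisionTreeClassifier().fit(X[idx], y[idx]))

votes = np.array([p.predict([[1.2]])[0] for p in predictors])
print("bagged prediction:", np.bincount(votes).argmax())  # plurality vote
```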
1996b
- (Domingos, 1996) ⇒ Pedro Domingos. (1996). “Unifying Instance-based and Rule-based Induction.” In: Machine Learning, 24(2). doi:10.1023/A:1018006431188
- QUOTE: Inductive learning is the explicit or implicit creation of general concept or class descriptions from examples. Many induction problems can be described as follows. A training set of preclassified examples is given, where each example (also called observation or case) is described by a vector of features or attribute values, and the goal is to form a description that can be used to classify previously unseen examples with high accuracy.
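- As a hedged illustration, the sketch below classifies an unseen example from a training set of preclassified feature vectors using a 1-nearest-neighbor rule, one simple instance-based inducer; Domingos' RISE algorithm, which unifies instance- and rule-based induction, is not reproduced here.

```python
# Sketch: classify a previously unseen example from a training set of
# preclassified examples via a 1-nearest-neighbor rule.
def classify_1nn(training_set, query):
    """training_set: list of (feature_vector, class_label) pairs."""
    def distance(a, b):
        # squared Euclidean distance between two feature vectors
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    _, label = min(training_set, key=lambda pair: distance(pair[0], query))
    return label

examples = [([0.0, 0.0], "neg"), ([0.1, 0.3], "neg"), ([1.0, 0.9], "pos")]
print(classify_1nn(examples, [0.8, 1.0]))  # -> "pos"
```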