Classification Tree Training System
A Classification Tree Training System is a decision tree training system that applies a classification tree learning algorithm to solve a classification tree learning task (to produce a classification tree).
- AKA: Decision Tree Classifier.
- Context:
- It can be a component of a Decision Tree Ensemble Learning Classifier.
- Example(s):
- a C4.5 System.
- an rpart System, such as an rpart R Package.
- a jforests System, a Java library.
- an SPSS AnswerTree System.
- a Salford Systems RandomForests System, http://www.salford-systems.com/en/products/randomforests
- a Microsoft SQL Server Decision Tree System, http://technet.microsoft.com/en-us/library/cc645868.aspx
- an Excel-based Decision Tree Training System, such as: https://sites.google.com/site/simpledecisiontree/
- a sklearn.tree.DecisionTreeClassifier, from Scikit-Learn.
- …
- Counter-Example(s):
- See: Kernel-based Classification Algorithm, Supervised Learning Task, Classification Task, Decision Tree, Classification Rule.
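The scikit-learn system listed among the examples can be exercised in a few lines. The following is a minimal usage sketch with a hand-made, hypothetical toy dataset (the data and `random_state` value are illustrative choices, not from any source above):

```python
# Minimal usage sketch of sklearn.tree.DecisionTreeClassifier, one of the
# Classification Tree Training Systems listed above.
from sklearn.tree import DecisionTreeClassifier

# Toy training set: two features, two perfectly separable classes.
X = [[0, 0], [1, 0], [0, 1], [2, 2], [3, 2], [2, 3]]
y = [0, 0, 0, 1, 1, 1]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)  # induces the classification tree from the data

print(clf.predict([[0, 0], [3, 3]]))  # class labels read off at the leaves
```

Because the classes are separable and no pruning is applied by default, the induced tree reproduces the training labels exactly.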
References
2017a
- (Scikit-Learn, 2017) ⇒ "1.10. Decision Trees" http://scikit-learn.org/stable/modules/tree.html Retrieved on 2017-10-15.
- QUOTE: Decision Trees (DTs) are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
For instance, in the example below, decision trees learn from data to approximate a sine curve with a set of if-then-else decision rules. The deeper the tree, the more complex the decision rules and the fitter the model.
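The sine-curve example mentioned in the quote can be sketched as follows. The data, depths, and error measure here are illustrative assumptions, not taken from the scikit-learn page itself:

```python
# Sketch of the quoted idea: a decision tree regressor learns if-then-else
# rules that approximate sin(x), and a deeper tree (more complex rules)
# fits the curve more closely than a shallow one.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

X = np.linspace(0, 5, 100).reshape(-1, 1)  # noise-free inputs
y = np.sin(X).ravel()

shallow = DecisionTreeRegressor(max_depth=2).fit(X, y)  # few, coarse rules
deep = DecisionTreeRegressor(max_depth=5).fit(X, y)     # many, finer rules

print(np.abs(shallow.predict(X) - y).max())  # larger worst-case error
print(np.abs(deep.predict(X) - y).max())     # smaller worst-case error
```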
2017b
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Decision_tree_learning Retrieved on 2017-10-15.
- Decision tree learning uses a decision tree (as a predictive model) to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modelling approaches used in statistics, data mining and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees ; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees.
In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data (but the resulting classification tree can be an input for decision making). This page deals with decision trees in data mining.
2017c
- (Fürnkranz, 2017) ⇒ Johannes Fürnkranz (2017). "Decision Tree". In: Encyclopedia of Machine Learning and Data Mining, pp 330-335.
- ABSTRACT: The induction of decision trees is one of the oldest and most popular techniques for learning discriminatory models, which has been developed independently in the statistical (Breiman et al. 1984[1]; Kass 1980 [2]) and machine learning (Hunt et al. 1966 [3]; Quinlan 1983[4], 1986[5]) communities. A decision tree is a tree-structured classification model, which is easy to understand, even by non-expert users, and can be efficiently induced from data. An extensive survey of decision-tree learning can be found in Murthy (1998).
2011
- (Loh, 2011) ⇒ Loh, W. Y. (2011). Classification and regression trees. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 1(1), 14-23.
- ABSTRACT: Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples.
2009
- (Dobra, 2009) ⇒ Alin Dobra. (2009). "Decision Tree Classification". In: Encyclopedia of Database Systems, pp 765-769.
- QUOTE: Decision tree classifiers are decision trees used for classification. As any other classifier, the decision tree classifiers use values of attributes/features of the data to make a class label (discrete) prediction. Structurally, decision tree classifiers are organized like a decision tree in which simple conditions on (usually single) attributes label the edge between an intermediate node and its children. Leaves are labeled by class label predictions. A large number of learning methods have been proposed for decision tree classifiers. Most methods have a tree growing and a pruning phase. The tree growing is recursive and consists in selecting an attribute to split on and actual splitting conditions then recurring on the children until the data corresponding to that path is pure or too small in size. The pruning phase eliminates part of the bottom of the tree that learned noise from the data in order to improve the (...)
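The recursive grow phase Dobra describes — select an attribute and splitting condition, recurse on the children until the data on a path is pure or too small — can be sketched in plain Python. The split score below is a simple misclassification count, an illustrative stand-in for the impurity measures (e.g., Gini, entropy) real systems use, and no pruning phase is shown:

```python
# Toy sketch of recursive classification-tree growing: at each node, try
# every (attribute, threshold) split, keep the one with the fewest
# majority-vote errors, and recurse until the node is pure or too small.
from collections import Counter

def grow(rows, labels, min_size=2):
    # Stopping rule: pure node or too little data -> leaf with majority label.
    if len(set(labels)) == 1 or len(rows) < min_size:
        return Counter(labels).most_common(1)[0][0]
    best = None
    for attr in range(len(rows[0])):
        for threshold in {r[attr] for r in rows}:
            left = [i for i, r in enumerate(rows) if r[attr] <= threshold]
            right = [i for i in range(len(rows)) if i not in left]
            if not left or not right:
                continue  # degenerate split
            # Errors if each child predicted its own majority class.
            err = sum(len(side) - Counter(labels[i] for i in side).most_common(1)[0][1]
                      for side in (left, right))
            if best is None or err < best[0]:
                best = (err, attr, threshold, left, right)
    if best is None:  # no usable split: fall back to a leaf
        return Counter(labels).most_common(1)[0][0]
    _, attr, threshold, left, right = best
    return (attr, threshold,
            grow([rows[i] for i in left], [labels[i] for i in left], min_size),
            grow([rows[i] for i in right], [labels[i] for i in right], min_size))

def predict(node, row):
    # Internal nodes are (attribute, threshold, left_child, right_child);
    # leaves are bare class labels.
    while isinstance(node, tuple):
        attr, threshold, low, high = node
        node = low if row[attr] <= threshold else high
    return node
```

On a one-feature dataset with two well-separated classes, `grow` finds the single split that makes both children pure and returns a depth-one tree.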
- ↑ Breiman L, Friedman JH, Olshen R, Stone C (1984) Classification and regression trees. Wadsworth & Brooks, Pacific Grove
- ↑ Kass GV (1980) An exploratory technique for investigating large quantities of categorical data. Appl Stat 29:119–127
- ↑ Hunt EB, Marin J, Stone PJ (1966) Experiments in induction. Academic, New York
- ↑ Quinlan JR (1983) Learning efficient classification procedures and their application to chess end games. In: Michalski RS, Carbonell JG, Mitchell TM (eds) Machine learning. An artificial intelligence approach, Tioga, Palo Alto, pp 463–482
- ↑ Quinlan JR (1986) Induction of decision trees. Mach Learn 1:81–106