Statistical Inference Theory
A Statistical Inference Theory is a Statistical Theory that attempts to create meta-models for Stochastic Processes.
- AKA: Statistical Decision Theory.
- Context:
- It can define a Statistical Model.
- It can define a Statistical Learning Algorithm/Statistical Modeling Algorithm.
- It can be the focus of a Statistical Learning Theory Discipline.
- It can assume that the Probability Distribution that generates the data is fixed but unknown.
- It can assume that the Training Data is an IID Sample (see the empirical risk minimization sketch below).
- Example(s):
- Counter-Example(s):
- See: Statistical Learning Framework, Supervised Learning, Unsupervised Learning, Online Learning, Reinforcement Learning, Dimensionality Reduction, Artificial Neural Network, Clustering Algorithm, Structured Prediction, Anomaly Detection, Rademacher Complexity.
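The Context assumptions above (a fixed but unknown data distribution, an IID training sample) can be illustrated with a minimal empirical risk minimization sketch. Everything below (the hidden linear rule, the noise level, the sample size) is an illustrative assumption, not drawn from any cited source.

```python
import random

# Minimal empirical risk minimization (ERM) sketch.
# Assumption: the data are IID draws from a fixed but unknown
# distribution; here it is simulated by a hidden linear rule plus noise.
random.seed(0)

def sample_iid(n):
    """Draw n IID (x, y) pairs from the (unknown to the learner) distribution."""
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)
        y = 2.0 * x + 0.5 + random.gauss(0.0, 0.1)  # hidden rule + noise
        data.append((x, y))
    return data

def empirical_risk(w, b, data):
    """Average squared loss of the hypothesis h(x) = w*x + b on the sample."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def erm_linear(data):
    """Empirical risk minimizer over linear hypotheses (closed-form least squares)."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    cov = sum((x - mx) * (y - my) for x, y in data)
    var = sum((x - mx) ** 2 for x, _ in data)
    w = cov / var
    b = my - w * mx
    return w, b

train = sample_iid(200)
w, b = erm_linear(train)
print("learned hypothesis: h(x) = %.3f*x + %.3f" % (w, b))
print("empirical risk on the training sample: %.5f" % empirical_risk(w, b, train))
```

Because the sample is IID from a fixed distribution, the empirical risk of the learned hypothesis concentrates around its true risk as the sample grows, which is the basic setting the Context bullets describe.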
References
2020
- (Wikipedia, 2020) ⇒ https://en.wikipedia.org/wiki/Statistical_learning_theory Retrieved:2020-2-1.
- Statistical learning theory is a framework for machine learning drawing from the fields of statistics and functional analysis.[1] Statistical learning theory deals with the problem of finding a predictive function based on data. Statistical learning theory has led to successful applications in fields such as computer vision, speech recognition, and bioinformatics.
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Statistical_inference
- In statistics, statistical inference is the process of drawing conclusions from data that are subject to random variation, for example, observational errors or sampling variation.[2] Initial requirements of such a system of procedures for inference and induction are that the system should produce reasonable answers when applied to well-defined situations and that it should be general enough to be applied across a range of situations. Inferential statistics are used to test hypotheses and make estimations using sample data. Whereas descriptive statistics describe a sample, inferential statistics infer predictions about a larger population than the sample represents.
The outcome of statistical inference may be an answer to the question "what should be done next?", where this might be a decision about making further experiments or surveys, or about drawing a conclusion before implementing some organizational or governmental policy.
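As a concrete instance of the quoted distinction between descriptive and inferential statistics, the following hedged sketch estimates a population mean from a sample with a normal-approximation 95% confidence interval; the sample values and the confidence level are illustrative assumptions.

```python
import math

# Hedged sketch: inferring a population mean from a sample.
# The sample mean and variance are descriptive statistics;
# the confidence interval is the inferential step about the
# larger population the sample represents.
sample = [4.8, 5.1, 4.9, 5.3, 5.0, 4.7, 5.2, 5.1, 4.9, 5.0]  # illustrative data

n = len(sample)
mean = sum(sample) / n                                 # descriptive: sample mean
var = sum((x - mean) ** 2 for x in sample) / (n - 1)   # unbiased sample variance
se = math.sqrt(var / n)                                # standard error of the mean

z = 1.96  # ~97.5th percentile of the standard normal, for a 95% interval
lo, hi = mean - z * se, mean + z * se                  # inferential: population-mean interval
print("sample mean = %.3f, 95%% CI = (%.3f, %.3f)" % (mean, lo, hi))
```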
- ↑ Trevor Hastie, Robert Tibshirani, Jerome Friedman (2009) The Elements of Statistical Learning, Springer-Verlag.
- ↑ Upton, G., Cook, I. (2008) Oxford Dictionary of Statistics, OUP. ISBN 978-0-19-954145-4
2011
- http://en.wikipedia.org/wiki/Decision_theory#Probability_theory
- QUOTE: Advocates of probability theory point to: ... the complete class theorems, which show that all admissible decision rules are equivalent to the Bayesian decision rule for some utility function and some prior distribution (or for the limit of a sequence of prior distributions). Thus, for every decision rule, either the rule may be reformulated as a Bayesian procedure, or there is a (perhaps limiting) Bayesian rule that is sometimes better and never worse.
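The quoted complete class result can be made concrete with a toy Bayesian decision rule that picks the action minimizing posterior expected loss. The prior, the test likelihoods, and the loss table below are made-up illustrative numbers, not taken from the quoted source.

```python
# Toy Bayesian decision rule: after observing a positive test,
# choose the action with minimal posterior expected loss.
# All numbers are illustrative assumptions.

prior = {"disease": 0.01, "healthy": 0.99}
likelihood_pos = {"disease": 0.95, "healthy": 0.05}  # P(positive test | state)

# loss[action][state]: cost of taking `action` when `state` is true.
loss = {
    "treat":    {"disease": 1.0,   "healthy": 10.0},
    "no_treat": {"disease": 100.0, "healthy": 0.0},
}

def posterior_after_positive():
    """Bayes' rule for the two-state problem given a positive test."""
    joint = {s: prior[s] * likelihood_pos[s] for s in prior}
    z = sum(joint.values())
    return {s: j / z for s, j in joint.items()}

def bayes_action(post):
    """Return the action minimizing posterior expected loss."""
    expected = {a: sum(post[s] * loss[a][s] for s in post) for a in loss}
    return min(expected, key=expected.get), expected

post = posterior_after_positive()
action, expected = bayes_action(post)
print("posterior:", post)
print("expected losses:", expected, "-> Bayes action:", action)
```

With these numbers the posterior probability of disease after a positive test is about 0.16, yet the much larger loss of missing a disease makes "treat" the Bayes action; changing the prior or the loss table changes the rule, which is the sense in which each admissible rule corresponds to some prior and utility.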
2009
- (Hastie et al., 2009) ⇒ Trevor Hastie, Robert Tibshirani, and Jerome H. Friedman. (2009). “The Elements of Statistical Learning: Data Mining, Inference, and Prediction; 2nd edition.” Springer-Verlag. ISBN:0387848576
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Decision_theory#Statistical_decision_theory
2008
- (Vapnik, 2008) ⇒ Vladimir N. Vapnik. (2008). “COLT Interview - Vladimir Vapnik.” http://www.learningtheory.org/index.php?view=article&id=9
- My current research interest is to develop advanced models of empirical inference. I think that the problem of machine learning is not just a technical problem. It is a general problem of philosophy of empirical inference. One of the ways for inference is induction. The main philosophy of inference developed in the past strongly connected the empirical inference to the inductive learning process. I believe that induction is a rather restrictive model of learning and I am trying to develop more advanced models. First, I am trying to develop non-inductive methods of inference, such as transductive inference, selective inference, and many other options. Second, I am trying to introduce non-classical ways of inference.
2000
- (Vapnik, 2000) ⇒ Vladimir N. Vapnik. (2000). “The Nature of Statistical Learning Theory (2nd Edition).” Springer. ISBN:0387987800
1998
- (Vapnik, 1998) ⇒ Vladimir N. Vapnik. (1998). “Statistical Learning Theory.” John Wiley. ISBN:0471030031
1995
- (Vapnik, 1995) ⇒ Vladimir N. Vapnik. (1995). “The Nature of Statistical Learning Theory.” Springer. ISBN:0387945598
1970
- (DeGroot, 1970) ⇒ Morris H. DeGroot. (1970). “Optimal Statistical Decisions.” McGraw-Hill. ISBN:047168029X
1955
- (Fisher, 1955) ⇒ Ronald Fisher. (1955). “Statistical Methods and Scientific Induction.” In: Journal of the Royal Statistical Society. Series B, 17(1). http://www.jstor.org/stable/2983785