Probably Approximately Correct Learning
See: Machine Learning Theory; PAC Learning; Statistical Machine Learning; Stochastic Finite Learning; VC Dimension; Computational Complexity of Learning.
References
2011
- (Zeugmann, 2011c) ⇒ Thomas Zeugmann. (2011). “PAC Learning.” In: (Sammut & Webb, 2011) p.745
2009
- http://en.wikipedia.org/wiki/Probably_approximately_correct_learning
- In computational learning theory, probably approximately correct learning (PAC learning) is a framework for mathematical analysis of machine learning. It was proposed in 1984 by Leslie Valiant.
- In this framework, the learner receives samples and must select a generalization function (called the hypothesis) from a certain class of possible functions. The goal is that, with high probability (the "probably" part), the selected function will have low generalization error (the "approximately correct" part). The learner must be able to learn the concept under any approximation ratio, success probability, and distribution of the samples.
- The model was later extended to treat noise (misclassified samples).
- An important innovation of the PAC framework is the introduction of computational complexity theory concepts into machine learning. In particular, the learner is expected to find efficient functions (with time and space requirements bounded by a polynomial in the example size), and the learner itself must implement an efficient procedure (requiring a number of examples bounded by a polynomial in the concept size and in the inverses of the approximation and confidence parameters).
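- The polynomial example counts mentioned above can be made concrete with the standard sample-complexity bound for a finite hypothesis class: a consistent learner needs m ≥ (1/ε)(ln|H| + ln(1/δ)) examples to be, with probability at least 1 − δ, within generalization error ε. A minimal sketch (this bound is textbook PAC theory, not notation from the sources cited here):

```python
import math

def pac_sample_bound(hypothesis_count, epsilon, delta):
    """Examples sufficient for a consistent learner over a finite
    hypothesis class H to be "probably" (confidence 1 - delta)
    "approximately correct" (error at most epsilon):
        m >= (1/epsilon) * (ln|H| + ln(1/delta))
    """
    return math.ceil((math.log(hypothesis_count) + math.log(1.0 / delta)) / epsilon)

# Example: |H| = 2^20 boolean hypotheses, 5% error, 99% confidence.
m = pac_sample_bound(2 ** 20, epsilon=0.05, delta=0.01)
print(m)  # 370
```

Note that the bound grows only logarithmically in |H| and 1/δ but linearly in 1/ε, which is what makes learning over exponentially large hypothesis classes feasible with polynomially many examples.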
1984
- (Valiant, 1984) ⇒ Leslie Valiant. (1984). “A Theory of the Learnable.” In: Communications of the ACM, 27(11).