2016 WhyDoesDeepandCheapLearningWork
- (Lin & Tegmark, 2016) ⇒ Henry W. Lin, and Max Tegmark. (2016). “Why Does Deep and Cheap Learning Work So Well?.” In: arXiv:1608.08225.
Subject Headings: Deep Learning; Learning Theory
Notes
Cited By
Quotes
Abstract
We show how the success of deep learning depends not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can be approximated through "cheap learning" with exponentially fewer parameters than generic ones, because they have simplifying properties tracing back to the laws of physics. The exceptional simplicity of physics-based functions hinges on properties such as symmetry, locality, compositionality and polynomial log-probability, and we explore how these properties translate into exceptionally simple neural networks approximating both natural phenomena such as images and abstract representations thereof such as drawings. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine-learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to renormalization group procedures. Various "no-flattening theorems" show when these efficient deep networks cannot be accurately approximated by shallow ones without efficiency loss, even for linear networks.
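The "cheap learning" claim in the abstract rests in part on constructions like the paper's multiplication approximation: the product xy can be computed to arbitrary accuracy by just four neurons, because for any smooth nonlinearity σ with σ''(0) ≠ 0, σ(u) + σ(−u) = 2σ(0) + σ''(0)u² + O(u⁴), and (x+y)² − (x−y)² = 4xy. The sketch below is a minimal illustration of that idea under stated assumptions, not the paper's own code: softplus is chosen here only because its second derivative at 0 is nonzero (it equals 1/4), and the scale parameter `a` is an illustrative choice.

```python
import numpy as np

def softplus(u):
    """Smooth nonlinearity with nonzero second derivative at 0: softplus''(0) = 1/4."""
    return np.log1p(np.exp(u))

def four_neuron_product(x, y, a=0.01):
    """Approximate x*y with four softplus neurons.

    Uses s(u) + s(-u) = 2*s(0) + s''(0)*u**2 + O(u**4), so the combination
    below equals s''(0) * a**2 * 4*x*y + O(a**4); dividing by
    4 * a**2 * s''(0) leaves x*y with O(a**2) error.
    """
    s = softplus
    numerator = (s(a * (x + y)) + s(-a * (x + y))
                 - s(a * (x - y)) - s(-a * (x - y)))
    return numerator / (4 * a**2 * 0.25)  # 0.25 = softplus''(0)

if __name__ == "__main__":
    print(four_neuron_product(3.0, -2.0))  # ~ -6.0; error shrinks as a -> 0
```

Shrinking `a` trades numerical headroom for accuracy. Composing such product gates is one way low-order polynomial functions, of the kind the paper ties to physical Hamiltonians, can be represented with far fewer neurons than a generic function would require.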
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2016 WhyDoesDeepandCheapLearningWork | Max Tegmark, Henry W. Lin | | 2016 | Why Does Deep and Cheap Learning Work So Well? | | arXiv:1608.08225 | https://arxiv.org/abs/1608.08225 | | | 2016 |