Deep Learning Algorithm
A Deep Learning Algorithm is a neural-network learning algorithm that can be implemented by a deep learning system to solve a deep learning task, i.e., a task that calls for learning progressively higher-level predictor features.
- Context:
- …
- Example(s):
- Counter-Example(s):
- See: Learning Representation, Vector (Mathematics), NLP Algorithm.
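As an informal illustration (a minimal sketch under assumed toy sizes, not a canonical implementation), the JAX program below trains a small multi-layer network with gradient descent; each hidden layer re-represents the output of the layer below it, which is the sense in which such an algorithm learns higher-level predictor features.

```python
# Minimal sketch (assumes JAX): a three-layer network trained by gradient
# descent; each hidden layer re-represents its input as new features.
import jax
import jax.numpy as jnp

def init_params(key, sizes=(8, 16, 16, 1)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (m, n)) * 0.1, jnp.zeros(n))
            for k, m, n in zip(keys, sizes[:-1], sizes[1:])]

def forward(params, x):
    for W, b in params[:-1]:
        x = jax.nn.relu(x @ W + b)        # hidden layers: learned features
    W, b = params[-1]
    return x @ W + b                      # linear output layer

def loss(params, x, y):
    return jnp.mean((forward(params, x) - y) ** 2)

key = jax.random.PRNGKey(0)
params = init_params(key)
x = jax.random.normal(key, (32, 8))       # toy input batch
y = jnp.sum(x, axis=1, keepdims=True)     # toy regression target
for _ in range(100):
    grads = jax.grad(loss)(params, x, y)  # backpropagation
    params = [(W - 0.01 * gW, b - 0.01 * gb)
              for (W, b), (gW, gb) in zip(params, grads)]
```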
References
2018
- Yann LeCun. http://facebook.com/yann.lecun/posts/10155003011462143
- QUOTE: Differentiable Programming is little more than a rebranding of the modern collection [of] Deep Learning techniques, the same way Deep Learning was a rebranding of the modern incarnations of neural nets with more than two layers.
But the important point is that people are now building a new kind of software by assembling networks of parameterized functional blocks and by training them from examples using some form of gradient-based optimization.
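A hedged sketch of the quoted idea (assuming JAX; illustrative only, not LeCun's code): a small "program" assembled from parameterized functional blocks is differentiable end to end, so it can be trained from examples by gradient-based optimization.

```python
# Sketch (assumes JAX): a tiny "program" assembled from parameterized
# blocks; jax.grad differentiates through the whole composition.
import jax
import jax.numpy as jnp

def block(theta, x):                      # one parameterized functional block
    return jnp.tanh(theta * x)

def program(thetas, x):                   # a chain of blocks
    for theta in thetas:
        x = block(theta, x)
    return x

def loss(thetas, x, target):
    return (program(thetas, x) - target) ** 2

thetas = jnp.array([0.5, 1.5, -0.3])      # the program's trainable parameters
x, target = 2.0, 0.7
for _ in range(200):                      # gradient-based optimization
    thetas = thetas - 0.1 * jax.grad(loss)(thetas, x, target)
```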
2016
- http://www.quora.com/What-are-the-likely-AI-advancements-in-the-next-5-to-10-years
- Yann LeCun: There are a number of areas on which people are working hard and making promising advances:
- deep learning combined with reasoning and planning.
- deep model-based reinforcement learning (which involves unsupervised predictive learning)
- recurrent neural nets augmented with differentiable memory modules (e.g., Memory Networks, …)
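A toy sketch of the last item in the list above (assuming JAX; loosely in the spirit of Memory Networks rather than their published architecture): a recurrent step whose state update includes a differentiable soft-attention read from a memory matrix.

```python
# Toy sketch (assumes JAX): a recurrent step augmented with a differentiable
# memory read via soft attention, loosely in the spirit of Memory Networks.
import jax
import jax.numpy as jnp

def step(params, h, x, memory):
    Wh, Wx, Wm = params
    attn = jax.nn.softmax(memory @ h)     # differentiable addressing weights
    read = attn @ memory                  # soft read over memory slots
    return jnp.tanh(Wh @ h + Wx @ x + Wm @ read)

key = jax.random.PRNGKey(0)
d, slots = 4, 6
params = tuple(jax.random.normal(k, (d, d)) * 0.1
               for k in jax.random.split(key, 3))
memory = jax.random.normal(key, (slots, d))
h = jnp.zeros(d)
for x in jax.random.normal(key, (5, d)):  # unroll over a short input sequence
    h = step(params, h, x, memory)
```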
2015
- (Bengio et al., 2015) ⇒ Yoshua Bengio, Ian J. Goodfellow, and Aaron Courville. (2015). “Deep Learning.” Draft version.
- (Gomes, 2015) ⇒ Lee Gomes. (2015). “Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter.” In: Spectrum, 18 Feb. 2015.
- QUOTE: LeCun - … A lot of us involved in the resurgence of Deep Learning in the mid-2000s, including Geoff Hinton, Yoshua Bengio, and myself — the so-called “Deep Learning conspiracy” — as well as Andrew Ng, started with the idea of using unsupervised learning more than supervised learning. Unsupervised learning could help “pre-train” very deep networks. We had quite a bit of success with this, but in the end, what ended up actually working in practice was good old supervised learning, but combined with convolutional nets, which we had over 20 years ago.
But from a research point of view, what we’ve been interested in is how to do unsupervised learning properly. We now have unsupervised techniques that actually work. The problem is that you can beat them by just collecting more data, and then using supervised learning. This is why in industry, the applications of Deep Learning are currently all supervised. But it won’t be that way in the future.
The bottom line is that the brain is much better than our model at doing unsupervised learning. That means that our artificial learning systems are missing some very basic principles of biological learning. …
… A lot of people are working on what’s called “recurrent neural nets.” These are networks where the output is fed back to the input, so you can have a chain of reasoning. You can use this to process sequential signals, like speech, audio, video, and language. There are preliminary results that are pretty good. The next frontier for Deep Learning is natural language understanding. …
… The question here is how to represent knowledge. In “traditional” AI, factual knowledge is entered manually, often in the form of a graph, that is, a set of symbols or entities and relationships. But we all know that AI systems need to be able to acquire knowledge automatically through learning. The question becomes, “How can machines learn to represent relational and factual knowledge?” Deep Learning is certainly part of the solution, but it’s not the whole answer. The problem with symbols is that a symbol is a meaningless string of bits. In Deep Learning systems, entities are represented by large vectors of numbers that are learned from data and represent their properties. Learning to reason comes down to learning functions that operate on these vectors. …
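A toy sketch of the quoted idea (assuming JAX; the translation-based relational score below is an illustrative stand-in, not LeCun's method): entities are learned vectors, and a fact's plausibility is a learned function of those vectors.

```python
# Toy sketch (assumes JAX): entities as learned vectors; relational
# plausibility as a learned function of those vectors (translation-style
# scoring used purely as an illustration).
import jax
import jax.numpy as jnp

def score(entities, relation, head, tail):
    # Higher score = the fact (head, relation, tail) is more plausible.
    return -jnp.linalg.norm(entities[head] + relation - entities[tail])

def loss(params, head, tail):
    entities, relation = params
    return -score(entities, relation, head, tail)  # raise the fact's score

key = jax.random.PRNGKey(0)
entities = jax.random.normal(key, (10, 8))   # 10 entities, 8-d vectors
relation = jax.random.normal(key, (8,))      # one learned relation vector
params = (entities, relation)
for _ in range(100):                         # learn the representations
    g_e, g_r = jax.grad(loss)(params, 0, 3)
    params = (params[0] - 0.05 * g_e, params[1] - 0.05 * g_r)
```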
2014
- (Wikipedia, 2014) ⇒ http://en.wikipedia.org/wiki/Deep_learning Retrieved:2014-8-24.
- Deep learning is a set of algorithms in machine learning that attempt to model high-level abstractions in data by using model architectures composed of multiple non-linear transformations.[1]
Deep learning is part of a broader family of machine learning methods based on learning representations of data. An observation (e.g., an image) can be represented in many ways (e.g., a vector of pixels), but some representations make it easier to learn tasks of interest (e.g., is this the image of a human face?) from examples, and research in this area attempts to define what makes better representations and how to create models to learn these representations.
Various deep learning architectures such as deep neural networks, convolutional deep neural networks, and deep belief networks have been applied to fields like computer vision, automatic speech recognition, natural language processing, and music/audio signal recognition where they have been shown to produce state-of-the-art results on various tasks.
- ↑ Y. Bengio, A. Courville, and P. Vincent, "Representation Learning: A Review and New Perspectives," IEEE Trans. PAMI, special issue Learning Deep Architectures, 2013
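A hedged sketch of representation learning in this sense (assuming JAX; all sizes illustrative): a tiny autoencoder learns a re-representation of its input without labels, the kind of learned representation the quoted passage describes.

```python
# Sketch (assumes JAX): a tiny autoencoder; the code h is a learned
# representation of the observation x, trained without labels.
import jax
import jax.numpy as jnp

def encode(params, x):
    We, be, _, _ = params
    return jax.nn.sigmoid(x @ We + be)       # learned representation

def reconstruct(params, x):
    _, _, Wd, bd = params
    return encode(params, x) @ Wd + bd       # decode back to input space

def loss(params, x):
    return jnp.mean((reconstruct(params, x) - x) ** 2)

k1, k2 = jax.random.split(jax.random.PRNGKey(0))
d, code = 16, 4
params = (jax.random.normal(k1, (d, code)) * 0.1, jnp.zeros(code),
          jax.random.normal(k2, (code, d)) * 0.1, jnp.zeros(d))
x = jax.random.normal(k1, (64, d))           # toy observations
for _ in range(200):
    grads = jax.grad(loss)(params, x)
    params = tuple(p - 0.05 * g for p, g in zip(params, grads))
```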
2013
- (Bengio et al., 2013) ⇒ Yoshua Bengio, Aaron Courville, and Pascal Vincent. (2013). “Representation Learning: A Review and New Perspectives.” In: IEEE Transactions on Pattern Analysis and Machine Intelligence Journal, 35(8). doi:10.1109/TPAMI.2013.50
2010
- (Erhan et al., 2010) ⇒ Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. (2010). “Why Does Unsupervised Pre-training Help Deep Learning?” In: The Journal of Machine Learning Research, 11.