Hebbian Learning Algorithm
A Hebbian Learning Algorithm is a neural learning algorithm in which a synapse is strengthened according to a Hebb rule: the connection weight grows when the neurons on either side of the synapse (input and output) have highly correlated outputs.
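In its simplest rate-based form, the rule is commonly written as follows (a standard textbook formulation, not one drawn from the references below):
```latex
\Delta w_{ij} = \eta \, x_i \, y_j
```
where \(w_{ij}\) is the weight of the synapse from input neuron \(i\) to output neuron \(j\), \(x_i\) and \(y_j\) are their activity levels, and \(\eta\) is a small learning rate.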
- Context:
- It can (typically) be applied in unsupervised learning contexts where the system identifies patterns and correlations without explicit external rewards.
- It can (typically) lead to a gradual strengthening of synaptic connections, making it suitable for modeling processes like habit formation or the gradual learning of new skills.
- It can (often) serve as the foundation for associative learning models, where simple associations between stimuli and responses are formed.
- ...
- It can be summarized by the phrase "cells that fire together wire together," which implies that when two neurons are activated simultaneously, the connection between them is reinforced (a minimal weight-update sketch appears after this list).
- It can be utilized in neural network models to simulate the learning process in biological systems, particularly in forming long-term memories.
- It can be closely related to Spike-Timing-Dependent Plasticity (STDP), a more detailed form of Hebbian learning that takes into account the precise timing of neuron spikes (a pair-based STDP update is sketched just before the References section).
- It can contribute to the development of Self-Organizing Maps (SOMs), which rely on Hebbian-like learning to cluster similar input data together.
- It can be limited in its ability to handle complex, high-dimensional data without additional mechanisms like competitive learning or normalization (the Oja-rule variant in the sketch after this list illustrates one such normalization).
- It can be used in models that simulate the effects of learning and memory in the hippocampus and other areas of the brain involved in spatial learning.
- ...
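The following is a minimal sketch of the rate-based rule discussed above, assuming a single linear output neuron and illustrative parameter values; the Oja-rule variant is a standard normalized form of Hebbian learning, included here as one example of the normalization mechanisms mentioned above.
```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.01                        # learning rate (illustrative value)
w = rng.normal(scale=0.1, size=3) # initial synaptic weights

def hebb_update(w, x, eta=eta):
    """Plain Hebb rule: delta_w = eta * y * x, so weights grow without bound."""
    y = w @ x                     # post-synaptic activity of a linear neuron
    return w + eta * y * x

def oja_update(w, x, eta=eta):
    """Oja's rule: Hebbian term plus a decay that keeps ||w|| bounded."""
    y = w @ x
    return w + eta * y * (x - y * w)

# Repeatedly presenting correlated inputs strengthens the weights aligned
# with the dominant direction of correlation in the data.
cov = np.array([[1.0, 0.9, 0.0],
                [0.9, 1.0, 0.0],
                [0.0, 0.0, 0.1]])
for _ in range(2000):
    x = rng.multivariate_normal(np.zeros(3), cov)
    w = oja_update(w, x)

print(w)  # approaches the leading principal component of the input covariance
```
Dropping the `- y * w` decay term recovers the plain Hebb rule, whose weights diverge under repeated correlated input; that divergence is the limitation noted in the list above.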
- Example(s):
- a model of classical conditioning where an initially neutral stimulus becomes associated with a significant event due to the simultaneous activation of the neurons representing the stimulus and the event (see the sketch after this list).
- a neural network that uses Hebbian learning to reinforce connections between neurons that are frequently co-activated in response to specific patterns in the input data.
- ...
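As a toy illustration of the first example, the sketch below assumes binary rate coding and a simple threshold response; the stimulus labels and parameter values are hypothetical.
```python
import numpy as np

eta = 0.2
w = np.array([1.0, 0.0])        # [w_US, w_CS]: the unconditioned stimulus
                                # (US) already drives the response; the
                                # conditioned stimulus (CS) starts unconnected.

def present(stimulus, w):
    """One trial: the response neuron fires if total drive crosses a threshold."""
    y = 1.0 if w @ stimulus > 0.5 else 0.0
    # Hebbian update: only co-active input/output pairs are strengthened.
    return y, w + eta * y * stimulus

# Pairing trials: US and CS presented together, so the response fires
# and the CS weight grows alongside it.
for _ in range(10):
    _, w = present(np.array([1.0, 1.0]), w)

# Test trial: the CS alone now evokes the response.
y, _ = present(np.array([0.0, 1.0]), w)
print(f"CS weight = {w[1]:.2f}, response to CS alone = {y}")
```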
- Counter-Example(s):
- Reinforcement Learning Algorithms, where learning is driven by feedback signals or rewards rather than direct correlations between inputs and outputs.
- a Perceptron Training Algorithm, which uses a supervised approach and error-correction learning rather than correlation-based synaptic strengthening.
- a Backpropagation-of-Error (Backprop) Algorithm, which adjusts weights based on the error gradient rather than correlated activity.
- Evolutionary Algorithms, which optimize neural networks through genetic operations like mutation and crossover, independent of neuron co-activation.
- See: Synaptic Plasticity, Spike-Timing-Dependent Plasticity, Dimensionality Reduction, Self-Organizing Maps, Associative Learning, Unsupervised Learning.
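For the STDP relation noted in the context list, the following is a minimal pair-based sketch using the common exponential window; the amplitudes and time constant are illustrative values, not taken from the references below.
```python
import math

A_plus, A_minus = 0.01, 0.012   # potentiation / depression amplitudes
tau = 20.0                      # time constant of the STDP window (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair at the given times (ms)."""
    dt = t_post - t_pre
    if dt > 0:                  # pre fired first -> potentiation (LTP)
        return A_plus * math.exp(-dt / tau)
    elif dt < 0:                # post fired first -> depression (LTD)
        return -A_minus * math.exp(dt / tau)
    return 0.0                  # simultaneous spikes: no change in this model

# Causal pairing (pre 5 ms before post) strengthens the synapse;
# anti-causal pairing weakens it.
print(stdp_dw(t_pre=0.0, t_post=5.0))   # positive
print(stdp_dw(t_pre=5.0, t_post=0.0))   # negative
```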
References
2015
- http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Hebbian_Learning
- QUOTE: Hebbian learning is one of the oldest learning algorithms, and is based in large part on the dynamics of biological systems. A synapse between two neurons is strengthened when the neurons on either side of the synapse (input and output) have highly correlated outputs. In essence, when an input neuron fires, if it frequently leads to the firing of the output neuron, the synapse is strengthened. Following the analogy to an artificial system, the tap weight is increased with high correlation between two sequential neurons.
2011
- (Sammut & Webb, 2011) ⇒ Claude Sammut, and Geoffrey I. Webb. (2011). “Hebbian Learning.” In: (Sammut & Webb, 2011) p.493
2008
- (Caporale & Dan, 2008) ⇒ Natalia Caporale, and Yang Dan. (2008). “Spike Timing-dependent Plasticity: A Hebbian Learning Rule." Annu. Rev. Neurosci. 31
- QUOTE: Spike timing–dependent plasticity (STDP) as a Hebbian synaptic learning rule has been demonstrated in various neural circuits over a wide spectrum of species, from insects to humans.
2000
- (Song et al., 2000) ⇒ Sen Song, Kenneth D. Miller, and Larry F. Abbott. (2000). “Competitive Hebbian Learning through Spike-timing-dependent Synaptic Plasticity.” Nature Neuroscience 3, no. 9.
- QUOTE: Hebbian models of development and learning require both activity-dependent synaptic plasticity and a mechanism that induces competition between different synapses. …
1996
- (Montague et al., 1996) ⇒ P. Read Montague, Peter Dayan, and Terrence J. Sejnowski. (1996). “A Framework for Mesencephalic Dopamine Systems based on Predictive Hebbian Learning.” The Journal of Neuroscience 16, no. 5.