Hebbian Learning Algorithm

From GM-RKB

A Hebbian Learning Algorithm is a neural learning algorithm that strengthens a synapse according to a Hebb rule: when the neurons on either side of the synapse (input and output) have highly correlated outputs, the connection between them is reinforced.

  • Context:
    • It can (typically) be applied in unsupervised learning contexts where the system identifies patterns and correlations without explicit external rewards.
    • It can (typically) lead to a gradual strengthening of synaptic connections, making it suitable for modeling processes like habit formation or the gradual learning of new skills.
    • It can (often) serve as the foundation for associative learning models, where simple associations between stimuli and responses are formed.
    • ...
    • It can be summarized by the phrase "cells that fire together wire together," which implies that when two neurons are activated simultaneously, the connection between them is reinforced.
    • It can be utilized in neural network models to simulate the learning process in biological systems, particularly in forming long-term memories.
    • It can be closely related to Spike-Timing-Dependent Plasticity (STDP), a more detailed form of Hebbian learning that takes into account the precise timing of neuron spikes.
    • It can contribute to the development of Self-Organizing Maps (SOMs), which rely on Hebbian-like learning to cluster similar input data together.
    • It can be limited in its ability to handle complex, high-dimensional data without additional mechanisms like competitive learning or normalization.
    • It can be used in models that simulate the effects of learning and memory in the hippocampus and other areas of the brain involved in spatial learning.
    • ...
  • Example(s):
    • a model of classical conditioning where an initially neutral stimulus becomes associated with a significant event due to the simultaneous activation of the neurons representing the stimulus and the event.
    • a neural network that uses Hebbian learning to reinforce connections between neurons that are frequently co-activated in response to specific patterns in the input data.
    • ...
  • Counter-Example(s):
    • a Backpropagation Algorithm, which updates weights from a globally propagated error signal rather than from local activity correlations.
    • ...
  • See: Synaptic Plasticity, Spike-Timing-Dependent Plasticity, Dimensionality Reduction, Self-Organizing Maps, Associative Learning, Unsupervised Learning.
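The rule described above can be written as a weight update Δw = η·y·x, where x is the presynaptic input, y the postsynaptic output, and η a learning rate. A minimal sketch (the network size, constants, and external drive here are illustrative assumptions, not from the article):

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.05):
    """Plain Hebb rule: each weight grows in proportion to the product of
    its presynaptic input x and the postsynaptic output y."""
    return w + eta * y * x

# Hypothetical setup: 3 input neurons feeding one linear output unit;
# inputs 0 and 1 always fire together, input 2 stays silent.
w = np.zeros(3)
x = np.array([1.0, 1.0, 0.0])

for _ in range(20):
    y = w @ x + 1.0  # the output unit is also driven to fire (external drive)
    w = hebbian_update(w, x, y)

# The co-active inputs end with strengthened weights;
# the silent input's weight remains zero.
print(w)
```

Note that the plain rule grows weights without bound when activity stays correlated; variants such as Oja's rule add the normalization mentioned in the context bullets above.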


References

2015

  • http://en.wikibooks.org/wiki/Artificial_Neural_Networks/Hebbian_Learning
    • QUOTE: Hebbian learning is one of the oldest learning algorithms, and is based in large part on the dynamics of biological systems. A synapse between two neurons is strengthened when the neurons on either side of the synapse (input and output) have highly correlated outputs. In essence, when an input neuron fires, if it frequently leads to the firing of the output neuron, the synapse is strengthened. Following the analogy to an artificial system, the tap weight is increased with high correlation between two sequential neurons.
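The quoted mechanism — an input neuron whose firing frequently accompanies the output neuron's firing gets an increased tap weight — can be sketched as a minimal classical-conditioning simulation. All names and constants below are illustrative assumptions:

```python
eta = 0.2
w_cs = 0.0   # weight from the (initially neutral) conditioned stimulus
w_us = 1.0   # fixed, strong weight from the unconditioned stimulus

def response(cs, us):
    # Postsynaptic activity driven by both stimuli.
    return w_cs * cs + w_us * us

# Pairing phase: CS and US are presented together, so the CS input and the
# output are co-active, and the Hebb rule strengthens the CS weight.
for _ in range(10):
    y = response(cs=1.0, us=1.0)
    w_cs += eta * y * 1.0   # Hebb rule: pre (cs) * post (y)

# After pairing, the CS alone now evokes a response.
print(response(cs=1.0, us=0.0))
```

This mirrors the conditioning example above: the initially neutral stimulus acquires an effective connection purely through correlated activation, with no external error signal.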
