Spiking Neural Network (SNN)


A Spiking Neural Network (SNN) is a type of artificial neural network that more closely mimics the behavior of biological neural networks by incorporating the concept of time into its operations.



References

2024

  • (Wikipedia, 2024) ⇒ https://en.wikipedia.org/wiki/Spiking_neural_network Retrieved:2024-8-15.
    • Spiking neural networks (SNNs) are artificial neural networks (ANN) that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not transmit information at each propagation cycle (as happens with typical multi-layer perceptron networks), but rather transmit information only when a membrane potential—an intrinsic quality of the neuron related to its membrane electrical charge—reaches a specific value, called the threshold. When the membrane potential reaches the threshold, the neuron fires and generates a signal that travels to other neurons, which, in turn, increase or decrease their potentials in response to this signal. A neuron model that fires at the moment of threshold crossing is also called a spiking neuron model.

      Although it was previously believed that the brain encoded information through spike rates, which can be considered as the analogue variable output of a traditional ANN, research in the field of neurobiology has indicated that high-speed processing cannot be performed solely through a rate-based scheme. For example, humans can perform an image recognition task requiring no more than 10 ms of processing time per neuron through the successive layers (going from the retina to the temporal lobe). This time window is too short for a rate-based encoding. The precise spike timings in a small set of spiking neurons also have a higher information coding capacity compared with a rate-based approach.

      The most prominent spiking neuron model is the leaky integrate-and-fire model. In the integrate-and-fire model, the momentary activation level (modeled as a differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher or lower, until the state eventually either decays or—if the firing threshold is reached—the neuron fires. After firing, the state variable is reset to a lower value.

      Various decoding methods exist for interpreting the outgoing spike train as a real-value number, relying on either the frequency of spikes (rate-code), the time-to-first-spike after stimulation, or the interval between spikes.
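      To make the leaky integrate-and-fire dynamics and the decoding schemes described above concrete, the following is a minimal sketch in Python using only NumPy. The parameter values (threshold, reset, time constant), the forward-Euler integration, and the function names are illustrative assumptions, not taken from the quoted text or from any particular SNN library.

<pre>
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_reset=0.0, v_threshold=1.0):
    """Leaky integrate-and-fire neuron driven by an input current trace.

    Forward-Euler integration of dV/dt = (-(V - v_rest) + I) / tau.
    Returns the membrane-potential trace and the spike times (seconds).
    """
    v = v_rest
    v_trace, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leak toward the resting potential, plus the incoming drive.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_threshold:          # threshold crossing: the neuron fires
            spike_times.append(step * dt)
            v = v_reset               # state is reset to a lower value
        v_trace.append(v)
    return np.array(v_trace), np.array(spike_times)

def rate_decode(spike_times, duration):
    """Rate code: mean firing frequency over the whole window (Hz)."""
    return len(spike_times) / duration

def time_to_first_spike(spike_times):
    """Latency code: time of the first spike after stimulus onset."""
    return spike_times[0] if len(spike_times) else None

def mean_interspike_interval(spike_times):
    """Interval code: average time between consecutive spikes."""
    return float(np.mean(np.diff(spike_times))) if len(spike_times) > 1 else None

if __name__ == "__main__":
    duration, dt = 0.5, 1e-3
    steps = int(duration / dt)
    current = np.full(steps, 1.5)     # constant supra-threshold drive
    _, spikes = simulate_lif(current, dt=dt)
    print("rate (Hz):", rate_decode(spikes, duration))
    print("first spike (s):", time_to_first_spike(spikes))
    print("mean ISI (s):", mean_interspike_interval(spikes))
</pre>

      With the constant drive used here the three decodings are largely redundant; the temporal codes become more informative when the input varies over time.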

2018

  • (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Spiking_neural_network Retrieved:2018-9-3.
    • Spiking neural networks (SNNs) are artificial neural network models that more closely mimic natural neural networks. In addition to neuronal and synaptic state, SNNs also incorporate the concept of time into their operating model. The idea is that neurons in the SNN do not fire at each propagation cycle (as it happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific value. When a neuron fires, it generates a signal which travels to other neurons which, in turn, increase or decrease their potentials in accordance with this signal.

      In the context of spiking neural networks, the current activation level (modeled as some differential equation) is normally considered to be the neuron's state, with incoming spikes pushing this value higher until the neuron either fires or the value decays over time. Various coding methods exist for interpreting the outgoing spike train as a real-value number, relying on either the frequency of spikes or the timing between spikes to encode information.
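      As a complement to the single-neuron sketch above, the fragment below illustrates the propagation described in the quote: spikes arriving from presynaptic neurons push a postsynaptic neuron's potential up (positive, excitatory weight) or down (negative, inhibitory weight) until it either reaches threshold and fires or leaks back toward rest. This is again a minimal Python/NumPy sketch; the function name, weights, and spike probabilities are invented for illustration.

<pre>
import numpy as np

def postsynaptic_lif(pre_spikes, weights, steps, dt=1e-3, tau=20e-3,
                     v_threshold=1.0, v_reset=0.0):
    """One LIF neuron receiving spike trains from several presynaptic neurons.

    pre_spikes: boolean array of shape (n_pre, steps); True where a
                presynaptic neuron fires on that time step.
    weights:    synaptic weights; positive values push the potential up
                (excitatory), negative values push it down (inhibitory).
    """
    v = 0.0
    post_spike_steps = []
    for t in range(steps):
        v += -dt * v / tau                             # leak toward rest (0.0)
        v += float(np.dot(weights, pre_spikes[:, t]))  # synaptic input
        if v >= v_threshold:                           # fire and reset
            post_spike_steps.append(t)
            v = v_reset
    return post_spike_steps

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    steps = 500
    # Two presynaptic neurons firing randomly: one excitatory, one inhibitory.
    pre = rng.random((2, steps)) < 0.1
    weights = np.array([0.6, -0.3])
    print("postsynaptic spikes at steps:", postsynaptic_lif(pre, weights, steps))
</pre>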
