ADALINE Training System
An ADAptive LInear NEuron (ADALINE) Training System is a Single-Layer ANN Training System based on the ADALINE Neural Network.
- AKA: Adaline Training System.
- Context:
- It can solve an ADALINE Training Task by implementing an ADALINE Training Algorithm.
- It was first developed by Widrow (1960).
- …
- Example(s):
- Adaline Classifier (Raschka, 2015) ⇒ `class AdalineGD(object)` [1]
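The `AdalineGD` classifier above can be sketched as follows. This is a minimal reconstruction in the spirit of Raschka (2015), not the book's verbatim code; the parameter names `eta` and `n_iter` and the attributes `w_` and `cost_` follow that chapter's conventions.

```python
import numpy as np

class AdalineGD:
    """ADAptive LInear NEuron classifier trained with batch gradient descent."""

    def __init__(self, eta=0.01, n_iter=50):
        self.eta = eta        # learning rate (0.0 < eta <= 1.0)
        self.n_iter = n_iter  # passes over the training set

    def fit(self, X, y):
        self.w_ = np.zeros(1 + X.shape[1])  # weights; w_[0] is the bias
        self.cost_ = []
        for _ in range(self.n_iter):
            output = self.net_input(X)       # linear activation: phi(z) = z
            errors = y - output              # continuous-valued error
            # Widrow-Hoff (Adaline) batch update on the whole training set
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            self.cost_.append((errors ** 2).sum() / 2.0)  # sum-of-squares cost
        return self

    def net_input(self, X):
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def predict(self, X):
        # quantizer: threshold the continuous output to class labels
        return np.where(self.net_input(X) >= 0.0, 1, -1)

# toy usage on a linearly separable 1-D problem
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
ada = AdalineGD(eta=0.05, n_iter=200).fit(X, y)
```

After training, `ada.predict(X)` recovers the class labels and `ada.cost_` records the declining sum-of-squares cost per epoch; for real data the features should be standardized so that gradient descent converges with a practical learning rate.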
- Counter-Example(s):
- a Perceptron Training System.
- See: Adaline Learning Rule, Neural Network Learning Rate, Artificial Neural Network, Neural Network Layer, Artificial Neuron, Neuron Activation Function, Neural Network Topology.
References
2015
- (Raschka, 2015) ⇒ Raschka, S. (2015). “Chapter 2: Training Machine Learning Algorithms for Classification”. In: “Python Machine Learning: Unlock Deeper Insights Into Machine Learning with this Vital Guide to Cutting-edge Predictive Analytics”. Community Experience Distilled Series. Packt Publishing Ltd. ISBN:9781783555130, pp. 17-47.
- QUOTE: The key difference between the Adaline rule (also known as the Widrow-Hoff rule) and Rosenblatt's perceptron is that the weights are updated based on a linear activation function rather than a unit step function like in the perceptron. In Adaline, this linear activation function [math]\displaystyle{ \varphi(z) }[/math] is simply the identity function of the net input so that [math]\displaystyle{ \varphi(w^Tx)=w^Tx }[/math].
While the linear activation function is used for learning the weights, a quantizer, which is similar to the unit step function that we have seen before, can then be used to predict the class labels, as illustrated in the following figure:
If we compare the preceding figure to the illustration of the perceptron algorithm that we saw earlier, the difference is that we now use the continuous valued output from the linear activation function to compute the model error and update the weights, rather than the binary class labels.
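The contrast described in the quote can be shown in a single update step. This sketch uses hypothetical weights, a made-up sample, and a learning rate chosen only for illustration; it is not code from the book.

```python
import numpy as np

# Hypothetical weights and sample, for illustration only.
w = np.array([0.5, -0.3, 0.8])   # w[0] is the bias weight
x = np.array([1.0, 2.0])         # one training sample
eta, y = 0.1, 1                  # learning rate and true class label

z = w[0] + np.dot(w[1:], x)      # net input
phi = z                          # Adaline: linear activation, phi(z) = z
label = 1 if phi >= 0.0 else -1  # quantizer, used only to predict labels

# The Widrow-Hoff update uses the continuous error (y - phi);
# the perceptron would instead use the thresholded label.
error = y - phi
w[1:] += eta * error * x
w[0] += eta * error
```

Note that the weights move even when the quantized label is already correct (here `label == y` but `phi != y`), which is exactly the difference from the perceptron rule highlighted in the quote.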
1960
- (Widrow, 1960) ⇒ Bernard Widrow. (1960). “Adaptive "Adaline" Neuron Using Chemical "Memistors".” Technical Report No. 1553-2. Stanford Electronics Labs. Stanford, CA.
- SUMMARY: A new circuit element called a “memistor” (a resistor with memory) has been devised that will have general use in adaptive circuits. With such an element it is possible to get an electronically variable gain control along with the memory required for storage of the system's experiences or training. Experiences are stored in their most compact form, and in a form that is directly usable from the standpoint of system functioning. The element consists of a resistive graphite substrate immersed in a plating bath. The resistance is reversibly controlled by electroplating.
The memistor element has been applied to the realization of adaptive neurons. Memistor circuits for the "Adaline" neuron, which incorporate its simple adaptation procedure, have been developed. It has been possible to train these neurons so that this training will remain effective for weeks. Steps have been taken toward the miniaturization of the memistor element. The memistor promises to be a cheap, reliable, mass-producible, adaptive-system element.
1943
- (McCulloch & Pitts, 1943) ⇒ McCulloch, W. S., & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4), 115-133.
- ABSTRACT: Because of the “all-or-none” character of nervous activity, neural events and the relations among them can be treated by means of propositional logic. It is found that the behavior of every net can be described in these terms, with the addition of more complicated logical means for nets containing circles; and that for any logical expression satisfying certain conditions, one can find a net behaving in the fashion it describes. It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time. Various applications of the calculus are discussed.