Neocognitron
A Neocognitron Network is a Deep Artificial Neural Network that is composed of S-cells and C-cells.
- Context:
- It can also be composed of V-cells.
- It ranges from being an Unsupervised Neocognitron to being a Supervised Neocognitron.
- It inspired the creation of Convolutional Neural Networks.
- It can be simulated and produced by a Neocognitron Training System.
- Example(s):
- Counter-Example(s):
- See: Self-Organizing Map (SOM), Selective Attention, Artificial Neural Network, Handwriting Recognition, Pattern Recognition, Convolutional Neural Network, Simple Cell, Complex Cell, LeNet, Scale-Invariant Feature Transform, Biological Object Recognition, Error Back-Propagation, Perceptron, Supervised Machine Learning, Unsupervised Machine Learning, Visual Cortex.
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Neocognitron Retrieved:2018-7-22.
- The neocognitron is a hierarchical, multilayered artificial neural network proposed by Kunihiko Fukushima in the 1980s. It has been used for handwritten character recognition and other pattern recognition tasks, and served as the inspiration for convolutional neural networks. The neocognitron was inspired by the model proposed by Hubel & Wiesel in 1959. They found two types of cells in the visual primary cortex called simple cell and complex cell, and also proposed a cascading model of these two types of cells for use in pattern recognition tasks. The neocognitron is a natural extension of these cascading models. The neocognitron consists of multiple types of cells, the most important of which are called S-cells and C-cells. [1] The local features are extracted by S-cells, and these features' deformation, such as local shifts, are tolerated by C-cells. Local features in the input are integrated gradually and classified in the higher layers. [2] The idea of local feature integration is found in several other models, such as the LeNet model and the SIFT model. There are various kinds of neocognitron. [3] For example, some types of neocognitron can detect multiple patterns in the same input by using backward signals to achieve selective attention. [4]
2015
- (Liang & Hu, 2015) ⇒ Ming Liang, and Xiaolin Hu. (2015). “Recurrent Convolutional Neural Network for Object Recognition.” In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3367-3375.
- QUOTE: ... CNN is a type of artificial neural network, which originates from neuroscience dating back to the proposal of the first artificial neuron in 1943 [34]. In fact, CNN, as well as other hierarchical models including Neocognitron [13] and HMAX [38], is closely related to Hubel and Wiesel’s findings about simple cells and complex cells in the primary visual cortex (V1)[23, 22]. All of these models have purely feed-forward architectures, which can be viewed as crude approximations of the biological neural network in the brain. Anatomical evidences have shown that recurrent connections ubiquitously exist in the neocortex, and recurrent synapses typically outnumber feed-forward and top-down (or feedback) synapses [6]. ...
2007
- (Fukushima, 2007) ⇒ Kunihiko Fukushima (2007) "Neocognitron". Scholarpedia, 2(1):1717. doi:10.4249/scholarpedia.1717.
- QUOTE: The neocognitron, proposed by Fukushima (1980), is a hierarchical multilayered neural network capable of robust visual pattern recognition through learning (Fukushima, 1988; 2003).
Outline of the Neocognitron
Figure 1 shows a typical architecture of the network. The lowest stage is the input layer consisting of a two-dimensional array of cells, which correspond to photoreceptors of the retina. There are retinotopically ordered connections between cells of adjoining layers. Each cell receives input connections that lead from cells situated in a limited area on the preceding layer. Layers of “S-cells” and “C-cells” are arranged alternately in the hierarchical network. (In the network shown in Figure 1, a contrast-extracting layer is inserted between the input layer and the S-cell layer of the first stage).
Figure 1: A typical architecture of the neocognitron.
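The alternating S-cell/C-cell scheme described above can be sketched numerically. The following Python toy (NumPy only) shows S-cells extracting a local feature and C-cells absorbing small shifts of it; the threshold nonlinearity, max-pooling C-cells, and the hand-set vertical-bar template are illustrative assumptions, not Fukushima's exact equations:

```python
import numpy as np

def s_layer(x, template, theta=0.5):
    """S-cells: correlate the input with a feature template and rectify.

    A toy stand-in for Fukushima's S-cell response, not his exact rule.
    """
    h, w = template.shape
    out = np.zeros((x.shape[0] - h + 1, x.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * template)
    return np.maximum(out - theta, 0.0)

def c_layer(x, pool=2):
    """C-cells: pool over a local neighborhood so the feature's exact
    position no longer matters (max-pooling is an illustrative choice)."""
    h, w = x.shape[0] // pool, x.shape[1] // pool
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = x[i * pool:(i + 1) * pool,
                          j * pool:(j + 1) * pool].max()
    return out

# Toy "retina": a vertical bar in an 8x8 input
img = np.zeros((8, 8))
img[1:7, 3] = 1.0
vertical_bar = np.ones((3, 1))         # hand-set S-cell template

u_s = s_layer(img, vertical_bar)       # 6x8 S-cell plane: feature map
u_c = c_layer(u_s)                     # 3x4 C-cell plane: shift-tolerant

# Shifting the bar one pixel changes the S-cell plane, but after
# C-cell pooling the response pattern is unchanged:
u_c_shifted = c_layer(s_layer(np.roll(img, -1, axis=1), vertical_bar))
assert np.array_equal(u_c, u_c_shifted)
```

Stacking several such S/C pairs, with each stage seeing the pooled output of the previous one, gives the gradual integration of local features into higher-level patterns that the article describes.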
1988
- (Fukushima, 1988) ⇒ Kunihiko Fukushima (1988). "Neocognitron: A hierarchical neural network capable of visual pattern recognition." (PDF) Neural networks, 1(2), 119-130.
- QUOTE: The network has four stages of layers of S- and C-cells. The number of S- or C-cells in each layer is indicated in Figure 2. Layer Uc4 at the highest stage has ten cell-planes, each of which has only one C-cell. These ten C-cells correspond to ten numeral patterns from "0" to "9."
Figure 7 shows how the cells of different cell-planes are spatially interconnected. This figure, in which only one cell-plane is drawn for each layer, illustrates a one-dimensional cross-section of the connections between S- and C-cells. From this figure, we can read, for example, an S-cell of layer Us3 has 5 × 5 (excitatory) variable input connections from each cell-plane of layer Uc2. Since layer Uc2 has 19 cell-planes, the maximum possible number of the variable input connections to each S-cell of layer Us3 is 5 × 5 × 19. It is important to note, however, that all of these 5 × 5 × 19 variable connections are not necessarily reinforced by learning. On the contrary, most of them usually remain at the initial state of strength of zero even after finishing learning. Since the variable connections of strength of zero need not be actually wired in the network, the effective number of connections are far less than the value directly read from this figure.
Figure 7. One-dimensional view of interconnections between cells of different cell-planes. Only one cell-plane is drawn in each layer.
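The fan-in arithmetic in the quoted passage is easy to reproduce; the numbers below come directly from the quoted text:

```python
# Numbers from the quoted text: each S-cell of layer Us3 sees a 5x5
# connection area in each of the 19 cell-planes of layer Uc2.
kernel_area = 5 * 5
cell_planes_uc2 = 19
max_variable_connections = kernel_area * cell_planes_uc2
print(max_variable_connections)  # 475 potential connections per S-cell

# Most of these stay at strength zero after learning, so the effective
# fan-in (and thus the wiring actually needed) is far smaller.
```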
1980
- (Fukushima, 1980) ⇒ Kunihiko Fukushima (1980). "Neocognitron: A Self-organizing Neural Network Model for a Mechanism of Pattern Recognition Unaffected by Shift in Position." Biological Cybernetics, 36(4), 193-202. DOI: 10.1007/BF00344251.
- QUOTE: ... the neocognitron consists of a cascade connection of a number of modular structures preceded by an input layer [math]\displaystyle{ U_o }[/math]. Each of the modular structure is composed of two layers of cells connected in a cascade. The first layer of the module consists of “S-cells", which correspond to simple cells or lower order hypercomplex cells according to the classification of Hubel and Wiesel.
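In the 1980 formulation, an S-cell's response pits its excitatory inputs against a divisive inhibitory input supplied by a V-cell. Schematically (a paraphrase of the commonly presented form, not a verbatim reproduction: [math]\displaystyle{ a_l }[/math] are the modifiable excitatory connections from the preceding C-cell layer, [math]\displaystyle{ b_l }[/math] the inhibitory connection from the V-cell, [math]\displaystyle{ r_l }[/math] a selectivity parameter, and [math]\displaystyle{ \varphi[x]=\max(x,0) }[/math]):

[math]\displaystyle{ u_{Sl}(k, \boldsymbol{n}) = r_l \cdot \varphi\!\left[ \frac{1 + \sum_{\kappa}\sum_{\boldsymbol{\nu}} a_l(\kappa, \boldsymbol{\nu}, k)\, u_{Cl-1}(\kappa, \boldsymbol{n}+\boldsymbol{\nu})}{1 + \dfrac{r_l}{1+r_l}\, b_l(k)\, v_l(\boldsymbol{n})} - 1 \right] }[/math]

The cell fires only when the excitatory evidence for its feature sufficiently exceeds the inhibitory (average-activity) signal, which is what makes each S-cell selective for one local pattern.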