Artificial Neural Network (ANN)
An Artificial Neural Network (ANN) is a neural network composed of artificial neurons and artificial neural connections.
- AKA: Connectionist System.
- Context:
- It can (often) be represented by an Artificial Neural Network Model.
- ...
- It can range from being an Untrained Neural Network to being a Trained Neural Network.
- It can range from being a Single Layer Neural Network, to being a Single Hidden-Layer Neural Network (2-Layer Neural Network), to being a Multi Hidden-Layer Neural Network (N-Layer Neural Network).
- It can range from being a Feedforward Neural Network to being a Recurrent Neural Network.
- It can range from being an Unsupervised Artificial Neural Network to being a Supervised Artificial Neural Network.
- It can range from being a Shallow Neural Network to being a Deep Neural Network.
- It can range from being a Unidirectional Artificial Neural Network to being a Bidirectional Artificial Neural Network.
- It can range from being a Small NNet to being a Medium NNet to being a Large NNet.
- It can be produced by a Neural Network Training System (that implements a neural network training algorithm to solve a neural network training task).
- It can be graphically represented by Neural Network Topology.
- …
- Example(s):
- a Spiking Neural Network,
- an Unsupervised Artificial Neural Network such as: a Self-Organizing Map, or a Generative Adversarial Network.
- a Supervised Artificial Neural Network such as: a Feed-Forward NNet, or a Boltzmann Machine.
- a Deep Neural Network such as: a Convolutional Neural Network, a Recurrent Neural Network, a Recursive Neural Network, a Deep Belief Neural Network, a Transformer Neural Network.
- a Fuzzy Neural Network such as: a Cooperative Fuzzy Neural Network,
- a Dynamic Memory Network
- a Language Model Neural Network.
- a Graph Data Neural Network.
- a Feedforward Neural Network.
- a Restricted Boltzmann Machine.
- a Recursive Neural Network.
- a Recurrent Neural Network.
- a Convolutional Neural Network.
- a Dynamic Coattention Network.
- a Deep Belief Network.
- a Simple Recurrent Network.
- a Self-Organizing Map.
- …
- Counter-Example(s):
- See: Fully-Connected Neural Network Layer, Neural Network Pattern, Model Instance, Adaptive Resonance Theory, Backpropagation, Synaptic Plasticity, Hebb Rule, Spike Timing Dependent Plasticity, Cascade Correlation, Competitive Learning, Evolving Neural Networks, Reservoir Computing.
References
2018
- (Brilliant, 2018) ⇒ Artificial Neural Network. Brilliant.org. Retrieved 17:49, September 2, 2018, from https://brilliant.org/wiki/artificial-neural-network/
- QUOTE: Artificial neural networks (ANNs) are computational models inspired by the human brain. They are comprised of a large number of connected nodes, each of which performs a simple mathematical operation. Each node's output is determined by this operation, as well as a set of parameters that are specific to that node. By connecting these nodes together and carefully setting their parameters, very complex functions can be learned and calculated.
Artificial neural networks are responsible for many of the recent advances in artificial intelligence, including voice recognition, image recognition, and robotics. For example, ANNs can perform image recognition on hand-drawn digits.
A simple artificial neural network. The first column of circles represents the ANN's inputs, the middle column represents computational units that act on that input, and the third column represents the ANN's output. Lines connecting circles indicate dependencies.
2017a
- (Sammut & Webb, 2017) ⇒ Sammut, C., Webb, G.I. (2017) "Artificial Neural Networks". In: "Encyclopedia of Machine Learning and Data Mining". Springer, Boston, MA.
- QUOTE: An artificial neural network (ANN) is a computational model based on biological neural networks. It consists of an interconnected group of artificial neurons and processes information using a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network during the learning phase.
2017b
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Artificial_neural_network Retrieved:2017-12-17.
- Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve performance on) tasks by considering examples, generally without task-specific programming.
For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the results to identify cats in other images. They do this without any a priori knowledge about cats, e.g., that they have fur, tails, whiskers and cat-like faces. Instead, they evolve their own set of relevant characteristics from the learning material that they process.
An ANN is based on a collection of connected units or nodes called artificial neurons (analogous to biological neurons in an animal brain). Each connection (synapse) between neurons can transmit a signal from one to another. The receiving (postsynaptic) neuron can process the signal(s) and then signal neurons connected to it.
In common ANN implementations, the synapse signal is a real number, and the output of each neuron is calculated by a non-linear function of the sum of its inputs. Neurons and synapses typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal that it sends across the synapse. Neurons may have a threshold such that only if the aggregate signal crosses that threshold is the signal sent.
Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.
The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology.
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games, and medical diagnosis.
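The neuron computation described in the quote above (a non-linear function of the weighted sum of inputs, optionally gated by a threshold) can be sketched as follows; the sigmoid activation and the function names are illustrative assumptions, not part of any quoted source:

```python
import math

def neuron_output(inputs, weights, bias, threshold=None):
    """One artificial neuron: a non-linear function (here the logistic
    sigmoid) of the weighted sum of its inputs, as described above."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    y = 1.0 / (1.0 + math.exp(-z))              # non-linear activation
    if threshold is not None:
        # optional firing threshold: signal only if activation crosses it
        return 1.0 if y >= threshold else 0.0
    return y

# weighted sum: 0.5*0.4 + (-1.0)*0.3 + 2.0*0.1 = 0.1, then sigmoid
y = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.0)
```

Learning, in this picture, amounts to adjusting the weights and bias so that the network's outputs match labeled examples.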
2015a
- (Goldberg, 2015) ⇒ Yoav Goldberg. (2015). “A Primer on Neural Network Models for Natural Language Processing.” In: Technical Report Journal, October 5, 2015.
- ABSTRACT: Over the past few years, neural networks have re-emerged as powerful machine-learning models, yielding state-of-the-art results in fields such as image recognition and speech processing. More recently, neural network models started to be applied also to textual natural language signals, again with very promising results. This tutorial surveys neural network models from the perspective of natural language processing research, in an attempt to bring natural-language researchers up to speed with the neural techniques. The tutorial covers input encoding for natural language tasks, feed-forward networks, convolutional networks, recurrent networks and recursive networks, as well as the computation graph abstraction for automatic gradient computation.
2015b
- (Voosen,2015) ⇒ Paul Voosen (2015). "The Believers: The hidden story behind the code that runs our lives" In: The Chronicle Review - The Chronicle of Higher Education.
- QUOTE: It was a stunning result. These neural nets were little different from what existed in the 1980s. This was simple supervised learning. It didn’t even require Hinton’s 2006 breakthrough. It just turned out that no other algorithm scaled up like these nets. “Retrospectively, it was just a question of the amount of data and the amount of computations," Hinton says.
2015c
- (Garling,2015) ⇒ Caleb Garling (2015). "Google Brain’s Co-inventor Tells Why He’s Building Chinese Neural Networks. Andrew Ng on the state of deep learning at Baidu" In: Backchannel.
2009
- (Wilson, 2009) ⇒ Bill Wilson. (1998–2012). "Neural Network". In: The Machine Learning Dictionary. Retrieved: 2009.
- An artificial neural network is a collection of simple artificial neurons connected by directed weighted connections. When the system is set running, the activation levels of the input units are clamped to desired values. After this, the activation is propagated, at each time step, along the directed weighted connections to other units. The activations of non-input neurons are computed using each neuron's activation function. The system might either settle into a stable state after a number of time steps, or, in the case of a feedforward network, the activation might flow through to output units.
Learning might or might not occur, depending on the type of neural network and the mode of operation of the network.
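Wilson's description of clamped input units and time-stepped activation propagation can be sketched roughly as follows; the synchronous update rule, the sigmoid activation, and the weight layout are assumptions for illustration only:

```python
import math

def run_network(weights, clamped, steps=50, tol=1e-6):
    """Time-stepped activation propagation: input units are clamped to
    fixed values; every other unit updates each step from the weighted
    activations flowing in along directed connections.
    weights[j][i] is the weight on the connection from unit i to unit j;
    clamped maps input-unit indices to their fixed activation levels."""
    n = len(weights)
    act = [clamped.get(i, 0.0) for i in range(n)]
    for _ in range(steps):
        new = list(act)
        for j in range(n):
            if j in clamped:
                continue  # clamped input units never change
            z = sum(weights[j][i] * act[i] for i in range(n))
            new[j] = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
        if max(abs(a - b) for a, b in zip(act, new)) < tol:
            break  # network has settled into a stable state
        act = new
    return act

# 3 units: unit 0 is a clamped input; chain 0 -> 1 -> 2 (feedforward)
W = [[0.0, 0.0, 0.0],
     [2.0, 0.0, 0.0],
     [0.0, 1.5, 0.0]]
final = run_network(W, clamped={0: 1.0})
```

With a feedforward connection pattern, as here, the activation simply flows through to the output unit; with feedback connections the same loop may instead settle into a stable state, as the quote notes.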
2005
- (Golda, 2005) ⇒ Adam Golda (2005). "Introduction to neural networks"
- QUOTE: There are different types of neural networks, which can be distinguished on the basis of their structure and directions of signal flow. Each kind of neural network has its own method of training. Generally, neural networks may be differentiated as follows:
- Feedforward neural networks, whose typical example is the one-layer perceptron (see figure of Single-layer perceptron), consist of neurons set in layers. The information flow has one direction. Neurons from a layer are connected only with the neurons from the preceding layer. Multi-layer networks usually consist of input, hidden (one or more), and output layers. Such a system may be treated as a non-linear function approximation block: [math]\displaystyle{ y = f(u) }[/math].
- Recurrent neural networks. Such networks have feedback loops (at least one) in which the output signals of a layer are connected to its inputs. This causes dynamic effects during network operation. The input signals of such a layer consist of the input and output states (from the previous step) of that layer. The figure below depicts the structure of a recurrent network.
- Cellular networks. In this type of neural network, neurons are arranged in a lattice. The connections (usually non-linear) may appear between the closest neurons. A typical example of such networks is the Kohonen Self-Organising Map.
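The feedforward view of a network as a non-linear function approximation block [math]\displaystyle{ y = f(u) }[/math] can be sketched as below; the layer sizes, tanh activation, and weight values are illustrative assumptions:

```python
import math

def mlp_forward(u, layers):
    """Forward pass through a feedforward network treated as a
    non-linear function y = f(u). `layers` is a list of
    (weight_matrix, bias_vector) pairs; each layer applies a
    weighted sum followed by a tanh non-linearity."""
    a = u
    for W, b in layers:
        a = [math.tanh(sum(w * x for w, x in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

# input layer (2 inputs) -> hidden layer (2 units) -> output layer (1 unit)
hidden = ([[0.5, -0.2], [0.1, 0.4]], [0.0, 0.1])
output = ([[1.0, -1.0]], [0.0])
y = mlp_forward([1.0, 2.0], [hidden, output])
```

Signals flow in one direction only, each layer depending solely on the preceding one, which is exactly the restriction that distinguishes this class from the recurrent networks described above.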
2000
- (Valpola, 2000) ⇒ Harri Valpola. (2000). “Bayesian Ensemble Learning for Nonlinear Factor Analysis.” PhD Dissertation, Helsinki University of Technology.
- QUOTE: artificial neural network: A model which consists of simple building-blocks. The development of such models has been inspired by neurobiological findings. The building-blocks are termed neurons in analogy to biological brain.
1999
- (Zaiane, 1999) ⇒ Osmar Zaiane. (1999). “Glossary of Data Mining Terms.” University of Alberta, Computing Science CMPUT-690: Principles of Knowledge Discovery in Databases.
- QUOTE: Artificial Neural Networks: Non-linear predictive models that learn through training and resemble biological neural networks in structure.