Neural Network
A Neural Network is a network of neurons interconnected by links over which signals are transmitted.
- AKA: Neural Net.
- Context:
- It can range from being a Biological Neural Network to being an Artificial Neural Network.
- It can be represented by a Neural Network Model.
- It can include mechanisms for adjusting Synaptic Weights or Connection Strength to optimize performance or behavior (a minimal weight-update sketch follows the See list below).
- ...
- Example(s):
- A Biological Neural Network, such as the network of interconnected neurons in an animal brain.
- An Artificial Neural Network, such as a Feedforward Neural Network or a Recurrent Neural Network.
- Counter-Example(s):
- A Circulatory System, which consists of blood vessels and the heart but lacks neurons or synaptic connections.
- A Gene Regulatory Network, which represents interactions between genes rather than neurons.
- A Logic Circuit, which processes information using Boolean logic gates instead of neuronal connections.
- A Supply Chain Network, which involves the flow of goods and services but lacks any neural processing.
- A Social Network, which models relationships between people, not neurons or artificial nodes.
- A Communication Network, such as the Internet, which involves data packets and routers rather than neural signals.
- A Chemical Reaction Network, which describes chemical species interactions without neural structures.
- ...
- See: Neuroscience, Classifier, Deep Learning, Machine Learning, Unsupervised Learning, Supervised Learning.
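The weight-adjustment mechanism noted under Context can be made concrete with a small sketch. The snippet below is a minimal illustration (not a definitive implementation) that applies the classic perceptron learning rule of (Rosenblatt, 1958) to learn the logical AND function; the learning rate, training data, and epoch count are illustrative assumptions.

```python
import numpy as np

def perceptron_step(weights, bias, x, target, lr=0.1):
    """Apply one perceptron update for a single training example."""
    prediction = 1 if np.dot(weights, x) + bias > 0 else 0
    error = target - prediction          # zero when the prediction is correct
    weights = weights + lr * error * x   # strengthen or weaken connections
    bias = bias + lr * error
    return weights, bias

# Learn the logical AND function from four labeled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = np.zeros(2), 0.0
for _ in range(10):                      # a few passes over the data
    for x_i, t_i in zip(X, y):
        w, b = perceptron_step(w, b, x_i, t_i)
print(w, b)  # w·x + b > 0 now holds only for the input (1, 1)
```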
References
2023
- (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Artificial_neural_network Retrieved:2023-10-23.
- Artificial neural networks (ANNs, also shortened to neural networks (NNs) or neural nets) are a branch of machine learning models that are built using principles of neuronal organization discovered by connectionism in the biological neural networks constituting animal brains.
2017
- (Miikkulainen, 2017) ⇒ Miikkulainen R. (2017) "Topology of a Neural Network". In: Sammut, C., Webb, G.I. (eds) "Encyclopedia of Machine Learning and Data Mining". Springer, Boston, MA
- QUOTE: Topology of a neural network refers to the way the neurons are connected, and it is an important factor in how the network functions and learns. A common topology in unsupervised learning is a direct mapping of inputs to a collection of units that represents categories (e.g., Self-Organizing Maps). The most common topology in supervised learning is the fully connected, three-layer, feedforward network (see Backpropagation and Radial Basis Function Networks): All input values to the network are connected to all neurons in the hidden layer (hidden because they are not visible in the input or output), the outputs of the hidden neurons are connected to all neurons in the output layer, and the activations of the output neurons constitute the output of the whole network. Such networks are popular partly because they are known theoretically to be universal function approximators (with, e.g., a sigmoid or Gaussian nonlinearity in the hidden layer neurons), although networks with more layers may be easier to train in practice (e.g., Cascade-Correlation). In particular, deep learning architectures (see Deep Learning) utilize multiple hidden layers to form a hierarchy of gradually more structured representations that support a supervised task on top. Layered networks can be extended to processing sequential input and/or output by saving a copy of the hidden layer activations and using it as additional input to the hidden layer in the next time step (see Simple Recurrent Network). Fully recurrent topologies, where each neuron is connected to all other neurons (and possibly to itself), can also be used to model time-varying behavior, although such networks may be unstable and difficult to train (e.g., with backpropagation; but see also Boltzmann Machines). Modular topologies, where different parts of the networks perform distinctly different tasks, can improve stability and can also be used to model high-level behavior (e.g., Echo-State Machines and Adaptive Resonance Theory). Whatever the topology, in most cases, learning involves modifying the Weight on the network connections. However, arbitrary network topologies are possible as well and can be constructed as part of the learning (e.g., with backpropagation or Neuroevolution) to enhance feature selection, recurrent memory, abstraction, or generalization.
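The fully connected, three-layer feedforward topology described in this quote can be sketched in a few lines. The snippet below is a minimal illustration, not a definitive implementation; the layer sizes, random weights, and sigmoid nonlinearity are assumptions chosen for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 4, 8, 3                # illustrative layer sizes

W_hidden = rng.normal(size=(n_hidden, n_in))   # every input feeds every hidden neuron
W_out = rng.normal(size=(n_out, n_hidden))     # every hidden neuron feeds every output

def forward(x):
    """One forward pass through the three-layer network."""
    h = sigmoid(W_hidden @ x)                  # hidden activations (not externally visible)
    return sigmoid(W_out @ h)                  # output activations = the network's output

x = rng.normal(size=n_in)                      # an arbitrary input vector
print(forward(x))                              # three output values in (0, 1)
```

Extending this toward the Simple Recurrent Network mentioned in the quote would amount to saving `h` from the previous time step and feeding it back as additional input to the hidden layer.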
2009
- (WordNet, 2009) ⇒ http://wordnetweb.princeton.edu/perl/webwn?s=neural%20network
- S: (n) neural network, neural net (computer architecture in which processors are connected in a manner suggestive of connections between neurons; can learn by trial and error)
- S: (n) neural network, neural net (any network of neurons or nuclei that function together to perform some function in the body)
- (Wikipedia, 2009) ⇒ http://en.wikipedia.org/wiki/Neural_network
- Traditionally, the term neural network had been used to refer to a network or circuit of biological neurons. The modern usage of the term often refers to artificial neural networks, which are composed of artificial neurons or nodes. Thus the term has two distinct usages:
- Biological neural networks are made up of real biological neurons that are connected or functionally related in the peripheral nervous system or the central nervous system. In the field of neuroscience, they are often identified as groups of neurons that perform a specific physiological function in laboratory analysis.
- Artificial neural networks are made up of interconnecting artificial neurons (programming constructs that mimic the properties of biological neurons). Artificial neural networks may either be used to gain an understanding of biological neural networks, or for solving artificial intelligence problems without necessarily creating a model of a real biological system. The real, biological nervous system is highly complex and includes some features that may seem superfluous based on an understanding of artificial networks.
- This article focuses on the relationship between the two concepts; for detailed coverage of the two different concepts refer to the separate articles: Biological neural network and Artificial neural network.
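The "artificial neurons (programming constructs that mimic the properties of biological neurons)" described above reduce, in the simplest case, to a weighted sum of inputs passed through a nonlinearity. The snippet below is a minimal sketch under that assumption; the weights, bias, and inputs are illustrative.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Weighted sum of input signals followed by a sigmoid activation."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # firing strength in (0, 1)

print(artificial_neuron([0.5, -1.0, 0.25], [0.8, 0.2, -0.5], bias=0.1))
```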
1996
- (Smith, 1996) ⇒ Leslie Smith. (1996). “An Introduction to Neural Networks.”
1958
- (Rosenblatt, 1958) ⇒ Frank Rosenblatt. (1958). “The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain.” Psychological Review, 65(6).