Supervised Artificial Neural Network
A Supervised Artificial Neural Network is an artificial neural network that is trained by a supervised learning system, i.e., whose parameters are fitted to labeled input/output examples (see the sketch after the list below).
- Example(s):
- a fully connected Feedforward Neural Network (such as a Multilayer Perceptron) trained with Backpropagation.
- Counter-Example(s):
- an Unsupervised ANN, such as a Self-Organizing Map or a Generative Adversarial Network.
- See: Deep Learning Artificial Neural Network, Neural Network, Adaptive Resonance Theory, Backpropagation, Synaptic Plasticity, Hebb Rule, Spike Timing Dependent Plasticity, Cascade Correlation, Competitive Learning, Evolving Neural Networks, Reservoir Computing.
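The following minimal sketch illustrates the supervised setting named in the definition above: a small neural network fitted to labeled (input, label) pairs. The use of scikit-learn's MLPClassifier, the toy XOR data, and the hyperparameters are illustrative assumptions, not part of this page.

```python
# Illustrative sketch only: the XOR data, layer size, and solver below are
# assumptions chosen for brevity, not taken from this page.
from sklearn.neural_network import MLPClassifier
import numpy as np

# Labeled examples: each input vector comes with a target label,
# which is what makes the training supervised.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# A small fully connected network with one hidden layer of logistic units.
clf = MLPClassifier(hidden_layer_sizes=(4,), activation="logistic",
                    solver="lbfgs", max_iter=2000, random_state=1)
clf.fit(X, y)            # supervised training on (input, label) pairs
print(clf.predict(X))    # should ideally recover [0, 1, 1, 0]
```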
References
2017a
- (Miikkulainen, 2017) ⇒ Miikkulainen, R. (2017). "Topology of a Neural Network." In: Sammut, C., Webb, G.I. (eds.), "Encyclopedia of Machine Learning and Data Mining." Springer, Boston, MA.
- ABSTRACT: Topology of a neural network refers to the way the neurons are connected, and it is an important factor in how the network functions and learns. A common topology in unsupervised learning is a direct mapping of inputs to a collection of units that represents categories (e.g., Self-Organizing Maps). The most common topology in supervised learning is the fully connected, three-layer, feedforward network (see Backpropagation and Radial Basis Function Networks): All input values to the network are connected to all neurons in the hidden layer (hidden because they are not visible in the input or output), the outputs of the hidden neurons are connected to all neurons in the output layer, and the activations of the output neurons constitute the output of the whole network. Such networks are popular partly because they are known theoretically to be universal function approximators (with, e.g., a sigmoid or Gaussian nonlinearity in the hidden layer neurons), although networks with more layers may be easier to train in practice (e.g., Cascade-Correlation).
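As a rough illustration of the topology the abstract describes, the sketch below builds a fully connected, three-layer, feedforward network with sigmoid hidden units and trains it by backpropagation. The XOR dataset, the 2-4-1 layer sizes, the learning rate, and the epoch count are assumptions chosen for brevity, not values taken from the source.

```python
# Minimal sketch (assumed setup: XOR toy data, 2-4-1 layer sizes, learning
# rate 0.5, sigmoid units) of the fully connected, three-layer, feedforward
# topology described above, trained by backpropagation on labeled pairs.
import numpy as np

rng = np.random.default_rng(0)

# Labeled training data (supervised setting): inputs X and targets y (XOR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fully connected weights: every input feeds every hidden unit,
# and every hidden unit feeds every output unit.
W1 = rng.normal(size=(2, 4))   # input -> hidden
b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1))   # hidden -> output
b2 = np.zeros((1, 1))
lr = 0.5

for epoch in range(5000):
    # Forward pass through the three layers.
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output

    # Backpropagation of the squared-error gradient.
    err_out = (out - y) * out * (1 - out)        # delta at the output layer
    err_hid = (err_out @ W2.T) * h * (1 - h)     # delta at the hidden layer

    W2 -= lr * h.T @ err_out
    b2 -= lr * err_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ err_hid
    b1 -= lr * err_hid.sum(axis=0, keepdims=True)

# The trained network's outputs should approach [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```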