Radial Basis Function Neural Network
A Radial Basis Function Neural Network is a single hidden-layer neural network that uses radial basis functions as activation functions.
- AKA: RBF Network, Radial Basis Network.
- Context:
- It can range from being a Single-layer Radial Basis Function Neural Network to being a Multi-Layer Radial Basis Function Neural Network.
- It can be produced by a Radial Basis Function Network Training System (that implements a Radial Basis Function Neural Network Training Algorithm).
- Example(s):
- Counter-Example(s):
- a Recurrent NNet.
- a Hopfield NNet.
- a (sigmoid-activation) Feed-Forward NNet, such as a Multi-Layer Perceptron.
- See: Linear Combination, Function Approximation, Regularization, Support Vector Machines.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/radial_basis_function_network Retrieved:2015-1-18.
- In the field of mathematical modeling, a radial basis function network is an artificial neural network that uses radial basis functions as activation functions. The output of the network is a linear combination of radial basis functions of the inputs and neuron parameters. Radial basis function networks have many uses, including function approximation, time series prediction, classification, and system control. They were first formulated in a 1988 paper by Broomhead and Lowe, both researchers at the Royal Signals and Radar Establishment.
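- In symbols, this linear combination can be written as follows (the notation N, a_i, c_i, β, and the Gaussian choice of basis function are the customary conventions, not taken verbatim from the quoted passage):
$$\varphi(\mathbf{x}) \;=\; \sum_{i=1}^{N} a_i \, \rho\!\left(\lVert \mathbf{x} - \mathbf{c}_i \rVert\right), \qquad \text{e.g. } \rho(r) = e^{-\beta r^{2}},$$
where the c_i are the neuron centers, the a_i are the output-layer weights, and ρ is the radial basis function applied to the distance between the input x and each center.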
2014
- http://reference.wolfram.com/applications/neuralnetworks/NeuralNetworkTheory/2.5.2.html
- QUOTE: After the FF networks, the radial basis function (RBF) network comprises one of the most used network models.
Figure 2.7 illustrates an RBF network with inputs x_1, …, x_n and output ŷ. The arrows in the figure symbolize parameters in the network. The RBF network consists of one hidden layer of basis functions, or neurons. At the input of each neuron, the distance between the neuron center and the input vector is calculated. The output of the neuron is then formed by applying the basis function to this distance. The RBF network output is formed by a weighted sum of the neuron outputs and the unity bias shown.
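The following is a minimal sketch of the forward pass described in this quote; the Gaussian basis function, the width parameter beta, and the toy centers/weights are illustrative assumptions, not taken from the quoted source.

```python
import numpy as np

def rbf_forward(x, centers, weights, bias, beta=1.0):
    """Forward pass of a single-hidden-layer RBF network (illustrative sketch)."""
    # Distance between each neuron center and the input vector.
    dists = np.linalg.norm(centers - x, axis=1)
    # Apply the basis function (assumed Gaussian here) to each distance.
    hidden = np.exp(-beta * dists ** 2)
    # Weighted sum of the neuron outputs plus the unity-bias term.
    return weights @ hidden + bias

# Toy usage with made-up parameters.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])   # hidden-neuron centers
weights = np.array([0.5, -0.3])                # output-layer weights
print(rbf_forward(np.array([0.5, 0.5]), centers, weights, bias=0.1))
```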
2013
- (Kaushik, 2013) ⇒ Saroj Kaushik. (2013). “Artificial Neural Network - Lecture Module 22”. Course Material
- QUOTE: A function is said to be a radial basis function (RBF) if its output depends on the distance of the input from a given stored vector.
- The RBF neural network has an input layer, a hidden layer and an output layer.
- In such RBF networks, the hidden layer uses neurons with RBFs as activation functions.
- The outputs of all these hidden neurons are combined linearly at the output node.
- These networks have a wide variety of applications such as
- function approximation,
- time series prediction,
- control and regression,
- complex (non-linear) pattern classification tasks.
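The structure Kaushik describes (an RBF hidden layer whose outputs are combined linearly at the output node) suggests a simple training scheme for the function-approximation use case: pick centers from the training data and solve for the output-layer weights by least squares. The sketch below follows that assumed scheme; it is not Kaushik's specific procedure, and the Gaussian basis, width beta, and center-selection rule are illustrative choices.

```python
import numpy as np

def design_matrix(X, centers, beta):
    """Gaussian RBF activations of each sample w.r.t. each center, plus a unity-bias column."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.hstack([np.exp(-beta * dists ** 2), np.ones((len(X), 1))])

def fit_rbf(X, y, centers, beta=1.0):
    """Solve for the output-layer weights (and bias weight) by linear least squares."""
    w, *_ = np.linalg.lstsq(design_matrix(X, centers, beta), y, rcond=None)
    return w

def predict_rbf(X, centers, w, beta=1.0):
    return design_matrix(X, centers, beta) @ w

# Toy function-approximation example: fit y = sin(x) on [0, 2*pi].
X = np.linspace(0.0, 2.0 * np.pi, 50)[:, None]
y = np.sin(X).ravel()
centers = X[::5]                      # every 5th training point used as a center
w = fit_rbf(X, y, centers, beta=2.0)
residual = np.max(np.abs(predict_rbf(X, centers, w, beta=2.0) - y))
print(residual)                       # should be small for this smooth target
```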
2011
- (Buhmann, 2011) ⇒ M.D. Buhmann. (2011). “Radial Basis Function Networks.” In: (Sammut & Webb, 2011) p.823
2000
- (Evgeniou et al., 2000) ⇒ Theodoros Evgeniou, Massimiliano Pontil, and Tomaso Poggio. (2000). “Regularization Networks and Support Vector Machines.” In: Advances in Computational Mathematics, 13(1).
- QUOTE: Regularization Networks and Support Vector Machines are techniques for solving certain problems of learning from examples.