Constrained Error Back Propagation Network (CEBPN)
A Constrained Error Back Propagation Network (CEBPN) is a Multi-Layer Perceptron that consists of an input layer, an output layer, and a hidden layer of local basis function nodes (see the illustrative sketch below).
- Context:
- It is constructed from Locally Responsive Units (LRU) activation functions.
- It is based on the Constrained Error Backpropagation Algorithm.
- It can usually be trained for performing a Rule Learning Task.
- Example(s):
- Counter-Example(s):
- See: Artificial Neural Network, Rule Extraction Algorithm, Backpropagation, RULEX Algorithm, Error Backpropagation Learning Task.
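The following is a minimal Python sketch of the architecture described above: each hidden node is a locally responsive unit built from one axis-parallel ridge per input dimension, a ridge being the difference of two offset sigmoids, and the network output is a linear combination of the hidden-node activations. The parameter names (centre, breadth, steepness, bias) and the concrete values in the usage example are illustrative assumptions, not taken from the cited papers.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ridge(x, centre, breadth, steepness):
    """Axis-parallel ridge: difference of two offset sigmoids.

    Produces appreciable output only when x lies within roughly
    [centre - breadth, centre + breadth]; steepness controls how
    sharp the edges of the ridge are.
    """
    return (sigmoid(steepness * (x - (centre - breadth)))
            - sigmoid(steepness * (x - (centre + breadth))))

def lru(x, centres, breadths, steepness, bias):
    """Locally responsive unit: thresholded sum of one ridge per input
    dimension. Responds appreciably only when every component of x
    falls inside its ridge's active range."""
    ridge_sum = sum(ridge(x[i], centres[i], breadths[i], steepness)
                    for i in range(len(x)))
    return sigmoid(steepness * (ridge_sum - bias))

def cebpn_output(x, units, output_weights):
    """Network output: linear combination of LRU activations."""
    return sum(w * lru(x, *params) for w, params in zip(output_weights, units))

# Hypothetical 2-D example: one LRU centred at (0.5, 0.5) with breadth 0.2,
# steepness 10 and threshold 1.0 (all values are made up for illustration).
unit = ([0.5, 0.5], [0.2, 0.2], 10.0, 1.0)
print(cebpn_output([0.5, 0.5], [unit], [1.0]))  # inside the local region: ~0.99
print(cebpn_output([0.0, 0.9], [unit], [1.0]))  # outside the region: ~0.00
```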
References
2005
- (Nayak, 2005) ⇒ Richi Nayak (2005, July). "Generating Predicate Rules from Neural Networks". In: Proceedings of International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2005). DOI:10.1007/11508069_31.
- QUOTE: Several neural network learning techniques such as cascade correlation (CC), BpTower (BT) and constrained error back propagation (CEBP) are utilised to build networks. This is to show the applicability of predicate (or restricted first-order) rule extraction to a variety of ANN architectures.
1999
- (Nayak, 1999) ⇒ Richi Nayak (1999). "GYAN: A Methodology For Rule Extraction From Artificial Neural Networks". Doctoral dissertation, Queensland University of Technology.
- QUOTE: The CEBPN consists of an input layer, an output layer and a hidden layer of local basis function nodes (Figure 4.5). The hidden nodes are sigmoidals forming locally responsive units, rather than Gaussian units, that have the effect of partitioning the training data into a set of disjoint regions, each region being represented by a single hidden layer node. A linear combination of such hidden layer nodes is able to approximate the target concepts of a problem domain. A set of ridges forms a local unit (hidden layer node), one ridge for each dimension of the input space. A ridge produces appreciable output (the thresholded sum of the activations of the sigmoids) only if the value presented as input lies within the active range of the ridge. For each ridge (in the ith dimension), the axis parallel sigmoids are parameterized according to the centre, breadth and edge steepness of each ridge.
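Written in symbols, the quoted description corresponds to something like the following. The notation (centre c_i, breadth b_i, edge steepness k, threshold θ) is chosen here for illustration and does not appear in the source; it matches the quantities used in the sketch above.
```latex
% Ridge in the i-th input dimension, as the difference of two offset sigmoids
% (c_i = centre, b_i = breadth, k = edge steepness; notation is illustrative):
r_i(x_i) = \sigma\big(k\,(x_i - (c_i - b_i))\big) - \sigma\big(k\,(x_i - (c_i + b_i))\big),
\qquad \sigma(t) = \frac{1}{1 + e^{-t}}

% Locally responsive unit: thresholded sum of its ridges; network output:
% linear combination of LRU activations.
h(\mathbf{x}) = \sigma\Big(k\Big(\sum_{i=1}^{n} r_i(x_i) - \theta\Big)\Big),
\qquad y(\mathbf{x}) = \sum_{j} w_j\, h_j(\mathbf{x})
```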
1998
- (Corbett-Clark, 1998) ⇒ Timothy A. Corbett-Clark (1998). "Explanation from Neural Networks". Doctoral dissertation, University of Oxford.
- QUOTE: Andrews and Geva [AG96b, AG96a] describe a method of rule extraction (RULEX) which works on Constrained Error Back Propagation (CEBP) networks. These networks are constructed from activation functions called Locally Responsive Units (LRU), which are themselves constructed from the difference between two offset sigmoid functions. Each LRU forms a “ridge” in input space, and the weights of the network are constrained such that these ridges are axis-aligned. CEBP networks are designed for incremental constructive training, which means that new local response units are added when no further significant decrease in error is possible. Andrews and Geva explain that this allows semantics to be attached to the hidden nodes and knowledge to be inserted into the network. After training, the RULEX algorithm is used to determine the active range of each ridge and map this range into the constraints of a rule describing an axis-aligned hypercube in input space. Heuristics are used to remove redundant rule conditions and redundant rules. Results are given from applying the RULEX algorithm to three datasets: the Fisher IRIS data, Mushroom, and a Heart disease dataset. Small numbers of rules with a high accuracy are reported.
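As a rough illustration of the rule-mapping step described above, the sketch below converts one LRU's ridge parameters into an axis-aligned hypercube rule. It is a deliberate simplification, not the actual RULEX algorithm, which determines the active ranges from the trained sigmoid parameters and prunes redundant conditions and rules; the class label and values are hypothetical.
```python
def extract_rule(centres, breadths, class_label):
    """Simplified RULEX-style mapping (illustrative only): turn one LRU's
    ridge parameters into an axis-aligned hypercube rule, one interval
    condition per input dimension."""
    conditions = [
        f"{c - b:.2f} <= x[{i}] <= {c + b:.2f}"
        for i, (c, b) in enumerate(zip(centres, breadths))
    ]
    return "IF " + " AND ".join(conditions) + f" THEN class = {class_label}"

# Hypothetical unit covering a local region of a 2-D input space.
print(extract_rule([0.5, 0.5], [0.2, 0.2], "setosa"))
# IF 0.30 <= x[0] <= 0.70 AND 0.30 <= x[1] <= 0.70 THEN class = setosa
```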
1996a
- (Andrews & Geva, 1996) ⇒ Robert Andrews, and Shlomo Geva (1996). "Rules and Local Function Networks". In: Proceedings of the Workshop on Rule Extraction From Trained Artificial Neural Networks (AISB96).
1996b
- (Andrews & Geva, 1996) ⇒ Robert Andrews, and Shlomo Geva (1996). “Rulex And Cebp Networks As The Basis For A Rule Refinement System". Technical report, Neurocomputing Research Centre, Faculty of Information Technology, Queensland University of Technology.
1996c
- (Duch et al., 1996) ⇒ Wlodzislaw Duch, Rafal Adamczak, and Krzysztof Grabczewski (1996). "Constrained Backpropagation For Feature Selection And Extraction Of Logical Rules".
- QUOTE: We have presented here a simple method of rule extraction based on the standard backpropagation technique with modified error function. Crisp logical rules are found automatically by analyzing nodes of trained networks. The method automatically forms the local response units, similarly to the Constrained Error Backpropagation (CEBP) and RULEX algorithm of Andrews and Geva (1994), who use the local response units and constrain the output weights. In contrast to their approach, our network has separate subnetworks, the L-units, devoted to linguistic variable extraction, and by enforcing zero and ±1 weights takes the simplest possible structure.
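A hedged sketch of the kind of modified error function the quote refers to: the standard squared error plus a penalty that drives weights toward 0 or ±1. The exact form of the penalty and the coefficient names (lam1, lam2) are assumptions made here for illustration, not the formula given in the cited paper.
```python
import numpy as np

def constrained_error(y_pred, y_true, weights, lam1=1e-4, lam2=1e-4):
    """Squared error plus a regularization term that pushes each weight
    toward 0 or +/-1, in the spirit of the constrained backpropagation
    described by Duch et al.; the exact penalty form here is an assumption."""
    mse = 0.5 * np.sum((np.asarray(y_pred) - np.asarray(y_true)) ** 2)
    w = np.asarray(weights)
    penalty = (lam1 * np.sum(w ** 2)
               + lam2 * np.sum(w ** 2 * (w - 1) ** 2 * (w + 1) ** 2))
    return mse + penalty
```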
1995
- (Andrews & Geva, 1995) ⇒ Robert Andrews, and Shlomo Geva (1995). "RULEX & CEBP Networks As the Basis for a Rule Refinement System". Hybrid problems, hybrid solutions, 27, 1.
- QUOTE: This network has been extensively described in [7, 8, 1], so only a brief description is presented here. The network is essentially a multi layer perceptron with the hidden layer being comprised of a set of sigmoid based locally responsive units (LRUs), that perform function approximation and classification in a manner similar to radial basis function (RBF) networks. That is, a LRU will produce appreciable output only if a data point falls entirely within the boundaries of the responsive area of the LRU.
1994a
- (Geva & Sitte) ⇒ Shlomo Geva, and Joaquin Sitte (1994). “Constrained Gradient Descent". In: Proceedings of Fifth Australian Conference on Neural Computing.
1994b
- (Andrews & Geva, 1994) ⇒ Robert Andrews, and Shlomo Geva (1994). “Rule Extraction From a Constrained Error Backpropagation Network". In: Proceedings of the 5th ACNN.
1994c
- (Andrews & Geva, 1994) ⇒ Robert Andrews, and Shlomo Geva (1994). “Rule Extraction From A Constraint Back Propagation MLP".
1992
- (Geva & Sitte, 1992) ⇒ Shlomo Geva, and Joaquin Sitte (1992). “A Constructive Method for Multivariate Function Approximation by Multilayer Perceptrons". In: IEEE Transactions on Neural Networks, 3(4), 621-624. DOI: 10.1109/72.143376.
- QUOTE: Mathematical theorems establish the existence of feedforward multilayered neural networks, based on neurons with sigmoidal transfer functions, that approximate arbitrarily well any continuous multivariate function. However, these theorems do not provide any hint on how to find the network parameters in practice. It is shown how to construct a perceptron with two hidden layers for multivariate function approximation. Such a network can perform function approximation in the same manner as networks based on Gaussian potential functions, by linear combination of local functions.