Rectifier-based Neural Network
A Rectifier-based Neural Network is an artificial neural network composed of rectified linear artificial neurons, i.e. neurons that apply a rectifier activation function.
- AKA: ReLU Network.
- Context:
- It can model Animal Neural Networks.
- It can range from being a Noisy ReLU Network (of noisy ReLUs) to being a ... (a noisy unit is sketched in code after the Nair & Hinton (2010) quote below).
- It can range from being a Leaky ReLU Network (of leaky ReLUs) to being a ... (plain and leaky units are sketched in code after this outline).
- Example(s):
- …
- Counter-Example(s):
- See: Rectifier Linear Unit.
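The plain and leaky rectifier units mentioned in the Context above can be written down directly. The following is a minimal sketch in NumPy, not taken from the referenced sources; the negative slope of 0.01 for the leaky unit is an illustrative assumption rather than a prescribed value.

```python
import numpy as np

def relu(x):
    """Plain rectifier: f(x) = max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def leaky_relu(x, negative_slope=0.01):
    """Leaky rectifier: passes a small fraction of negative inputs
    instead of zeroing them (0.01 is an illustrative choice)."""
    return np.where(x > 0, x, negative_slope * x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))        # [0.  0.  0.  0.5 2. ]
print(leaky_relu(x))  # [-0.02  -0.005  0.  0.5  2. ]
```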
References
2017
- (Wikipedia, 2017) ⇒ https://en.wikipedia.org/wiki/Rectifier_(neural_networks) Retrieved:2017-5-22.
- In the context of artificial neural networks, the rectifier is an activation function defined as: [math]\displaystyle{ f(x) = \max(0, x) }[/math] ,
where x is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering.
This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature with strong biological motivations and mathematical justifications. It has been used in convolutional networks more effectively than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). A smooth approximation to the rectifier is the analytic function : [math]\displaystyle{ f(x) = \ln(1 + e^x), }[/math] which is called the softplus function. [1] The derivative of softplus is [math]\displaystyle{ f'(x) = e^x / (e^x + 1) = 1 / (1 + e^{-x}) }[/math] , i.e. the logistic function. Rectified linear units find applications in computer vision and speech recognition [2] using deep neural nets.
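The relations quoted above can be verified numerically. The sketch below is added for illustration and is not part of the quoted article: it checks that a finite-difference derivative of the softplus function matches the logistic function, and that softplus stays within ln(2) of the rectifier.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softplus(x):
    return np.log1p(np.exp(x))       # ln(1 + e^x)

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))  # the claimed derivative of softplus

x = np.linspace(-5.0, 5.0, 101)
eps = 1e-6
numeric_grad = (softplus(x + eps) - softplus(x - eps)) / (2 * eps)

# Finite-difference derivative of softplus agrees with the logistic function.
print(np.max(np.abs(numeric_grad - logistic(x))))  # ~1e-9
# Softplus smoothly approximates max(0, x); the largest gap is ln(2) at x = 0.
print(np.max(np.abs(softplus(x) - relu(x))))       # ~0.693
```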
2016
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Rectifier_(neural_networks) Retrieved:2016-8-1.
- In the context of artificial neural networks, the rectifier is an activation function defined as : [math]\displaystyle{ f(x) = \max(0, x), }[/math] where x is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering. This activation function has been argued to be more biologically plausible
than the widely used logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical counterpart, the hyperbolic tangent. The rectifier is the most popular activation function for deep neural networks. A unit employing the rectifier is also called a rectified linear unit (ReLU). A smooth approximation to the rectifier is the analytic function : [math]\displaystyle{ f(x) = \ln(1 + e^x), }[/math] which is called the softplus function. [3] The derivative of softplus is [math]\displaystyle{ f'(x) = e^x / (e^x + 1) = 1 / (1 + e^{-x}) }[/math] , i.e. the logistic function. Rectified linear units find applications in computer vision
and speech recognition [2] using deep neural nets.
2010
- (Nair & Hinton, 2010) ⇒ Vinod Nair, and Geoffrey E. Hinton. (2010). “Rectified Linear Units Improve Restricted Boltzmann Machines.” In: Proceedings of the 27th International Conference on Machine Learning (ICML-10).
- QUOTE: Restricted Boltzmann machines were developed using binary stochastic hidden units. These can be generalized by replacing each binary unit by an infinite number of copies that all have the same weights but have progressively more negative biases. The learning and inference rules for these “Stepped Sigmoid Units" are unchanged. They can be approximated efficiently by noisy, rectified linear units. Compared with binary units, these units learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset. Unlike binary units, rectified linear units preserve information about relative intensities as information travels through multiple layers of feature detectors.
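The noisy rectified linear unit described in this quote can be sketched as follows. The Gaussian noise with variance sigmoid(x) follows the NReLU formulation in the paper, but the code itself is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noisy_relu(x, rng=None):
    """Noisy rectified linear unit: max(0, x + N(0, sigmoid(x))), elementwise.
    The noise variance sigmoid(x) follows the NReLU described by Nair & Hinton."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.normal(0.0, np.sqrt(sigmoid(x)))  # scale argument is the standard deviation
    return np.maximum(0.0, x + noise)

x = np.array([-3.0, 0.0, 3.0])
print(noisy_relu(x))  # strongly negative inputs are usually zeroed; positive inputs pass with jitter
```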
- ↑ C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia (2001). “Incorporating Second-Order Functional Knowledge for Better Option Pricing.” In: NIPS 2000.
- ↑ Andrew L. Maas, Awni Y. Hannun, and Andrew Y. Ng (2013). “Rectifier Nonlinearities Improve Neural Network Acoustic Models.”
- ↑ C. Dugas, Y. Bengio, F. Bélisle, C. Nadeau, and R. Garcia (2001). “Incorporating Second-Order Functional Knowledge for Better Option Pricing.” In: NIPS 2000.