Parametric Rectified Linear Neuron
A Parametric Rectified Linear Neuron is a Rectified Linear Neuron in which the leakage coefficient is itself a learned neural network model parameter.
- AKA: PReLU.
- Example(s):
- [math]\displaystyle{ f(\alpha, x) = \begin{cases} \alpha x, & \mbox{for } x \lt 0 \\ x, & \mbox{for } x \geq 0 \end{cases} }[/math]
where $\alpha$ is the coefficient of leakage (see the code sketch below).
- Counter-Example(s):
- a Leaky Rectified Linear Neuron,
- a Randomized Leaky Rectified Linear Neuron,
- an S-shaped Rectified Linear Neuron,
- an Adaptive Linear Neuron,
- a Scaled Exponential Linear Neuron,
- an Exponential Linear Neuron,
- a Bent Identity Neuron,
- a Hyperbolic Tangent Neuron,
- a Sigmoid Neuron,
- a Heaviside Step Neuron,
- a Stochastic Binary Neuron,
- a SoftPlus Neuron,
- a Softmax Neuron.
- See: Artificial Neural Network, Perceptron, Linear Neuron.
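A minimal sketch of the PReLU forward pass, assuming a single shared, learnable coefficient $\alpha$ (the function name and example values below are illustrative, not taken from the sources cited here):

```python
import numpy as np

def prelu(x, alpha):
    """Parametric ReLU: alpha * x for x < 0, x for x >= 0."""
    return np.where(x >= 0, x, alpha * x)

# alpha is a learnable parameter rather than a fixed constant;
# it is typically initialized to a small positive value
# (0.25 in He et al., 2015) and then updated by backpropagation.
alpha = 0.25
x = np.array([-2.0, -0.5, 0.0, 1.5])
print(prelu(x, alpha))   # -> [-0.5   -0.125  0.     1.5  ]
```

Unlike a Leaky Rectified Linear Neuron, where $\alpha$ is a fixed hyperparameter, here $\alpha$ is updated by gradient descent together with the network's weights.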
References
2017
- (Mate Labs, 2017) ⇒ Mate Labs (Aug 23, 2017). "Secret Sauce behind the beauty of Deep Learning: Beginners guide to Activation Functions."
- QUOTE: Parametric Rectified Linear Unit(PReLU) — It makes the coefficient of leakage into a parameter that is learned along with the other neural network parameters. Alpha(α) is the coefficient of leakage here.
For [math]\displaystyle{ \alpha \leq 1, \quad f(x) = \max(x, \alpha x) }[/math]
Range: [math]\displaystyle{ (-\infty, +\infty) }[/math]
[math]\displaystyle{ f(\alpha, x) = \begin{cases} \alpha x, & \mbox{for } x \lt 0 \\ x, & \mbox{for } x \geq 0 \end{cases} }[/math]
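Because $\alpha$ only affects negative inputs, its gradient is straightforward: [math]\displaystyle{ \partial f / \partial \alpha = x }[/math] for [math]\displaystyle{ x \lt 0 }[/math] and [math]\displaystyle{ 0 }[/math] otherwise. A minimal sketch of the backward step for a single shared $\alpha$ (illustrative NumPy code, not from the quoted source):

```python
import numpy as np

def prelu_backward(x, alpha, grad_out):
    """Gradients of f(alpha, x) = alpha*x (x < 0), x (x >= 0).

    df/dx     = alpha for x < 0, 1 for x >= 0
    df/dalpha = x     for x < 0, 0 for x >= 0
    """
    grad_x = grad_out * np.where(x >= 0, 1.0, alpha)
    grad_alpha = np.sum(grad_out * np.where(x >= 0, 0.0, x))
    return grad_x, grad_alpha

# alpha is then updated like any other parameter,
# e.g. with SGD: alpha -= learning_rate * grad_alpha
```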