S-shaped Rectified Linear Activation Function
An S-shaped Rectified Linear Activation Function is a rectified-based activation function that is based on an S-shaped function.
- AKA: SReLU.
- Context:
- It can (typically) be used in the activation of SReLU Neurons.
- Example(s):
- …
- Counter-Example(s):
- an Exponential Linear Activation Function,
- a Leaky Rectified Linear Activation Function,
- a Noisy Rectified Linear Activation Function,
- a Parametric Rectified Linear Activation Function,
- a Randomized Leaky Rectified Linear Activation Function,
- a Scaled Exponential Linear Activation Function,
- a Softplus Activation Function.
- See: Artificial Neural Network, Artificial Neuron, Neural Network Topology, Neural Network Layer, Neural Network Learning Rate.
References
2017
- (Mate Labs, 2017) ⇒ Mate Labs (2017-08-23). "Secret Sauce behind the beauty of Deep Learning: Beginners guide to Activation Functions."
- QUOTE: S-shaped Rectified Linear Activation Unit (SReLU)
Range: [math]\displaystyle{ (-\infty,+\infty) }[/math]
[math]\displaystyle{ f_{t_l,a_l,t_r,a_r}(x) = \begin{cases} t_l+a_l(x-t_l) & \mbox{for } x \le t_l \\ x & \mbox{for } t_l\lt x \lt t_r\\ t_r+a_r(x-t_r) & \mbox{for } x \ge t_r\end{cases} }[/math]
where [math]\displaystyle{ t_l,\; a_l, \;t_r,\; a_r }[/math] are learnable parameters.
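Read directly off the piecewise definition above, the following is a minimal NumPy sketch of the SReLU forward pass; the specific threshold and slope values used here ([math]\displaystyle{ t_l=-1,\; a_l=0.1,\; t_r=1,\; a_r=0.1 }[/math]) are illustrative defaults, not values taken from either source.

```python
import numpy as np

def srelu(x, t_l=-1.0, a_l=0.1, t_r=1.0, a_r=0.1):
    """Piecewise-linear SReLU: identity between the two thresholds,
    slope a_l below t_l and slope a_r above t_r (illustrative defaults)."""
    return np.where(
        x <= t_l, t_l + a_l * (x - t_l),
        np.where(x >= t_r, t_r + a_r * (x - t_r), x)
    )

# Example: evaluate on a few points
print(srelu(np.array([-3.0, 0.0, 3.0])))  # -> [-1.2  0.   1.2]
```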
2016
- (Jin et al., 2016) ⇒ Jin, X., Xu, C., Feng, J., Wei, Y., Xiong, J., & Yan, S. (2016, February). "Deep Learning with S-Shaped Rectified Linear Activation Units." In: Proceedings of AAAI (pp. 1737-1743).
- ABSTRACT: Rectified linear activation units are important components for state-of-the-art deep convolutional networks. In this paper, we propose a novel S-shaped rectified linear activation unit (SReLU) to learn both convex and non-convex functions, imitating the multiple function forms given by the two fundamental laws, namely the Weber-Fechner law and the Stevens law, in psychophysics and neural sciences. Specifically, SReLU consists of three piecewise linear functions, which are formulated by four learnable parameters. The SReLU is learned jointly with the training of the whole deep network through back propagation. During the training phase, to initialize SReLU in different layers, we propose a “freezing” method to degenerate SReLU into a predefined leaky rectified linear unit in the initial several training epochs and then adaptively learn the good initial values. SReLU can be universally used in the existing deep networks with negligible additional parameters and computation cost. Experiments with two popular CNN architectures, Network in Network and GoogLeNet on scale-various benchmarks including CIFAR10, CIFAR100, MNIST and ImageNet demonstrate that SReLU achieves remarkable improvement compared to other activation functions.
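Since the abstract describes SReLU as four parameters learned jointly with the network via backpropagation, the sketch below shows one way such a learnable, per-channel SReLU layer could look in PyTorch-style Python; the class name, the per-channel parameter shapes, and the leaky-ReLU-like initial values are illustrative assumptions, not the paper's exact "freezing" initialization scheme.

```python
import torch
import torch.nn as nn

class SReLU(nn.Module):
    """Per-channel SReLU with the four thresholds/slopes as learnable
    parameters, initialized so the unit starts out as a leaky-ReLU-like
    function (illustrative initial values, not the paper's exact scheme)."""
    def __init__(self, num_channels, init_t_l=0.0, init_a_l=0.2,
                 init_t_r=1.0, init_a_r=1.0):
        super().__init__()
        self.t_l = nn.Parameter(torch.full((num_channels,), init_t_l))
        self.a_l = nn.Parameter(torch.full((num_channels,), init_a_l))
        self.t_r = nn.Parameter(torch.full((num_channels,), init_t_r))
        self.a_r = nn.Parameter(torch.full((num_channels,), init_a_r))

    def forward(self, x):  # x: (batch, channels, ...)
        shape = (1, -1) + (1,) * (x.dim() - 2)   # broadcast over channels
        t_l = self.t_l.view(shape)
        a_l = self.a_l.view(shape)
        t_r = self.t_r.view(shape)
        a_r = self.a_r.view(shape)
        return torch.where(
            x <= t_l, t_l + a_l * (x - t_l),
            torch.where(x >= t_r, t_r + a_r * (x - t_r), x)
        )

# Example usage on a batch of feature maps
act = SReLU(num_channels=16)
y = act(torch.randn(8, 16, 32, 32))
```

With the initial values shown, the layer behaves like a leaky ReLU (slope 0.2 below zero, identity above) until training moves the parameters; all four tensors receive gradients through the piecewise-linear branches.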