Sinusoidal Activation Function
A Sinusoidal Activation Function is a neuron activation function that is based on the sine function, [math]\displaystyle{ f(x)=\sin(x) }[/math].
- Context:
- It can (typically) be used as the activation function of a Sinusoid Neuron in a Fourier Neural Network (a minimal sketch appears below, after this list).
- Example(s):
- the sinusoid activation used in the deep Fourier Neural Networks of Gashler & Ashmore (2014).
- Counter-Example(s):
- a Softmax-based Activation Function,
- a Rectified-based Activation Function,
- a Heaviside Step Activation Function,
- a Ramp Function-based Activation Function,
- a Logistic Sigmoid-based Activation Function,
- a Hyperbolic Tangent-based Activation Function,
- a Gaussian-based Activation Function,
- a Softsign Activation Function,
- a Softshrink Activation Function,
- an Adaptive Piecewise Linear Activation Function,
- a Maxout Activation Function,
- a Long Short-Term Memory Unit-based Activation Function,
- a Bent Identity Activation Function,
- an Inverse Square Root Unit-based Activation Function,
- a Soft Exponential Activation Function.
- See: Artificial Neural Network, Artificial Neuron, Neural Network Topology, Neural Network Layer, Neural Network Learning Rate.
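The following is a minimal NumPy sketch of a hidden layer of sinusoid neurons, in which each unit outputs sin(w · x + b). The function and variable names are our own illustrative choices, not taken from the cited sources.

```python
import numpy as np

def sinusoid_layer(x, W, b):
    """Hidden layer of sinusoid neurons: each unit outputs sin(w . x + b).

    x: input vector, shape (n_in,)
    W: weight matrix, shape (n_units, n_in)
    b: bias (phase) vector, shape (n_units,)
    """
    return np.sin(W @ x + b)

def sinusoid_layer_backward(x, W, b, grad_out):
    """Gradients of a scalar loss w.r.t. W and b, given grad_out = dL/d(output).

    Uses the fact that d/dz sin(z) = cos(z).
    """
    z = W @ x + b
    local = np.cos(z) * grad_out       # chain rule through the sine
    return np.outer(local, x), local   # dL/dW, dL/db

# Example usage: outputs are bounded in [-1, 1] element-wise.
rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)
h = sinusoid_layer(x, W, b)
```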
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions Retrieved:2018-2-18.
- The following table compares the properties of several activation functions that are functions of a single variable x from the previous layer or layers:
Name | Equation | Derivative (with respect to x) | Range | Order of continuity | Monotonic | Derivative Monotonic | Approximates identity near the origin
---|---|---|---|---|---|---|---
(...) | (...) | (...) | (...) | (...) | (...) | (...) | (...)
SoftExponential [1] | [math]\displaystyle{ f(\alpha,x) = \begin{cases} -\frac{\ln(1-\alpha (x + \alpha))}{\alpha} & \text{for } \alpha \lt 0\\ x & \text{for } \alpha = 0\\ \frac{e^{\alpha x} - 1}{\alpha} + \alpha & \text{for } \alpha \gt 0\end{cases} }[/math] | [math]\displaystyle{ f'(\alpha,x) = \begin{cases} \frac{1}{1-\alpha (\alpha + x)} & \text{for } \alpha \lt 0\\ e^{\alpha x} & \text{for } \alpha \ge 0\end{cases} }[/math] | [math]\displaystyle{ (-\infty,\infty) }[/math] | [math]\displaystyle{ C^\infty }[/math] | Yes | Yes | Depends on [math]\displaystyle{ \alpha }[/math]
Sinusoid [2] | [math]\displaystyle{ f(x)=\sin(x) }[/math] | [math]\displaystyle{ f'(x)=\cos(x) }[/math] | [math]\displaystyle{ [-1,1] }[/math] | [math]\displaystyle{ C^\infty }[/math] | No | No | Yes
Sinc | [math]\displaystyle{ f(x)=\begin{cases} 1 & \text{for } x = 0\\ \frac{\sin(x)}{x} & \text{for } x \ne 0\end{cases} }[/math] | [math]\displaystyle{ f'(x)=\begin{cases} 0 & \text{for } x = 0\\ \frac{\cos(x)}{x} - \frac{\sin(x)}{x^2} & \text{for } x \ne 0\end{cases} }[/math] | [math]\displaystyle{ [\approx -0.217234, 1] }[/math] | [math]\displaystyle{ C^\infty }[/math] | No | No | No
Gaussian | [math]\displaystyle{ f(x)=e^{-x^2} }[/math] | [math]\displaystyle{ f'(x)=-2xe^{-x^2} }[/math] | [math]\displaystyle{ (0,1] }[/math] | [math]\displaystyle{ C^\infty }[/math] | No | No | No
Here, H is the Heaviside step function.
α is a stochastic variable sampled from a uniform distribution at training time and fixed to the expectation value of the distribution at test time.
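As a concrete companion to the excerpted table, the following is a minimal NumPy sketch of the sinusoid, sinc, and Gaussian activations and their derivatives, with a finite-difference check that the sinusoid's derivative is cos(x). The naming is our own; note that NumPy's built-in np.sinc is the normalized sinc, so the table's unnormalized version is implemented directly.

```python
import numpy as np

def sinusoid(x):
    """f(x) = sin(x); range [-1, 1]."""
    return np.sin(x)

def sinusoid_prime(x):
    """f'(x) = cos(x)."""
    return np.cos(x)

def sinc(x):
    """f(0) = 1, f(x) = sin(x)/x otherwise (unnormalized sinc)."""
    safe = np.where(x == 0.0, 1.0, x)
    return np.where(x == 0.0, 1.0, np.sin(safe) / safe)

def gaussian(x):
    """f(x) = exp(-x^2); range (0, 1]."""
    return np.exp(-x ** 2)

def gaussian_prime(x):
    """f'(x) = -2x exp(-x^2)."""
    return -2.0 * x * np.exp(-x ** 2)

# Finite-difference check that the sinusoid derivative in the table is cos(x).
x = np.linspace(-3.0, 3.0, 13)
h = 1e-6
numeric = (sinusoid(x + h) - sinusoid(x - h)) / (2.0 * h)
assert np.allclose(numeric, sinusoid_prime(x), atol=1e-6)
```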
2014
- (Gashler & Ashmore, 2014) ⇒ Gashler, M. S., & Ashmore, S. C. (2014, August). Training deep fourier neural networks to fit time-series data. In: Proceedings of The International Conference on Intelligent Computing (pp. 48-55). Springer, Cham. arXiv: 1405.2262. DOI: 10.1007/978-3-319-09330-7_7
- ABSTRACT: We present a method for training a deep neural network containing sinusoidal activation functions to fit to time-series data. Weights are initialized using a fast Fourier transform, then trained with regularization to improve generalization. A simple dynamic parameter tuning method is employed to adjust both the learning rate and regularization term, such that stability and efficient training are both achieved. We show how deeper layers can be utilized to model the observed sequence using a sparser set of sinusoid units, and how non-uniform regularization can improve generalization by promoting the shifting of weight toward simpler units. The method is demonstrated with time-series problems to show that it leads to effective extrapolation of nonlinear trends.
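The abstract above mentions initializing weights with a fast Fourier transform before training. The following is a speculative NumPy sketch of one way such an initialization could look for a single sinusoid hidden layer fit to a 1-D time series: the strongest DFT components supply frequencies (input weights), phases (biases), and amplitudes (output weights). It is our illustration of the general idea, not the authors' exact procedure, and all names are hypothetical.

```python
import numpy as np

def fft_init_sinusoid_layer(series, n_units):
    """Speculative FFT-based initialization of a sum-of-sin(w*t + b) model.

    Picks the n_units strongest DFT components (excluding the DC term):
    frequencies become input weights, phases become biases, and
    amplitudes become output-layer weights. Illustrative only.
    """
    n = len(series)
    spectrum = np.fft.rfft(series)
    freqs = 2.0 * np.pi * np.fft.rfftfreq(n)      # radians per time step
    idx = np.argsort(np.abs(spectrum[1:]))[::-1][:n_units] + 1
    w_in = freqs[idx]                             # input weights (frequencies)
    b = np.angle(spectrum[idx]) + np.pi / 2.0     # phases; +pi/2 turns cos into sin
    w_out = 2.0 * np.abs(spectrum[idx]) / n       # output weights (amplitudes)
    bias_out = np.real(spectrum[0]) / n           # mean of the series
    return w_in, b, w_out, bias_out

def predict(t, w_in, b, w_out, bias_out):
    """Evaluate the sinusoid layer at (possibly fractional) time steps t."""
    return np.sin(np.outer(t, w_in) + b) @ w_out + bias_out

# Example usage: the initialization alone gives a rough fit that gradient
# training with regularization (as in the paper) would then refine.
t = np.arange(200)
series = np.sin(0.3 * t) + 0.5 * np.sin(1.1 * t + 0.4)
params = fft_init_sinusoid_layer(series, n_units=8)
fit = predict(t, *params)
```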
- ↑ Godfrey, Luke B.; Gashler, Michael S. (2016-02-03). "A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks". In: Proceedings of the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management (KDIR), pp. 481–486. arXiv:1602.01321. Bibcode: 2016arXiv160201321G.
- ↑ Gashler, Michael S.; Ashmore, Stephen C. (2014-05-09). "Training Deep Fourier Neural Networks To Fit Time-Series Data". arXiv:1405.2262 [cs.NE].