Neural Network Learnable Parameter
A Neural Network Learnable Parameter is a Model Parameter that can be learned by training a neural network.
- AKA: Learnable Parameter, Trainable Parameter.
- Context:
- It can be learned by using a gradient-based method.
- Its count corresponds to the number of neural network weights plus the number of bias terms (see the sketch after this list).
- Example(s):
- $\mathrm{PReLU}(x)=\max(0,x)+\alpha\,\min(0,x)$, where $\alpha$ is a Neural Network Learnable Parameter.
- …
- Counter-Example(s):
- See: Artificial Neural Network, Training Set, Parameter Tuning, Robustness.
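The parameter counting and gradient-based learning described in the Context section can be illustrated with a minimal PyTorch sketch. The layer sizes, data, and learning rate below are hypothetical, chosen only to make the count easy to verify:

```python
import torch
import torch.nn as nn

# A small fully connected network; the layer sizes are arbitrary examples.
model = nn.Sequential(
    nn.Linear(4, 3),  # 4*3 weights + 3 biases = 15 learnable parameters
    nn.PReLU(),       # 1 learnable parameter (alpha)
    nn.Linear(3, 1),  # 3*1 weights + 1 bias  = 4 learnable parameters
)

# Learnable-parameter count = number of weights + number of bias terms
# (plus PReLU's alpha in this example).
total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total)  # 15 + 1 + 4 = 20

# The parameters are learned with a gradient-based method such as SGD.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
loss = nn.functional.mse_loss(model(x), y)
loss.backward()   # compute gradients w.r.t. every learnable parameter
optimizer.step()  # update the parameters
```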
References
2020
- (PyTorch, 2020) ⇒ https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html Retrieved: 2020-11-15.
QUOTE: $ \operatorname{PReLU}(x)=\left\{\begin{array}{ll} x, & \text { if } x \geq 0 \\ ax, & \text { otherwise } \end{array}\right. $
- Here $a$ is a learnable parameter.
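A minimal usage sketch of `torch.nn.PReLU` (assuming a recent PyTorch version), showing that $a$ is exposed as a trainable tensor:

```python
import torch
import torch.nn as nn

prelu = nn.PReLU()   # one learnable parameter a, initialized to 0.25
print(prelu.weight)  # Parameter containing: tensor([0.2500], requires_grad=True)

x = torch.tensor([-2.0, 3.0])
print(prelu(x))      # tensor([-0.5000, 3.0000], grad_fn=...) = [a * -2, 3]
```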
2017
- (See et al., 2017) ⇒ Abigail See, Peter J. Liu, and Christopher D. Manning. (2017). “Get To The Point: Summarization with Pointer-Generator Networks.” In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). DOI:10.18653/v1/P17-1099.
- QUOTE: The attention distribution $a^t$ is calculated as in Bahdanau et al. (2015):
[math]\displaystyle{ e^t_i = \nu^T \mathrm{tanh}\left(W_h h_i + W_s s_t + b_{attn}\right) }[/math] (1)
[math]\displaystyle{ a^t = \mathrm{softmax}\left(e^t\right) }[/math] (2)
- where $\nu$, $W_h$, $W_s$ and $b_{attn}$ are learnable parameters.
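A minimal sketch of equations (1)–(2) in PyTorch. The tensor sizes and variable names are hypothetical illustrations, not taken from the paper's code:

```python
import torch
import torch.nn as nn

enc_dim, dec_dim, attn_dim, T = 6, 5, 4, 7  # hypothetical sizes

# The learnable parameters nu, W_h, W_s, and b_attn from Eq. (1).
W_h = nn.Linear(enc_dim, attn_dim, bias=False)
W_s = nn.Linear(dec_dim, attn_dim, bias=False)
b_attn = nn.Parameter(torch.zeros(attn_dim))
nu = nn.Parameter(torch.randn(attn_dim))

h = torch.randn(T, enc_dim)  # encoder hidden states h_i
s_t = torch.randn(dec_dim)   # decoder state s_t

e_t = torch.tanh(W_h(h) + W_s(s_t) + b_attn) @ nu  # scores e^t_i, shape (T,)
a_t = torch.softmax(e_t, dim=0)                    # attention distribution a^t
```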