Softmin Activation Function
A Softmin Activation Function is a Softmax-based Activation Function that is defined as [math]\displaystyle{ f(x)=\operatorname{softmax}(-x) }[/math].
- Context:
- It can (typically) be used in the activation of Softmin Neurons.
- Example(s):
- torch.nn.Softmin,
- …
- Counter-Example(s):
- a LogSoftmax Activation Function,
- a Rectified-based Activation Function,
- a Heaviside Step Activation Function,
- a Ramp Function-based Activation Function,
- a Logistic Sigmoid-based Activation Function,
- a Hyperbolic Tangent-based Activation Function,
- a Gaussian-based Activation Function,
- a Softsign Activation Function,
- a Softshrink Activation Function,
- an Adaptive Piecewise Linear Activation Function,
- a Bent Identity Activation Function,
- a Maxout Activation Function.
- See: Softmax Regression, Softmax Function, Artificial Neural Network, Artificial Neuron, Neural Network Topology, Neural Network Layer, Neural Network Learning Rate.
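The definition above can be illustrated with a short NumPy sketch (a minimal illustration, not taken from the referenced documentation; the softmax and softmin helper names are chosen for this example only):
import numpy as np

def softmax(x):
    # numerically stable softmax along the last axis
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def softmin(x):
    # softmin(x) = softmax(-x): smaller inputs receive larger weights
    return softmax(-x)

x = np.array([1.0, 2.0, 3.0])
print(softmin(x))        # largest weight goes to the smallest element
print(softmin(x).sum())  # 1.0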
References
2018
- (PyTorch, 2018) ⇒ http://pytorch.org/docs/master/nn.html#softmin
- QUOTE:
class torch.nn.Softmin(dim=None)
Applies the Softmin function to an n-dimensional input Tensor, rescaling them so that the elements of the n-dimensional output Tensor lie in the range (0, 1) and sum to 1
[math]\displaystyle{ f_i(x)=\dfrac{\exp(-x_i)}{\sum_j\exp(-x_j)} }[/math]
Shape:
- Input: any shape
- Output: same as input
- Parameters: dim (int) – A dimension along which Softmin will be computed (so every slice along dim will sum to 1).
- Returns: a Tensor of the same dimension and shape as the input, with values in the range [0, 1].
Examples:
>>> m = nn.Softmin()
>>> input = autograd.Variable(torch.randn(2, 3))
>>> print(input)
>>> print(m(input))
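For comparison, the quoted example can be sketched against a more recent PyTorch API, where autograd.Variable has been merged into torch.Tensor and dim is passed explicitly; the shapes and printed values here are illustrative assumptions, not output from the referenced documentation:
import torch
import torch.nn as nn

m = nn.Softmin(dim=1)              # normalize each row (dimension 1)
x = torch.randn(2, 3)
out = m(x)                         # values in (0, 1); each row sums to 1
print(out)
print(out.sum(dim=1))              # approximately tensor([1., 1.])
print(torch.allclose(out, torch.softmax(-x, dim=1)))  # softmin(x) = softmax(-x)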