LogSoftmax Activation Function
A LogSoftmax Activation Function is a Softmax-based Activation Function that is the logarithm of a Softmax Function, i.e.:
$LogSoftmax\left(x_i\right)=\log\left(\dfrac{\exp\left(x_i\right)}{\sum_j\exp\left(x_j\right)}\right)$
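As a worked illustration (a minimal NumPy-based sketch, not tied to any particular library's implementation), LogSoftmax can be computed directly from this definition, usually with the log-sum-exp trick for numerical stability:

import numpy as np

def log_softmax(x):
    # log(exp(x_i) / sum_j exp(x_j)) = x_i - log(sum_j exp(x_j));
    # subtracting max(x) first avoids overflow in exp() without changing the result.
    shifted = x - np.max(x)
    return shifted - np.log(np.sum(np.exp(shifted)))

x = np.array([1.0, 2.0, 3.0])
print(log_softmax(x))          # values in the range [-inf, 0)
print(np.exp(log_softmax(x)))  # recovers softmax(x), which sums to 1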
- Context:
- It can (typically) be used in the activation of LogSoftmax Neurons.
- Example(s):
- torch.nn.LogSoftmax, PyTorch's implementation,
- …
- Counter-Example(s):
- a Rectified-based Activation Function,
- a Heaviside Step Activation Function,
- a Ramp Function-based Activation Function,
- a Logistic Sigmoid-based Activation Function,
- a Hyperbolic Tangent-based Activation Function,
- a Gaussian-based Activation Function,
- a Softmin Activation Function,
- a Softsign Activation Function,
- a Softshrink Activation Function,
- an Adaptive Piecewise Linear Activation Function,
- a Bent Identity Activation Function,
- a Maxout Activation Function.
- See: Softmax Regression, Softmax Function, Artificial Neural Network, Artificial Neuron, Neural Network Topology, Neural Network Layer, Neural Network Learning Rate.
References
2018
- (PyTorch, 2018) ⇒ http://pytorch.org/docs/master/nn.html#logsoftmax
- QUOTE:
class torch.nn.LogSoftmax(dim=None)
Applies the Log(Softmax(x)) function to an n-dimensional input Tensor. The LogSoftmax formulation can be simplified as
[math]\displaystyle{ f_i(x)=\log\left(\dfrac{\exp(x_i)}{\sum_j\exp(x_j)}\right) }[/math]
Shape:
- Input: any shape
- Output: same as input
- Parameters: dim (int) – A dimension along which Softmax will be computed (so every slice along dim will sum to 1).
- Returns: a Tensor of the same dimension and shape as the input with values in the range [-inf, 0).
Examples:
>>> m = nn.LogSoftmax()
>>> input = autograd.Variable(torch.randn(2, 3))
>>> print(input)
>>> print(m(input))
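The quoted 2018 example predates the deprecation of autograd.Variable and of constructing LogSoftmax without an explicit dim; a minimal equivalent sketch against a current PyTorch API (assuming PyTorch 1.0 or later) might look like:

import torch
import torch.nn as nn

m = nn.LogSoftmax(dim=1)      # compute LogSoftmax along dimension 1
x = torch.randn(2, 3)
out = m(x)                    # same shape as x, values in [-inf, 0)
print(out)
print(out.exp().sum(dim=1))   # each row of exp(out) sums to 1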