Momentum Gradient Descent (MGD)
A Momentum Gradient Descent (MGD) is a Gradient Descent-based Learning Algorithm that is based on the momentum method, in which each update is a linear combination of the current gradient and the previous update.
- AKA: MGD, Momentum Gradient Descent Algorithm, Momentum Optimizer.
- Example(s):
- chainer.optimizers.MomentumSGD() - Chainer's implementation (see the usage sketch after this list),
- tflearn.optimizers.Momentum() - TFLearn's implementation,
- Momentum Gradient Descent (MGD) - CRAN gradDescent Repository,
- Deeplearning4j's Updater example,
- …
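As a concrete illustration of the first entry above, the following is a minimal usage sketch of Chainer's MomentumSGD (assuming Chainer is installed; the toy model and the lr/momentum values are illustrative assumptions, not recommendations):

```python
# Minimal sketch: attach Chainer's MomentumSGD optimizer to a toy model.
# The model and the hyperparameter values are illustrative assumptions.
import chainer.links as L
from chainer import optimizers

model = L.Linear(3, 1)                                     # toy one-layer linear model
optimizer = optimizers.MomentumSGD(lr=0.01, momentum=0.9)  # momentum coefficient (alpha) = 0.9
optimizer.setup(model)                                     # register the model's parameters with the optimizer
```

After a backward pass, optimizer.update() applies the momentum-augmented SGD step to the model's parameters.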
- Counter-Example(s):
- an Accelerated Gradient Descent (AGD),
- an Adaptive Gradient Algorithm (AdaGrad),
- an Adaptive Learning Rate Algorithm (AdaDelta),
- an Adaptive Moment Estimation Algorithm (Adam),
- a Mini-Batch Gradient Descent Algorithm (MBGD),
- a Root Mean Square Propagation Algorithm (RMSprop),
- an SGD-based Algorithm, such as a Stochastic Average Gradient (SAG) or a Stochastic Variance Reduce Gradient (SVRG).
- See: Stochastic Optimization, Convex Optimization, Learning Rate, Gradient Descent, Outer Product, Hadamard Matrix Product, Euclidean Norm, Proximal Function, Rprop, Force, David Rumelhart, Geoffrey Hinton, Ronald J. Williams, Linear Combination, Parametric Statistics, Estimator, Momentum, Stochastic Gradient Descent.
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Stochastic_gradient_descent#Momentum Retrieved: 2018-4-29.
- QUOTE: Further proposals include the momentum method, which appeared in Rumelhart, Hinton and Williams' seminal paper on backpropagation learning.[1] Stochastic gradient descent with momentum remembers the update Δw at each iteration, and determines the next update as a linear combination of the gradient and the previous update:[2] [3]
[math]\displaystyle{ \Delta w := \alpha \Delta w - \eta \nabla Q_i(w) }[/math]
[math]\displaystyle{ w := w + \Delta w }[/math]
that leads to:
[math]\displaystyle{ w := w - \eta \nabla Q_i(w) + \alpha \Delta w }[/math] where the parameter [math]\displaystyle{ w }[/math] which minimizes [math]\displaystyle{ Q(w) }[/math] is to be estimated, and [math]\displaystyle{ \eta }[/math] is a step size (sometimes called the learning rate in machine learning).
The name momentum stems from an analogy to momentum in physics: the weight vector [math]\displaystyle{ w }[/math], thought of as a particle traveling through parameter space, incurs acceleration from the gradient of the loss ("force"). Unlike in classical stochastic gradient descent, it tends to keep traveling in the same direction, preventing oscillations. Momentum has been used successfully for several decades.[4]
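The two assignments above translate almost directly into code. The following NumPy sketch is illustrative only: the objective [math]\displaystyle{ Q(w) = \lVert w \rVert^2 / 2 }[/math] and the values of [math]\displaystyle{ \eta }[/math] and [math]\displaystyle{ \alpha }[/math] are assumptions, not taken from the quoted text.

```python
import numpy as np

def momentum_gd(grad_Q, w0, eta=0.1, alpha=0.9, n_iters=100):
    """Apply dw := alpha*dw - eta*grad_Q(w); w := w + dw for n_iters iterations."""
    w = np.asarray(w0, dtype=float)
    dw = np.zeros_like(w)                  # the remembered previous update (delta w)
    for _ in range(n_iters):
        dw = alpha * dw - eta * grad_Q(w)  # linear combination of gradient and previous update
        w = w + dw
    return w

# Illustrative run on Q(w) = ||w||^2 / 2, whose gradient is simply w; the
# iterate is driven toward the minimizer at the origin.
print(momentum_gd(lambda w: w, [4.0, -3.0]))
```

Setting alpha to zero recovers plain (stochastic) gradient descent, which makes the contribution of the momentum term easy to isolate.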
2018b
- (Wijaya et al., 2018) ⇒ Galih Praja Wijaya, Dendi Handian, Imam Fachmi Nasrulloh, Lala Septem Riza, Rani Megasari, Enjun Junaeti (2018). "gradDescent: Gradient Descent for Regression Tasks", Reference manual (PDF).
- QUOTE: An implementation of various learning algorithms based on Gradient Descent for dealing with regression tasks. The variants of gradient descent algorithm are:
- Mini-Batch Gradient Descent (MBGD), which is an optimization to use training data partially to reduce the computation load.
- Stochastic Gradient Descent (SGD), which is an optimization to use a random data in learning to reduce the computation load drastically.
- Stochastic Average Gradient (SAG), which is a SGD-based algorithm to minimize stochastic step to average.
- Momentum Gradient Descent (MGD), which is an optimization to speed-up gradient descent learning.
- Accelerated Gradient Descent (AGD), which is an optimization to accelerate gradient descent learning.
- Adagrad, which is a gradient-descent-based algorithm that accumulate previous cost to do adaptive learning.
- Adadelta, which is a gradient-descent-based algorithm that use hessian approximation to do adaptive learning.
- RMSprop, which is a gradient-descent-based algorithm that combine Adagrad and Adadelta adaptive learning ability.
- Adam, which is a gradient-descent-based algorithm that mean and variance moment to do adaptive learning.
- Stochastic Variance Reduce Gradient (SVRG), which is an optimization SGD-based algorithm to accelerates the process toward converging by reducing the gradient.
- Semi Stochastic Gradient Descent (SSGD), which is a SGD-based algorithm that combine GD and SGD to accelerates the process toward converging by choosing one of the gradients at a time.
- Stochastic Recursive Gradient Algorithm (SARAH), which is an optimization algorithm similarly SVRG to accelerates the process toward converging by accumulated stochastic information.
- Stochastic Recursive Gradient Algorithm+ (SARAHPlus), which is a SARAH practical variant algorithm to accelerates the process toward converging provides a possibility of earlier termination.
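The gradDescent package itself is written in R; as a language-neutral illustration of the "speed-up" claim for MGD, the following Python sketch (synthetic regression data, with illustrative values for the learning rate, the momentum coefficient, and the iteration budget) compares plain gradient descent against the momentum variant on the same least-squares objective.

```python
import numpy as np

# Synthetic regression task (illustrative data, not from the gradDescent manual).
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=(100, 2))]   # design matrix with an intercept column
w_true = np.array([1.0, 2.0, -3.0])
y = X @ w_true + 0.1 * rng.normal(size=100)

def grad(w):
    # Gradient of the least-squares objective (1/2n)*||Xw - y||^2
    return X.T @ (X @ w - y) / len(y)

def fit(alpha, eta=0.01, n_iters=100):
    """alpha = 0 recovers plain gradient descent; alpha > 0 gives MGD."""
    w, dw = np.zeros(3), np.zeros(3)
    for _ in range(n_iters):
        dw = alpha * dw - eta * grad(w)
        w = w + dw
    return w

print(fit(alpha=0.0))   # plain GD: still noticeably short of w_true after 100 small steps
print(fit(alpha=0.9))   # MGD: much closer to w_true with the same step size and budget
```

This mirrors the manual's characterization of MGD as "an optimization to speed-up gradient descent learning" rather than a change of objective.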
2018c
- (DL4J, 2018) ⇒ https://deeplearning4j.org/updater#momentum Retrieved: 2018-04-29.
- QUOTE: To stop the zig-zagging, we use momentum. Momentum applies its knowledge from previous steps to where the updater should go. To represent it, we use a new hyperparameter [math]\displaystyle{ \mu }[/math], or “mu”.
We’ll use the concept of momentum again later. (Don’t confuse it with moment, of which more below.)
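Why momentum damps zig-zagging can be seen by feeding the update rule a toy gradient sequence in which one coordinate alternates sign every step (the oscillating, "zig-zag" direction) while the other points the same way throughout. The sketch below is a hypothetical illustration (the gradient sequence and the values η = 0.1 and μ = 0.9 are assumptions): in the accumulated update the alternating component largely cancels, while the consistent component grows toward 1/(1 − μ) times a single step.

```python
import numpy as np

# Toy gradient sequence: coordinate 0 alternates sign every step (zig-zag),
# coordinate 1 is consistently -1 (a steady descent direction).
grads = [np.array([(-1.0) ** t, -1.0]) for t in range(20)]

def final_velocity(mu, eta=0.1):
    """Accumulate the momentum term dw := mu*dw - eta*g over the whole sequence."""
    dw = np.zeros(2)
    for g in grads:
        dw = mu * dw - eta * g
    return dw

print(final_velocity(mu=0.0))   # no momentum: just the last raw step, [0.1, 0.1]
print(final_velocity(mu=0.9))   # with momentum: zig-zag part ~0.05, steady part ~0.88
```

The zig-zag component stays small because consecutive contributions nearly cancel, while the steady component compounds, which is the damping behaviour the quote describes.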
- ↑ Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (8 October 1986). "Learning representations by back-propagating errors". Nature. 323 (6088): 533–536. doi:10.1038/323533a0.
- ↑ Sutskever, Ilya; Martens, James; Dahl, George; Hinton, Geoffrey E. (June 2013). Sanjoy Dasgupta and David McAllester, ed. "On the importance of initialization and momentum in deep learning" (PDF). In: Proceedings of the 30th International Conference on Machine Learning (ICML-13). 28. Atlanta, GA. pp. 1139–1147. Retrieved 14 January 2016.
- ↑ Sutskever, Ilya (2013). Training recurrent neural networks (PDF) (Ph.D. thesis). University of Toronto. p. 74.
- ↑ Zeiler, Matthew D. (2012). "ADADELTA: An adaptive learning rate method". arXiv:1212.5701.