Learning Rate Annealing Schedule Algorithm
A Learning Rate Annealing Schedule is a Learning Rate Schedule that gradually decreases the learning rate over the course of training, by loose analogy with simulated annealing (a minimal sketch is given after the lists below).
- Example(s):
- ...
- Counter-Example(s):
- See: Learning Rate, Hyperparameter, Gradient Descent Algorithm.
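The following is a minimal sketch of one such schedule, exponential decay; the function name and the hyperparameters `lr0` (initial learning rate) and `gamma` (multiplicative decay factor) are illustrative assumptions, not drawn from the references below.

```python
def exponential_annealing(lr0, gamma, epoch):
    """Return the annealed learning rate after `epoch` epochs.

    lr0   -- initial learning rate (illustrative hyperparameter)
    gamma -- multiplicative decay factor in (0, 1) (illustrative hyperparameter)
    """
    return lr0 * gamma ** epoch

# Usage: halve the learning rate every 10 epochs (gamma = 0.5 ** (1 / 10)).
for epoch in range(0, 31, 10):
    print(epoch, exponential_annealing(0.1, 0.5 ** 0.1, epoch))
```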
References
2012
- (Zeiler, 2012) ⇒ Matthew D. Zeiler. (2012). “ADADELTA: An Adaptive Learning Rate Method.” In: e-print arXiv:1212.5701.
- QUOTE: There have been several attempts to use heuristics for estimating a good learning rate at each iteration of gradient descent. These either attempt to speed up learning when suitable or to slow down learning near a local minima. Here we consider the latter.
When gradient descent nears a minima in the cost surface, the parameter values can oscillate back and forth around the minima. One method to prevent this is to slow down the parameter updates by decreasing the learning rate. This can be done manually when the validation accuracy appears to plateau. Alternatively, learning rate schedules have been proposed (Robbins & Monro, 1951) to automatically anneal the learning rate based on how many epochs through the data have been done. These approaches typically add additional hyperparameters to control how quickly the learning rate decays.
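As a concrete illustration of the epoch-based annealing the quote describes, here is a minimal sketch of a 1/t decay schedule in the spirit of Robbins & Monro; the decay hyperparameter `k` is an illustrative assumption rather than a value from either paper.

```python
def one_over_t_annealing(lr0, k, epoch):
    """Anneal the learning rate based on the number of completed epochs.

    lr0 -- initial learning rate (illustrative hyperparameter)
    k   -- decay hyperparameter controlling how quickly the rate shrinks
    """
    return lr0 / (1.0 + k * epoch)

# Usage: with k = 0.5 the rate falls to half its initial value after 2 epochs.
for epoch in range(5):
    print(epoch, one_over_t_annealing(0.1, 0.5, epoch))
```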
1951
- (Robinds & Monro, 1951) ⇒ H. Robinds and S. Monro (1951). “A stochastic approximation method”. In: Annals of Mathematical Statistics, vol. 22, pp. 400-407, 1951.