Semi-Stochastic Gradient Descent (SSGD) Algorithm
A Semi-Stochastic Gradient Descent (SSGD) Algorithm is a gradient descent algorithm that combines full gradient descent steps with stochastic gradient descent steps.
- AKA: S2GD Algorithm.
- Context:
- It combines the low per-iteration cost of stochastic gradients with periodic full-gradient computations that reduce the variance of the stochastic gradients (see the minimal sketch after this list).
- Example(s):
- Counter-Example(s):
- See: Optimization Algorithm, Greedy Search Algorithm, Hill-Climbing Algorithm, Exhaustive Search Algorithm, Learning as Search, Rule Learning.
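The following is a minimal NumPy sketch of an S2GD-style scheme of the kind described in the references below, applied to a synthetic least-squares problem. The objective, function name, step size, epoch count, and number of inner steps are illustrative assumptions, not values or code from the cited papers.

```python
# Minimal S2GD-style sketch (illustrative assumptions; not the authors' code):
# each outer epoch takes one expensive full-gradient ("GD") evaluation at a
# snapshot point, then many cheap stochastic ("SGD") steps whose gradients are
# corrected by the snapshot gradient to reduce their variance.
import numpy as np

def s2gd_least_squares(A, b, step_size=0.02, n_epochs=30, inner_steps=None, seed=0):
    """Minimize f(w) = (1/n) * sum_i 0.5 * (A[i] @ w - b[i])**2 with S2GD-style updates."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner_steps = inner_steps if inner_steps is not None else n
    w = np.zeros(d)
    for _ in range(n_epochs):
        snapshot = w.copy()
        full_grad = A.T @ (A @ snapshot - b) / n              # one full gradient per epoch
        for _ in range(inner_steps):
            i = rng.integers(n)                               # sample one data point
            g_i = (A[i] @ w - b[i]) * A[i]                    # stochastic gradient at w
            g_i_snap = (A[i] @ snapshot - b[i]) * A[i]        # same sample at the snapshot
            w = w - step_size * (g_i - g_i_snap + full_grad)  # variance-reduced step
    return w

# Usage on synthetic data: the recovered weights should approach w_true.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
b = A @ w_true
w_hat = s2gd_least_squares(A, b)
print(np.linalg.norm(w_hat - w_true))
```

The correction term (g_i - g_i_snap + full_grad) is an unbiased estimate of the full gradient whose variance shrinks as the iterate approaches the snapshot, which is what lets the method keep SGD's cheap iterations while approaching GD's stability.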
References
2017
- (Konecny & Richtarik, 2017) ⇒ Jakub Konecny, and Peter Richtarik (2017). "Semi-Stochastic Gradient Descent Methods". Frontiers in Applied Mathematics and Statistics, 3, 9.
- QUOTE: In order to improve upon GD, one needs to reduce the cost of computing a gradient. In order to improve upon SGD, one has to reduce the variance of the stochastic gradients. In this paper we propose and analyze a Semi-Stochastic Gradient Descent (S2GD) method. Our method combines GD and SGD steps and reaps the benefits of both algorithms: it inherits the stability and speed of GD and at the same time retains the work-efficiency of SGD.
2016
- (Liu & Takac, 2016) ⇒ Jie Liu, and Martin Takac (2016, August). "Projected Semi-Stochastic Gradient Descent Method with Mini-batch Scheme Under Weak Strong Convexity Assumption". In: Modeling and Optimization: Theory and Applications (pp. 95-117). Springer, Cham.
- QUOTE: We propose a projected semi-stochastic gradient descent method with mini-batch for improving both the theoretical complexity and practical performance of the general stochastic gradient descent method (SGD). We are able to prove linear convergence under weak strong convexity assumption. This requires no strong convexity assumption for minimizing the sum of smooth convex functions subject to a compact polyhedral set, which remains popular across machine learning community. Our PS2GD preserves the low-cost per iteration and high optimization accuracy via stochastic gradient variance-reduced technique, and admits a simple parallel implementation with mini-batches.
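A hedged sketch of the projected, mini-batch flavour described in the quotation above: the same variance-reduced step as in the earlier S2GD sketch, computed over a mini-batch and followed by a Euclidean projection. The unit box constraint, function name, and constants are assumptions chosen for illustration, not the paper's setup.

```python
# Illustrative PS2GD-style sketch (assumed problem and constants): mini-batch
# variance-reduced gradient steps followed by projection onto the unit box
# [0, 1]^d, a simple compact polyhedral set used here purely for illustration.
import numpy as np

def ps2gd_box(A, b, step_size=0.02, n_epochs=30, batch_size=10, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    w = np.zeros(d)
    for _ in range(n_epochs):
        snapshot = w.copy()
        full_grad = A.T @ (A @ snapshot - b) / n                   # full gradient at snapshot
        for _ in range(n // batch_size):
            idx = rng.choice(n, size=batch_size, replace=False)
            Ab, bb = A[idx], b[idx]
            g_batch = Ab.T @ (Ab @ w - bb) / batch_size            # mini-batch gradient at w
            g_snap = Ab.T @ (Ab @ snapshot - bb) / batch_size      # same batch at snapshot
            w = w - step_size * (g_batch - g_snap + full_grad)     # variance-reduced step
            w = np.clip(w, 0.0, 1.0)                               # projection onto [0, 1]^d
    return w
```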
2013
- (Konecny & Richtarik, 2013) ⇒ Jakub Konecny, and Peter Richtarik (2013). "Semi-Stochastic Gradient Descent Methods". arXiv preprint arXiv:1312.1666.
- QUOTE: In order to improve upon GD, one needs to reduce the cost of computing a gradient. In order to improve upon SGD, one has to reduce the variance of the stochastic gradients. In this paper we propose and analyze a Semi-Stochastic Gradient Descent (S2GD) method. Our method combines GD and SGD steps and reaps the benefits of both algorithms: it inherits the stability and speed of GD and at the same time retains the work-efficiency of SGD.