2014 EfficientMiniBatchTrainingforSt
- (Li et al., 2014) ⇒ Mu Li, Tong Zhang, Yuqiang Chen, and Alexander J. Smola. (2014). “Efficient Mini-batch Training for Stochastic Optimization.” In: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD-2014). ISBN:978-1-4503-2956-9 doi:10.1145/2623330.2623612
Subject Headings: Stochastic Gradient Descent (SGD), Mini-batch Training, Large-Scale Optimization, Convergence Rate.
Notes
Cited By
- http://scholar.google.com/scholar?q=%222014%22+Efficient+Mini-batch+Training+for+Stochastic+Optimization
- http://dl.acm.org/citation.cfm?id=2623330.2623612&preflayout=flat#citedby
Quotes
Author Keywords
Abstract
Stochastic gradient descent (SGD) is a popular technique for large-scale optimization problems in machine learning. In order to parallelize SGD, minibatch training needs to be employed to reduce the communication cost. However, an increase in minibatch size typically decreases the rate of convergence. This paper introduces a technique based on approximate optimization of a conservatively regularized objective function within each minibatch. We prove that the convergence rate does not decrease with increasing minibatch size. Experiments demonstrate that with suitable implementations of approximate optimization, the resulting algorithm can outperform standard SGD in many scenarios.
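The abstract describes the core idea only at a high level: within each minibatch, instead of taking a single gradient step, approximately minimize the minibatch loss plus a conservative regularizer that keeps the solution close to the current iterate. The Python sketch below is one plausible reading of that idea, assuming a squared-ℓ2 proximal term as the conservative regularizer and a few inner gradient steps as the approximate solver; the function names, the logistic-loss example, and all hyperparameter values are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def logistic_grad(w, X, y):
    """Gradient of the mean logistic loss on a batch, with labels in {-1, +1}."""
    margins = y * (X @ w)
    coeff = -y / (1.0 + np.exp(margins))
    return (X.T @ coeff) / X.shape[0]

def minibatch_step(w, X_batch, y_batch, grad_fn,
                   gamma=1.0, inner_lr=0.1, inner_steps=5):
    """Approximately solve  min_v  batch_loss(v) + (gamma/2)*||v - w||^2
    with a few plain gradient steps; the proximal term is the (assumed)
    conservative regularizer that keeps v close to the current iterate w."""
    anchor = w.copy()          # the conservative regularizer is centered here
    v = w.copy()               # inner iterate for the approximate solve
    for _ in range(inner_steps):
        g = grad_fn(v, X_batch, y_batch) + gamma * (v - anchor)
        v = v - inner_lr * g
    return v

# Toy usage on synthetic logistic-regression data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
w_true = rng.normal(size=20)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=1000))

w = np.zeros(20)
batch_size = 100
for epoch in range(5):
    order = rng.permutation(len(y))
    for start in range(0, len(y), batch_size):
        idx = order[start:start + batch_size]
        w = minibatch_step(w, X[idx], y[idx], logistic_grad)
```

Setting inner_steps to 1 and gamma to 0 would reduce this sketch to ordinary mini-batch SGD, which is one way to see how the approximate inner solve changes the per-batch update.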
References
| | Author | volume | Date Value | title | type | journal | titleUrl | doi | note | year |
|---|---|---|---|---|---|---|---|---|---|---|
| 2014 EfficientMiniBatchTrainingforSt | Alexander J. Smola, Tong Zhang, Mu Li, Yuqiang Chen | | | Efficient Mini-batch Training for Stochastic Optimization | | | | 10.1145/2623330.2623612 | | 2014 |