Gradient Descent-based Optimization System
A Gradient Descent-based Optimization System is an optimization system that can solve a Gradient Descent-based Optimization task (by implementing a Gradient Descent-based Optimization algorithm).
References
2014
- https://spark.apache.org/docs/1.1.0/mllib-optimization.html
- QUOTE: The simplest method to solve optimization problems of the form [math]\displaystyle{ \min_{w \in \mathbb{R}^d} f(w) }[/math] is gradient descent. Such first-order optimization methods (including gradient descent and stochastic variants thereof) are well-suited for large-scale and distributed computation.
Gradient descent methods aim to find a local minimum of a function by iteratively taking steps in the direction of steepest descent, which is the negative of the derivative (called the gradient) of the function at the current point, i.e., at the current parameter value. If the objective function [math]\displaystyle{ f }[/math] is not differentiable at all arguments, but still convex, then a sub-gradient is the natural generalization of the gradient, and assumes the role of the step direction. In any case, computing a gradient or sub-gradient of [math]\displaystyle{ f }[/math] is expensive: it requires a full pass through the complete dataset, in order to compute the contributions from all loss terms.
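The quoted update rule can be illustrated with a short sketch. The following is a minimal, hypothetical example (the objective, the `gradient_descent` helper, and all parameter values are assumptions chosen for illustration, not part of the Spark MLlib API): it minimizes a simple least-squares objective by repeatedly stepping along the negative gradient.

```python
import numpy as np

def gradient_descent(grad_f, w0, step_size=0.1, num_iters=200):
    """Iteratively step in the direction of steepest descent,
    i.e., the negative of the gradient at the current point."""
    w = w0
    for _ in range(num_iters):
        # Computing the full gradient requires one pass over the
        # complete dataset (all loss terms contribute).
        w = w - step_size * grad_f(w)
    return w

# Illustrative objective: f(w) = ||Xw - y||^2 / (2n), differentiable and convex.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true

def grad_f(w):
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

w_hat = gradient_descent(grad_f, w0=np.zeros(3))
print(w_hat)  # converges toward w_true as num_iters grows
```

For a non-differentiable but convex objective, the same loop applies with `grad_f` returning a sub-gradient (e.g., any element of the sub-differential of [math]\displaystyle{ \|w\|_1 }[/math] at the current point) instead of the gradient.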