Continuous Optimization Algorithm
A Continuous Optimization Algorithm is an optimization algorithm that can be applied by a continuous optimization system (to solve a continuous optimization task).
- Context:
- It can range from being a Combinatorial Optimization Algorithm to being a Cost Function Optimization Algorithm.
- It can range from being a Global Optimization Algorithm to being a Local Optimization Algorithm.
- It can range from being an Offline Optimization Algorithm to being an Online Optimization Algorithm.
- It can range from being an Exact Optimization Algorithm to being an Approximation Algorithm, depending on the task's optimality guarantees.
- It can range from being a Single-Variable Optimization Algorithm (SVO) to being a Multi-Variable Optimization Algorithm (MVO).
- ...
- Example(s):
- Exact Optimization Algorithms:
- Newton's Method, which uses both first and second derivatives to find the zeros of the gradient, thereby identifying critical points of the objective; a minimal sketch appears after this list.
- Conjugate Gradient Method, ideal for large-scale problems involving a quadratic objective function without constraints.
- Simplex Algorithm for linear programming, which iteratively moves towards the best vertex on the polytope of feasible solutions.
- Approximate Optimization Algorithms:
- Stochastic Gradient Descent (SGD), which approximates the true gradient by considering a subset of the data at each step, making it suitable for large datasets; see the sketch after this list.
- Genetic Algorithms, which use techniques inspired by evolutionary biology such as mutation, crossover, and selection.
- Simulated Annealing, a probabilistic technique that avoids being trapped in local optima by allowing less optimal moves according to a cooling schedule.
- …
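As a minimal, hedged sketch of the Newton's Method example above: the loop below drives the gradient to zero using a Newton step; the helper name (newton_minimize) and the toy quadratic objective are illustrative assumptions, not taken from any particular library.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-8, max_iter=50):
    """Newton's method sketch: repeat x <- x - H(x)^-1 g(x)
    until the gradient norm falls below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # Newton step
    return x

# Toy objective: f(x, y) = (x - 1)^2 + 4*(y + 2)^2, minimum at (1, -2).
grad = lambda v: np.array([2 * (v[0] - 1), 8 * (v[1] + 2)])
hess = lambda v: np.array([[2.0, 0.0], [0.0, 8.0]])
print(newton_minimize(grad, hess, x0=[10.0, 10.0]))  # approximately [1., -2.]
```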
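Similarly, the Stochastic Gradient Descent example can be illustrated with a short sketch for least-squares regression; the mini-batch size, learning rate, and synthetic data below are assumptions made only for this illustration.

```python
import numpy as np

def sgd_least_squares(X, y, lr=0.01, epochs=100, batch_size=32, seed=0):
    """SGD sketch: at each step, estimate the gradient of the squared
    error on a random mini-batch instead of the full dataset."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            Xb, yb = X[idx], y[idx]
            grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)  # mini-batch gradient
            w -= lr * grad
    return w

# Synthetic data: y = 3*x0 - 2*x1 plus noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = X @ np.array([3.0, -2.0]) + 0.1 * rng.normal(size=500)
print(sgd_least_squares(X, y))  # approximately [3., -2.]
```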
- Counter-Example(s):
- Integer Programming Algorithms, which deal with discrete variables and are not suitable for continuous domains.
- Constraint Satisfaction Algorithms, used for finding items or configurations that meet a series of constraints but do not optimize a continuous function.
- See: Monte Carlo Algorithm, Combinatorial Optimization Algorithm.
References
2009
- http://en.wikipedia.org/wiki/Optimization_%28mathematics%29#Computational_optimization_techniques
- Crudely, all the methods are divided according to the variables involved: single-variable optimization (SVO) and multi-variable optimization (MVO).
- For twice-differentiable functions, unconstrained problems can be solved by finding the points where the gradient of the objective function is zero (that is, the stationary points) and using the Hessian matrix to classify the type of each point. If the Hessian is positive definite, the point is a local minimum; if negative definite, a local maximum; and if indefinite, some kind of saddle point.
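The classification rule in the paragraph above can be made concrete with a short sketch; the classify_stationary_point helper and the toy function are assumptions for illustration only.

```python
import numpy as np

def classify_stationary_point(hessian):
    """Classify a stationary point from its (symmetric) Hessian:
    all eigenvalues > 0 -> local minimum, all < 0 -> local maximum,
    mixed signs -> saddle point."""
    eigvals = np.linalg.eigvalsh(hessian)
    if np.all(eigvals > 0):
        return "local minimum"
    if np.all(eigvals < 0):
        return "local maximum"
    return "saddle point"

# Toy function f(x, y) = x^2 - y^2 has a stationary point at the origin
# (its gradient [2x, -2y] vanishes there); its Hessian is diag(2, -2).
H = np.array([[2.0, 0.0], [0.0, -2.0]])
print(classify_stationary_point(H))  # "saddle point"
```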
- The existence of derivatives is not always assumed and many methods were devised for specific situations. The basic classes of methods, based on smoothness of the objective function, are:
- Actual methods falling somewhere among the categories above include:
- Bundle methods.
- Conjugate gradient method.
- Ellipsoid method.
- Frank–Wolfe method.
- Gradient descent, also known as steepest descent (or steepest ascent for maximization); a minimal sketch follows this list.
- Interior point methods.
- Line search - a technique for one-dimensional optimization, usually used as a subroutine within other, more general techniques.
- Nelder–Mead method, also known as the amoeba method.
- Newton's method.
- Quasi-Newton methods.
- Simplex method.
- Subgradient method - similar to the gradient method, but applicable when the objective function is not differentiable and gradients may not exist.
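As a hedged sketch combining two entries from the list above, gradient descent and line search: the code below steps along the negative gradient and picks the step length with simple backtracking; the constants and the example objective are illustrative assumptions.

```python
import numpy as np

def gradient_descent(f, grad, x0, max_iter=200, tol=1e-8):
    """Gradient descent with a backtracking (Armijo-style) line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        t = 1.0
        # Backtracking line search: shrink the step until sufficient decrease.
        while f(x - t * g) > f(x) - 0.5 * t * np.dot(g, g):
            t *= 0.5
        x = x - t * g
    return x

# Example: a simple quadratic bowl f(x, y) = x^2 + 10*y^2, minimum at the origin.
f = lambda v: v[0] ** 2 + 10 * v[1] ** 2
grad = lambda v: np.array([2 * v[0], 20 * v[1]])
print(gradient_descent(f, grad, x0=[5.0, -3.0]))  # close to [0., 0.]
```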
- Should the objective function be convex over the region of interest, then any local minimum will also be a global minimum. There exist robust, fast numerical techniques for optimizing twice differentiable convex functions.
- Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers.
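For a single equality constraint, the transformation mentioned above can be written in its standard textbook form (the symbols f, g, and λ below are generic placeholders):

```latex
\min_{x} \; f(x) \quad \text{subject to} \quad g(x) = 0
\qquad\Longrightarrow\qquad
\mathcal{L}(x, \lambda) = f(x) + \lambda\, g(x),
\quad
\nabla_x \mathcal{L} = \nabla f(x) + \lambda \nabla g(x) = 0,
\quad
\nabla_\lambda \mathcal{L} = g(x) = 0 .
```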
1999
- (Nocedal & Wright, 1999) ⇒ Jorge Nocedal, and Stephen J. Wright. (1999). “Numerical Optimization." Springer, ISBN:0387987932.