Approximation Algorithm
An Approximation Algorithm is an optimization algorithm that finds an approximate solution to an optimization task, together with a provable bound on how far that solution can be from the optimum.
- Context:
- It can find an approximate solution to an Optimization Task.
- It can be a Linear Approximation Algorithm or a Nonlinear Approximation Algorithm.
- It can be a Constant Factor Approximation Algorithm.
- …
- Example(s):
  - the 2-approximation algorithm for the Vertex Cover Problem (described below).
  - a Polynomial Time Approximation Scheme (PTAS), such as one for the Bin Packing Problem.
- Counter-Example(s):
  - an Exact Optimization Algorithm.
  - a Heuristic Algorithm, which typically offers no provable bound on solution quality.
- See: Optimization Algorithm, Local Search Algorithm, Estimation Algorithm, Randomized Algorithm, PCP Theorem, Vertex Cover Problem.
References
2016
- (Wikipedia, 2016) ⇒ https://en.wikipedia.org/wiki/Approximation_algorithm Retrieved:2016-4-12.
- In computer science and operations research, approximation algorithms are algorithms used to find approximate solutions to optimization problems. Approximation algorithms are often associated with NP-hard problems; since it is unlikely that there can ever be efficient polynomial-time exact algorithms solving NP-hard problems, one settles for polynomial-time sub-optimal solutions. Unlike heuristics, which usually only find reasonably good solutions reasonably fast, one wants provable solution quality and provable run-time bounds. Ideally, the approximation is optimal up to a small constant factor (for instance within 5% of the optimal solution). Approximation algorithms are increasingly being used for problems where exact polynomial-time algorithms are known but are too expensive due to the input size.
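The quality guarantee can be stated precisely. As a minimal formal sketch (standard textbook notation, not part of the quoted passage), an algorithm A is a ρ-approximation algorithm for a minimization problem if it runs in polynomial time and satisfies, on every instance I:

```latex
% rho-approximation guarantee for a minimization problem (rho >= 1):
\[
  A(I) \;\le\; \rho \cdot \mathrm{OPT}(I)
\]
% For a maximization problem the analogous guarantee is
% A(I) \ge \mathrm{OPT}(I) / \rho.
```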
A typical example for an approximation algorithm is the one for vertex cover in graphs: find an uncovered edge and add both endpoints to the vertex cover, until none remain. It is clear that the resulting cover is at most twice as large as the optimal one. This is a constant factor approximation algorithm with a factor of 2.
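To make the vertex cover procedure concrete, here is a minimal Python sketch (the function name is ours, not from the quoted passage):

```python
def vertex_cover_2approx(edges):
    """Return a vertex cover at most twice the size of an optimal one.

    edges: iterable of (u, v) pairs describing an undirected graph.
    """
    cover = set()
    for u, v in edges:
        # If this edge is still uncovered, add both endpoints.
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover

# Example: the path 0-1-2-3 has optimal cover {1, 2} of size 2;
# the sketch returns a cover of size at most 4.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```

The factor of 2 follows because the chosen edges form a matching, and any vertex cover must contain at least one endpoint of every matched edge.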
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial factor unless P = NP, such as the maximum clique problem.
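For reference, the PTAS guarantee just mentioned can be written out explicitly; the following is a standard definition, not part of the quoted passage:

```latex
% A PTAS is a family of algorithms {A_eps}, one for each eps > 0, with
\[
  A_{\varepsilon}(I) \;\le\; (1+\varepsilon)\cdot \mathrm{OPT}(I)
\]
% for every instance I, running in time polynomial in |I| for each
% fixed eps (the degree of the polynomial may grow with 1/eps,
% e.g. O(n^{1/\varepsilon})).
```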
NP-hard problems can often be expressed as integer programs (IP) and solved exactly in exponential time. Many approximation algorithms emerge from the linear programming relaxation of the integer program.
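As an illustration of the LP-relaxation approach, here is a hedged Python sketch for Vertex Cover (scipy is an assumed dependency; the function name is ours). It solves the LP relaxation of the natural IP and applies the classic half-rounding:

```python
# IP: minimize sum_v x_v  s.t.  x_u + x_v >= 1 per edge, x_v in {0, 1}.
# LP relaxation: allow 0 <= x_v <= 1, then round x_v >= 1/2 up to 1.
import numpy as np
from scipy.optimize import linprog

def vertex_cover_lp_rounding(n, edges):
    """2-approximate vertex cover for a graph on vertices 0..n-1."""
    c = np.ones(n)                       # minimize number of chosen vertices
    # Each edge (u, v) gives x_u + x_v >= 1, i.e. -x_u - x_v <= -1.
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0
    b_ub = -np.ones(len(edges))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    # At least one endpoint of each edge has LP value >= 1/2, so the
    # rounded set covers every edge; rounding at most doubles the LP
    # optimum, which lower-bounds the IP optimum.
    return {v for v in range(n) if res.x[v] >= 0.5}

print(vertex_cover_lp_rounding(4, [(0, 1), (1, 2), (2, 3)]))
```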
Not all approximation algorithms are suitable for all practical applications. They often use IP/LP/semidefinite solvers, complex data structures, or sophisticated algorithmic techniques, which lead to difficult implementation problems. Also, some approximation algorithms have impractical running times even though they are polynomial time, for example O(n^2156).
Yet the study of even very expensive algorithms is not a completely theoretical pursuit, as they can yield valuable insights. A classic example is the initial PTAS for Euclidean TSP due to Sanjeev Arora, which had a prohibitive running time; yet within a year, Arora refined the ideas into a linear-time algorithm. Such algorithms are also worthwhile in applications where the running times and cost can be justified, e.g., computational biology, financial engineering, transportation planning, and inventory management. In such scenarios they must compete with the corresponding direct IP formulations.
Another limitation of the approach is that it applies only to optimization problems and not to "pure" decision problems like satisfiability, although it is often possible to conceive optimization versions of such problems, such as the maximum satisfiability problem (Max SAT).
Inapproximability has been a fruitful area of research in computational complexity theory since the 1990 result of Feige, Goldwasser, Lovász, Safra and Szegedy on the inapproximability of Independent Set. After Arora et al. proved the PCP theorem a year later, it was shown that Johnson's 1974 approximation algorithms for Max SAT, Set Cover, Independent Set and Coloring all achieve the optimal approximation ratio, assuming P ≠ NP.
2013
- (Fomin & Kaski, 2013) ⇒ Fedor V. Fomin, and Petteri Kaski. (2013). “Exact Exponential Algorithms.” In: Communications of the ACM, 56(3). doi:10.1145/2428556.2428575
- QUOTE: To cope with intractability, advanced techniques such as parameterized algorithms [10,13,31] (that isolate the exponential complexity to a specific structural parameter of a problem instance) and approximation algorithms [34] (that produce a solution whose value is guaranteed to be within a known factor of the value of an optimum solution) have been developed. But what can we say about finding exact solutions of non-parameterized instances of intractable problems? At first glance, the general case of an NP-complete problem is a formidable opponent: when faced with a problem whose instances can express arbitrary nondeterministic computation, how is one to proceed at solving a given instance, apart from the obvious exhaustive search that "tries out all the possibilities"?
2001
- (Vazirani, 2001) ⇒ Vijay V. Vazirani. (2001). “Approximation Algorithms.” Springer. ISBN:3540653678