Intractable Computational Task
An Intractable Computational Task is a computational task that lacks a polynomial-time solution and can therefore be solved in practice only for the smallest inputs.
- AKA: Intractable Problem.
- Context:
- It can (typically) be solved by a Computationally Expensive Algorithm.
- Example(s):
- Counter-Example(s):
- See: Computational Complexity Theory, Computability Theory, Theory of Computation, Theoretical Computer Science, Computational Problems, Complexity Class, Algorithm, Models of Computation, Computational Complexity, Communication Complexity, Logic Gate, Circuit Complexity, Parallel Computing.
References
2018a
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Computational_complexity_theory#Intractability Retrieved:2018-7-21.
- A problem that can be solved in theory (e.g. given large but finite resources, especially time), but for which in practice any solution takes too many resources to be useful, is known as an intractable problem. [1] Conversely, a problem that can be solved in practice is called a tractable problem, literally "a problem that can be handled". The term infeasible (literally "cannot be done") is sometimes used interchangeably with intractable, though this risks confusion with a feasible solution in mathematical optimization.
Tractable problems are frequently identified with problems that have polynomial-time solutions (P, PTIME); this is known as the Cobham–Edmonds thesis. Problems that are known to be intractable in this sense include those that are EXPTIME-hard. If NP is not the same as P, then NP-hard problems are also intractable in this sense.
However, this identification is inexact: a polynomial-time solution with large exponent or large constant term grows quickly, and may be impractical for practical size problems; conversely, an exponential-time solution that grows slowly may be practical on realistic input, or a solution that takes a long time in the worst case may take a short time in most cases or the average case, and thus still be practical. Saying that a problem is not in P does not imply that all large cases of the problem are hard or even that most of them are. For example, the decision problem in Presburger arithmetic has been shown not to be in P, yet algorithms have been written that solve the problem in reasonable times in most cases. Similarly, algorithms can solve the NP-complete knapsack problem over a wide range of sizes in less than quadratic time and SAT solvers routinely handle large instances of the NP-complete Boolean satisfiability problem.
To see why exponential-time algorithms are generally unusable in practice, consider a program that makes 2^n operations before halting. For small n, say 100, and assuming for the sake of example that the computer does 10^12 operations each second, the program would run for about 4 × 10^10 years, which is the same order of magnitude as the age of the universe. Even with a much faster computer, the program would only be useful for very small instances and in that sense the intractability of a problem is somewhat independent of technological progress. However, an exponential-time algorithm that takes 1.0001^n operations is practical until n gets relatively large.
Similarly, a polynomial time algorithm is not always practical. If its running time is, say, n^15, it is unreasonable to consider it efficient and it is still useless except on small instances. Indeed, in practice even n^3 or n^2 algorithms are often impractical on realistic sizes of problems.
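The excerpt's point that "not in P" does not mean "always hard in practice" can be illustrated with the knapsack problem it mentions: although 0/1 knapsack is NP-complete, the textbook dynamic-programming algorithm runs in O(n · W) time (pseudo-polynomial in the capacity W) and dispatches realistic instances almost instantly. A minimal sketch; the item values and weights below are made-up illustration data:

```python
# The 0/1 knapsack problem is NP-complete, yet the standard
# dynamic-programming algorithm runs in O(n * W) time, where W is the
# integer capacity, and easily handles instances of realistic size.
def knapsack(values, weights, capacity):
    """Return the maximum total value packable within the given capacity."""
    best = [0] * (capacity + 1)  # best[c] = max value using capacity c
    for v, w in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

The worst-case exponential behavior only appears when the capacity itself is exponentially large in the input length, which is exactly the gap between worst-case complexity and practical behavior that the excerpt describes.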
2018b
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Combinatorial_explosion Retrieved:2018-7-21.
- In mathematics, a combinatorial explosion is the rapid growth of the complexity of a problem due to how the combinatorics of the problem is affected by the input, constraints, and bounds of the problem. Combinatorial explosion is sometimes used to justify the intractability of certain problems.[2] [3] Examples of such problems include certain mathematical functions, the analysis of some puzzles and games, and some pathological examples which can be modelled as the Ackermann function.
2018c
- (Wikibooks, 2018) ⇒ https://en.wikibooks.org/wiki/Foundations_of_Computer_Science/Limits_of_Computing#Intractable_Problems Retrieved:2018-7-21.
- QUOTE: Halting problem is hard because it is not solvable algorithmically even in principle. There are other hard problems that are solvable in principle but in practice they are close to being impossible to solve. As you can see we can categorize problems by the performance of the best known algorithms. If a problem can be solved using a fast algorithm, the problem is easy because we can use a computer to solve it fast. On the contrary if the best known algorithm we know takes a long time to solve a problem, it is hard because computers cannot solve it fast.
Using algorithm complexity theory we can put each algorithm into a particular category according to the algorithm's complexity. If the big-O notation of a category contains only polynomial terms, the problems solvable using algorithms in this category are called P problems (Polynomial time solutions exist), such as [math]\displaystyle{ O(1) }[/math], [math]\displaystyle{ O(log_2(N)) }[/math], [math]\displaystyle{ O(N) }[/math], and [math]\displaystyle{ O(N^2) }[/math]. The P problems are the easy problems for computers. Among the problems without a polynomial time solution there are problems whose solution, if we can guess it, can be verified in polynomial time. For example, the integer factorization (or prime decomposition) problem has no known polynomial time solution, but given an answer we can verify it very quickly by simply multiplying the factors and comparing the result to the integer. These types of problems are called NP (Non-deterministic Polynomial) problems.
Collectively we call problems that take too long to solve intractable problems, which include problems whose best known algorithm runs in exponential time ([math]\displaystyle{ O(2^N) }[/math]) and problems with polynomial time solutions whose exponent is too large, e.g. [math]\displaystyle{ O(N^{15}) }[/math].
If a problem's best algorithmic solution is in [math]\displaystyle{ O(2^N) }[/math], then when [math]\displaystyle{ N=100 }[/math] and a computer does [math]\displaystyle{ 10^{12} }[/math] operations per second, it would take about [math]\displaystyle{ 4 \times 10^{10} }[/math] years (roughly the age of the universe) to solve the problem on that computer.
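The age-of-the-universe figure quoted above (and in the Wikipedia excerpt) is easy to reproduce directly. A minimal sketch, assuming the hypothetical machine performing 10^12 operations per second:

```python
# Back-of-the-envelope check of the 2^100 figure: how many years a
# hypothetical machine doing 10^12 operations/second needs for a
# given operation count.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def years_to_run(operations, ops_per_second=10**12):
    """Wall-clock years needed to perform `operations` operations."""
    return operations / ops_per_second / SECONDS_PER_YEAR

print(years_to_run(2**100))           # about 4e10 years, as quoted
print(years_to_run(1.0001**100_000))  # a slow-growing exponential: negligible
```

The second call illustrates the excerpt's caveat that an exponential like 1.0001^n stays practical until n gets quite large.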
2018d
- (Sedgewick & Wayne, 2018) ⇒ Robert Sedgewick and Kevin Wayne (2000-2018). https://algs4.cs.princeton.edu/66intractability/ Retrieved:2018-7-21.
- QUOTE: Polynomial time. We have analyzed the running time of an algorithm as a function of its input size. When solving a given problem, we prefer an algorithm that takes 8 N log N steps to one that takes 3 N^2 steps, since when N is large, the first algorithm is significantly faster than the second. The second algorithm will ultimately solve the same problem (but it might take hours instead of seconds). In contrast, an exponential time algorithm has a different qualitative behavior. For example, a brute force algorithm for the TSP might take N! steps. Even if each electron in the universe (10^79) had the power of today's fastest supercomputer (10^12 instructions per second), and each worked for the life of the universe (10^17 seconds) on solving the problem, it would barely make a dent in solving a problem with N = 1,000 since 1000! >> 10^1000 >> 10^79 * 10^12 * 10^17. Exponential growth dwarfs technological change. We refer to any algorithm whose running time is bounded by a polynomial in the input size (e.g., N log N or N^2) as a polynomial-time algorithm. We say that a problem is intractable if there is no polynomial-time algorithm for the problem.
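The factorial comparison in the quote can be verified numerically: the electron-supercomputer-lifetime budget is 10^79 × 10^12 × 10^17 = 10^108 operations, while log10(1000!) is in the thousands. A minimal check using the standard-library gamma function:

```python
import math

# log10(1000!) computed via lgamma, since lgamma(n + 1) = ln(n!).
log10_fact_1000 = math.lgamma(1001) / math.log(10)

# The quote's total operation budget: 10^79 electrons * 10^12 ops/s
# * 10^17 seconds = 10^108 operations.
log10_budget = 79 + 12 + 17

print(log10_fact_1000)                  # about 2567.6
print(log10_fact_1000 > 1000 > log10_budget)  # True: 1000! >> 10^1000 >> 10^108
```

So a brute-force N! search at N = 1,000 exceeds the quoted budget by over 2,400 orders of magnitude, which is the sense in which "exponential growth dwarfs technological change."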
- ↑ Hopcroft, J.E., Motwani, R. and Ullman, J.D. (2007) Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Boston/San Francisco/New York (page 368)
- ↑ Krippendorff, Klaus. "Combinatorial Explosion". Web Dictionary of Cybernetics and Systems. PRINCIPIA CYBERNETICA WEB. Retrieved 29 November 2010.
- ↑ "Combinatorial Explosion". http://intelligence.worldofcomputing/combinatorial-explosion