Polynomial Time Complexity (PTIME) Measure
A Polynomial Time Complexity (PTIME) Measure is a Time Complexity Performance Measure that is defined as: $T(n) = O(n^k)$ for some constant $k \geq 1$, i.e., $T(n) = 2^{O(\log n)} = poly(n)$ (illustrated by the sketch after the See-also list below).
- Context:
- It can be evaluated by a Polynomial Time Task.
- Example(s):
- …
- Counter-Example(s):
- a Constant Time Complexity Measure.
- a Cubic Time Complexity Measure.
- an Exponential Time Complexity Measure.
- a Factorial Time Complexity Measure.
- a Linear Time Complexity Measure.
- a Logarithmic Time Complexity Measure.
- a Polylogarithmic Time Complexity Measure.
- a Quadratic Time Complexity Measure.
- a Turing Machine Time Complexity (DTIME) Measure.
- a Space Complexity.
- …
- See: Time Hierarchy Theorem, Bit, Constant Factor, Worst-Case Complexity, Average-Case Complexity, Mathematical Function, Asymptotic Analysis, Big O Notation.
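To make the definition concrete, the following minimal Python sketch (an illustrative addition, not drawn from the cited sources; the exponent $k = 3$ and the sample sizes are assumptions) tabulates how a polynomial bound $n^k = 2^{k \log_2 n} = 2^{O(\log n)}$ grows compared with the exponential bound $2^n$:

```python
# Minimal sketch: contrast a polynomial bound T(n) = n^k, which equals
# 2^(k * log2(n)) = 2^O(log n), with the exponential bound T(n) = 2^n.
# The exponent k = 3 and the sample sizes below are illustrative assumptions.

def poly_time(n: int, k: int = 3) -> int:
    """Polynomial bound: T(n) = n^k."""
    return n ** k

def exp_time(n: int) -> int:
    """Exponential bound: T(n) = 2^n, which is not 2^O(log n)."""
    return 2 ** n

if __name__ == "__main__":
    for n in (10, 20, 40, 80):
        print(f"n={n:3d}  n^3={poly_time(n):>12,}  2^n={exp_time(n):>30,}")
```

Even at $n = 80$ the polynomial bound stays near half a million, while $2^{80}$ already has 25 digits; this gap is why the polynomial/exponential boundary is commonly used as the cutoff for practical solvability.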
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Time_complexity Retrieved:2018-4-1.
- In computer science, the time complexity is the computational complexity that measures or estimates the time taken for running an algorithm. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that an elementary operation takes a fixed amount of time to perform. Thus, the amount of time taken and the number of elementary operations performed by the algorithm differ by at most a constant factor.
Since an algorithm's running time may vary with different inputs of the same size, one commonly considers the worst-case time complexity, which is the maximum amount of time taken on inputs of a given size. Less common, and usually specified explicitly, is the average-case complexity, which is the average of the time taken on inputs of a given size (this makes sense, as there is only a finite number of possible inputs of a given size).
In both cases, the time complexity is generally expressed as a function of the size of the input.[1] Since this function is generally difficult to compute exactly, and the running time is usually not critical for small input, one commonly focuses on the behavior of the complexity when the input size increases; that is, on the asymptotic behavior of the complexity. Therefore, the time complexity is commonly expressed using big O notation, typically $O(n)$, $O(n \log n)$, $O(n^\alpha)$, $O(2^n)$, etc., where $n$ is the input size measured by the number of bits needed to represent it.
Algorithm complexities are classified by the function appearing in the big O notation. For example, an algorithm with time complexity $O(n)$ is a linear time algorithm, and an algorithm with time complexity $O(n^\alpha)$ for some constant $\alpha \ge 1$ is a polynomial time algorithm.
- ↑ Sipser, Michael (2006). Introduction to the Theory of Computation. Course Technology Inc. ISBN 0-619-21764-2.
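The operation-counting view in the quoted passage can be illustrated with a short Python sketch (an illustrative addition, not part of the quoted article; linear search and the comparison-counting convention are assumptions): for a linear search over $n$ items, the worst case costs $n$ comparisons, while the average over all target positions is about $(n+1)/2$, so both are within a constant factor of $n$, i.e., $O(n)$.

```python
# Sketch: count elementary operations (element comparisons) in a linear
# search, to contrast worst-case and average-case time complexity on
# inputs of the same size n.

def linear_search_comparisons(items, target):
    """Return the number of comparisons a linear search performs."""
    comparisons = 0
    for item in items:
        comparisons += 1
        if item == target:
            break
    return comparisons

if __name__ == "__main__":
    n = 1000
    items = list(range(n))
    worst = linear_search_comparisons(items, n)  # target absent: n comparisons
    # Average over every possible target position: about (n + 1) / 2.
    average = sum(linear_search_comparisons(items, t) for t in items) / n
    print(f"worst case: {worst} comparisons; average case: {average:.1f}")
```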
2003
- (Hromkovic, 2003) ⇒ Juraj Hromkovic (2003). "Theoretical Computer Science: Introduction to Automata, Computability, Complexity, Algorithmics, Randomization, Communication, and Cryptography". Springer Science & Business Media.
- QUOTE: In contrast to the computability theory that provides a well-developed methodology for classifying problems into algorithmically solvable and algorithmically unsolvable, one has been unable to develop any successful mathematical method for classifying algorithmic problems with respect to practical solvability (i.e., membership in P) in the complexity theory. Sufficiently powerful techniques for proving lower bounds on the complexity of concrete problems are missing. The following fact shows how far we are from proving that a concrete problem from NP cannot be solved in polynomial time. The highest known lower bound on the time complexity of multitape Turing machines for solving a concrete problem from NP is the trivial lower bound $\Omega(n)$ (i.e., we are unable to prove a lower bound $\Omega(n \cdot \log n)$ for a problem from NP), though the best known algorithms for thousands of problems in NP run in exponential time. Hence, our experience lets us believe that $\Omega(2^n)$ is a lower bound on the time complexity of many problems, but we are unable to prove a higher lower bound than $\Omega(n)$ for them.
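As a concrete illustration of the gap described in the quote (an illustrative addition, not from Hromkovic; the SUBSET-SUM instance below is made up), a brute-force solver for the NP problem SUBSET-SUM must at least read its $n$ input numbers, which gives the trivial $\Omega(n)$ lower bound, yet the exhaustive search it performs examines up to all $2^n$ subsets:

```python
# Sketch: brute-force SUBSET-SUM, an NP problem. Reading the input alone
# takes Omega(n) time (the trivial lower bound), yet this exhaustive
# search inspects up to 2^n subsets -- exponential time.

from itertools import combinations

def subset_sum_brute_force(numbers, target):
    """Return a subset summing to target, or None, in O(2^n) time."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

if __name__ == "__main__":
    print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))  # e.g., (4, 5)
```

No known algorithm solves this problem in time polynomial in $n$ for all instances, yet no super-linear lower bound has been proved for it either, which is exactly the gap the quoted passage describes.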