Run Time Complexity Evaluation Algorithm
A Run Time Complexity Evaluation Algorithm is an algorithm that evaluates the worst-case run-time complexity of a given algorithm (typically specified as Pseudocode).
- AKA: Evaluating Run-Time Complexity.
- Context:
- It can be implemented by a Time Complexity Evaluation System to solve a Time Complexity Evaluation Task.
- Example(s):
- …
- Counter-Example(s):
- See: Polynomial Time, Mathematical Induction, Reduction (Mathematics), Rule-of-Thumb, Elegance, Trivial (Mathematics), Pseudocode, DTIME, Instruction (Computer Science), Deterministic System (Mathematics), Quantum Computer, Program Loop, Iteration.
References
2018
- (Wikipedia, 2018) ⇒ https://en.wikipedia.org/wiki/Analysis_of_algorithms#Evaluating_run-time_complexity Retrieved:2018-4-1.
- The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:
1 ''get a positive integer n from input''
2 '''if''' n > 10
3    '''print''' "This might take a while..."
4 '''for''' i = 1 '''to''' n
5    '''for''' j = 1 '''to''' i
6       '''print''' i * j
7 '''print''' "Done!"
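As an illustrative aid (not part of the quoted source), the pseudocode can be transcribed into Python with a counter per numbered step, so the execution counts used in the analysis below can be inspected directly; the function name count_steps and the steps dictionary are hypothetical choices for this sketch.

```python
# A minimal Python sketch of the pseudocode above, instrumented so that
# steps[k] records how many times numbered step k executes (print calls are
# noted in comments rather than performed, to keep the output quiet).
def count_steps(n):
    steps = {k: 0 for k in range(1, 8)}

    steps[1] += 1                      # step 1: get a positive integer n from input
    steps[2] += 1                      # step 2: test n > 10
    if n > 10:
        steps[3] += 1                  # step 3: print "This might take a while..."
    for i in range(1, n + 1):
        steps[4] += 1                  # step 4: outer loop test that enters the loop
        for j in range(1, i + 1):
            steps[5] += 1              # step 5: inner loop test that enters the loop
            steps[6] += 1              # step 6: inner loop body (print i * j)
        steps[5] += 1                  # step 5: inner loop test that terminates the loop
    steps[4] += 1                      # step 4: outer loop test that terminates the loop
    steps[7] += 1                      # step 7: print "Done!"
    return steps

# Example: for n = 4 the outer test runs n + 1 = 5 times, the inner body
# 1 + 2 + 3 + 4 = 10 times, and the inner test 2 + 3 + 4 + 5 = 14 times.
print(count_steps(4))
```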
- A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. The specific amount of time to carry out a given instruction will vary depending on which instruction is being executed and which computer is executing it, but on a conventional computer, this amount will be deterministic. [1] Say that the actions carried out in step 1 are considered to consume time T1, step 2 uses time T2, and so forth.
In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1-3 and step 7 is:
: [math]\displaystyle{ T_1 + T_2 + T_3 + T_7. \, }[/math]
The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times (note that an extra step is required to terminate the for loop, hence n + 1 and not n executions), which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time. Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression:
: [math]\displaystyle{ T_6 + 2T_6 + 3T_6 + \cdots + (n-1) T_6 + n T_6 }[/math]
which can be factored [2] as
: [math]\displaystyle{ T_6 \left[ 1 + 2 + 3 + \cdots + (n-1) + n \right] = T_6 \left[ \frac{1}{2} (n^2 + n) \right] }[/math]
The total time required to run the outer loop test can be evaluated similarly:
: [math]\displaystyle{ \begin{align} & 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1) T_5 + n T_5 + (n + 1) T_5\\ =\ &T_5 + 2T_5 + 3T_5 + 4T_5 + \cdots + (n-1)T_5 + nT_5 + (n+1)T_5 - T_5 \end{align} }[/math]
which can be factored as
: [math]\displaystyle{ \begin{align} & T_5 \left[ 1+2+3+\cdots + (n-1) + n + (n + 1) \right] - T_5 \\ =& \left[ \frac{1}{2} (n^2 + n) \right] T_5 + (n + 1)T_5 - T_5 \\ =& T_5 \left[ \frac{1}{2} (n^2 + n) \right] + n T_5 \\ =& \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 \end{align} }[/math]
Therefore, the total running time for this algorithm is:
: [math]\displaystyle{ f(n) = T_1 + T_2 + T_3 + T_7 + (n + 1)T_4 + \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2+3n) \right] T_5 }[/math]
which reduces to
: [math]\displaystyle{ f(n) = \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 }[/math]
As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, [math]\displaystyle{ n^2 }[/math] is the highest-order term, so one can conclude that [math]\displaystyle{ f(n) = O(n^2) }[/math]. Formally this can be proven as follows:
Prove that [math]\displaystyle{ \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2,\ n \ge n_0 }[/math]
[math]\displaystyle{ \begin{align} &\left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7\\ \le &( n^2 + n )T_6 + (n^2 + 3n )T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \ (\text{for } n \ge 0 ) \end{align} }[/math]
Let k be a constant greater than or equal to each of [T1..T7]
[math]\displaystyle{ \begin{align} &T_6( n^2 + n ) + T_5( n^2 + 3n ) + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le k( n^2 + n ) + k( n^2 + 3n ) + kn + 5k\\ = &2kn^2 + 5kn + 5k \le 2kn^2 + 5kn^2 + 5kn^2 \ (\text{for } n \ge 1) = 12kn^2 \end{align} }[/math]
Therefore [math]\displaystyle{ \left[ \frac{1}{2} (n^2 + n) \right] T_6 + \left[ \frac{1}{2} (n^2 + 3n) \right] T_5 + (n + 1)T_4 + T_1 + T_2 + T_3 + T_7 \le cn^2, n \ge n_0 \text{ for } c = 12k, n_0 = 1 }[/math]
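As a complement to the formal proof, the closed form for f(n) and the bound with c = 12k, n0 = 1 can be checked numerically. The sketch below is an added illustration that assumes, for concreteness, that every step time T1 through T7 equals one unit (so k = 1); the helper names f_closed_form and f_by_counting are not from the source.

```python
# Numeric sanity check (illustrative, assuming T1 = ... = T7 = 1, hence k = 1):
# the closed form for f(n) derived above should equal a direct count of step
# executions, and both should stay within 12*k*n^2 for n >= 1.
def f_closed_form(n):
    T1 = T2 = T3 = T4 = T5 = T6 = T7 = 1
    # n^2 + n and n^2 + 3n are always even, so integer division is exact.
    return ((n * n + n) // 2) * T6 + ((n * n + 3 * n) // 2) * T5 \
        + (n + 1) * T4 + T1 + T2 + T3 + T7

def f_by_counting(n):
    # Steps 1, 2, 3, 7 run once each (worst case); the outer test runs n + 1
    # times; on outer pass i, the inner test runs i + 1 times and the body i times.
    total = 4 + (n + 1)
    for i in range(1, n + 1):
        total += (i + 1) + i
    return total

k = 1
for n in range(1, 100):
    assert f_closed_form(n) == f_by_counting(n)
    assert f_closed_form(n) <= 12 * k * n * n      # c = 12k, n0 = 1
print("closed form matches direct counting and respects 12*k*n^2 for n = 1..99")
```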
A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's running time breaks down as follows: [3]
[math]\displaystyle{ 4+\sum_{i=1}^n i\leq 4+\sum_{i=1}^n n=4+n^2\leq5n^2 \ (\text{for } n \ge 1) =O(n^2). }[/math]
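The unit-cost bound can be checked the same way; the following added sketch verifies that [math]\displaystyle{ 4+\sum_{i=1}^n i \leq 5n^2 }[/math] holds over a range of n (the choice of range is arbitrary).

```python
# Check of the unit-cost estimate 4 + sum_{i=1}^{n} i <= 5*n^2 for n >= 1,
# following the breakdown given in the text (an added illustration).
for n in range(1, 1000):
    cost = 4 + sum(range(1, n + 1))
    assert cost <= 5 * n * n
print("4 + sum(1..n) <= 5*n^2 holds for n = 1..999")
```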
- ↑ However, this is not the case with a quantum computer
- ↑ It can be proven by induction that [math]\displaystyle{ 1 + 2 + 3 + \cdots + (n-1) + n = \frac{n(n+1)}{2} }[/math]; a short induction sketch follows these notes
- ↑ This approach, unlike the above approach, neglects the constant time consumed by the loop tests which terminate their respective loops, but it is trivial to prove that such omission does not affect the final result
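For completeness, here is a brief induction sketch for the identity cited in note 2 (an added illustration, not part of the quoted source): for n = 1 both sides equal 1; assuming the identity holds for n, then
: [math]\displaystyle{ 1 + 2 + \cdots + n + (n+1) = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2}, }[/math]
which is the identity for n + 1.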