Amdahl's Law


An Amdahl's Law is a theoretical model that predicts the maximum speedup attainable in a parallel system based on the fraction of the computation that must execute sequentially.



References

2011

  • http://en.wikipedia.org/wiki/Amdahl%27s_law
    • Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.

      The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining portion of 19 hours (95% of the total) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the speedup is limited to at most 20×.

      … Amdahl's law states that the overall speedup of applying the improvement will be: [math]\frac{1}{(1 - P) + \frac{P}{S}}[/math].

  • (Shavit, 2011) ⇒ Nir Shavit. (2011). “Data Structures in the Multicore Age.” In: Communications of the ACM, 54(3). doi:10.1145/1897852.1897873
    • QUOTE: … The formula we need in order to answer this question is called Amdahl's Law. It captures the idea that the extent to which we can speed up any complex computation is limited by how much of the computation must be executed sequentially. … Define the speedup S of a computation to be the ratio between the time it takes one processor to complete the computation (as measured by a wall clock) versus the time it takes n concurrent processors to complete the same computation. Amdahl's Law characterizes the maximum speedup S that can be achieved by n processors collaborating on an application, where p is the fraction of the computation that can be executed in parallel.
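
Both excerpts state the same bound: writing p for the fraction of the computation that can be executed in parallel and n for the number of processors (Shavit's notation), the maximum speedup is 1 / ((1 − p) + p/n). The following is a minimal Python sketch, not taken from either cited source, that evaluates this bound and reproduces the 20-hour example from the Wikipedia excerpt; the function name amdahl_speedup and the chosen processor counts are illustrative assumptions.

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Maximum speedup on n processors when a fraction p of the
    computation can be executed in parallel (Amdahl's Law in the
    form quoted above: 1 / ((1 - p) + p / n))."""
    return 1.0 / ((1.0 - p) + p / n)


if __name__ == "__main__":
    # Wikipedia's example: a 20-hour job in which 1 hour is inherently
    # sequential, so p = 19/20 = 0.95 of the work can be parallelized.
    p = 0.95
    for n in (1, 2, 16, 256, 4096):
        print(f"n = {n:5d}  speedup <= {amdahl_speedup(p, n):6.2f}")

    # As n grows without bound, the speedup approaches 1 / (1 - p) = 20,
    # matching the "limited to at most 20x" claim in the quoted example.
```

Note that even with 4,096 processors the bound is only about 19.9×, illustrating how quickly the sequential fraction dominates once n is large.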