Amdahl's Law
An Amdahl's Law is a theoretical model that predicts the maximum speedup of a parallel system based on the sequential fraction of its computation.
- Context:
- It can (typically) calculate Performance Limits in parallel computing.
- It can (often) guide System Design decisions for parallelization.
- It can (often) inform Resource Allocation in multiprocessor systems.
- ...
- It can range from being a Simple Speedup Calculator to being a Complex System Model, based on its application scope.
- It can range from being a Program Analysis Tool to being a System Design Guide, depending on its usage context.
- ...
- It can express Speedup Factors using the formula 1/((1-P) + P/S), where P is the fraction of the computation that is improved and S is the speedup of that fraction (see the sketch after this list).
- It can identify Parallelization Bottlenecks in system design.
- It can evaluate Cost Effectiveness of adding processors.
- It can predict Performance Scaling limits.
- It can guide Optimization Strategy decisions.
- It can influence Hardware Investment choices.
- ...
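The formula above can be evaluated directly. Below is a minimal illustrative sketch in Python; the function name amdahl_speedup and the example values are ours, not taken from the cited sources:

```python
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup by Amdahl's Law: 1 / ((1 - P) + P / S),
    where p is the fraction of work that is improved and
    s is the speedup factor applied to that fraction."""
    return 1.0 / ((1.0 - p) + p / s)

# Example: 95% of the work parallelizes; s is the processor count.
for s in (2, 8, 64, 1024):
    print(f"{s:4d} processors -> speedup {amdahl_speedup(0.95, s):5.2f}x")

# The limit as s grows without bound is 1 / (1 - p) = 20x.
print(f"upper bound: {1.0 / (1.0 - 0.95):.0f}x")
```

Note how the speedup saturates near the 1/(1 - P) bound; this saturation is the Parallelization Bottleneck that the Context items above describe.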
- Examples:
- Theoretical Applications, such as:
- Program Analysis showing speedup limits with parallel portions.
- System Design evaluating processor count impact.
- Resource Planning for computing clusters.
- Practical Usages, such as:
- Database Systems optimizing query parallelization.
- Scientific Computing balancing resource allocation.
- Cloud Computing planning instance scaling.
- ...
- Counter-Example(s):
- Gustafson's Law, which considers scalable workloads rather than fixed problem sizes (see the contrast sketched below).
- Little's Law, which describes queue behavior rather than parallel performance.
- Universal Scalability Law, which includes communication overheads beyond Amdahl's basic model.
- Moore's Law, which predicts transistor density growth rather than parallel efficiency.
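For contrast with Gustafson's Law, its standard scaled-speedup form (not quoted in the sources below; α denotes the serial fraction of the scaled workload) is:

$$ S_{\text{Gustafson}}(N) = N - \alpha\,(N - 1), $$

which grows roughly linearly in the processor count N, whereas Amdahl's fixed-size speedup saturates at 1/(1 - P).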
- See: Parallel Computing, Performance Analysis, System Architecture, Scalability Theory, Processor Utilization, Computing Efficiency.
References
2011
- (Wikipedia, 2011) ⇒ http://en.wikipedia.org/wiki/Amdahl%27s_law
- QUOTE: Amdahl's law, also known as Amdahl's argument, is named after computer architect Gene Amdahl, and is used to find the maximum expected improvement to an overall system when only part of the system is improved. It is often used in parallel computing to predict the theoretical maximum speedup using multiple processors.
The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program. For example, if a program needs 20 hours using a single processor core, and a particular portion of 1 hour cannot be parallelized, while the remaining 19 hours (95%) can be parallelized, then regardless of how many processors we devote to a parallelized execution of this program, the minimum execution time cannot be less than that critical 1 hour. Hence the theoretical speedup is limited to at most 20×.
… Amdahl's law states that the overall speedup of applying the improvement will be:

$$ \frac{1}{(1 - P) + \frac{P}{S}} $$
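As a check of the quoted 20-hour example, substitute P = 0.95 and let S grow without bound:

$$ \lim_{S \to \infty} \frac{1}{(1 - 0.95) + \frac{0.95}{S}} = \frac{1}{0.05} = 20. $$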
- (Shavit, 2011) ⇒ Nir Shavit. (2011). “Data Structures in the Multicore Age.” In: Communications of the ACM, 54(3). doi:10.1145/1897852.1897873
- QUOTE: … The formula we need in order to answer this question is called Amdahl's Law. It captures the idea that the extent to which we can speed up any complex computation is limited by how much of the computation must be executed sequentially. … Define the speedup S of a computation to be the ratio between the time it takes one processor to complete the computation (as measured by a wall clock) versus the time it takes n concurrent processors to complete the same computation. Amdahl's Law characterizes the maximum speedup S that can be achieved by n processors collaborating on an application, where p is the fraction of the computation that can be executed in parallel.
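Substituting S = n into the formula above (our restatement, consistent with the quote's definitions of n and p) gives the n-processor bound, with a numeric instance:

$$ S(n) = \frac{1}{(1 - p) + \frac{p}{n}}, \qquad p = 0.9,\; n = 10 \;\Rightarrow\; S = \frac{1}{0.1 + 0.09} \approx 5.26. $$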