Huang's Law
Huang's Law is a technological observation that GPU performance advances at a rate well beyond the traditional scaling predicted by Moore's Law.
- Context:
- It can (typically) indicate Performance Doubling or more on an annual-to-biennial basis, depending on the formulation.
- It can (often) drive AI Acceleration and machine learning advancements.
- It can (often) guide GPU Architecture evolution and development strategy.
- ...
- It can range from being a Simple Performance Metric to being a Complex System Assessment, based on its measurement scope.
- It can range from being a Hardware Advancement to being a Software Optimization, depending on its improvement source.
- ...
- It can incorporate Architectural Innovations for performance gains.
- It can leverage Software Ecosystem improvements.
- It can enable Real-Time Graphics advancement.
- It can accelerate Scientific Computing capabilities.
- It can support AI Training scalability.
- It can influence Hardware Design strategies.
- ...
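The doubling claims above can be made concrete with a small sketch. Assuming an annual doubling cadence for Huang's Law and the classic two-year cadence for Moore's Law (both simplifying assumptions; the stated cadences vary by source), the projected relative performance after a given number of years is simple compounding:

```python
def projected_performance(years: float, doubling_period_years: float) -> float:
    """Relative performance after `years`, given one doubling per period."""
    return 2.0 ** (years / doubling_period_years)

# Huang's Law (assumed here: doubling every year) vs. the
# Moore's Law cadence (assumed here: doubling every two years).
huang = projected_performance(5, 1.0)   # 2^5   = 32x
moore = projected_performance(5, 2.0)   # 2^2.5 ≈ 5.66x
print(f"Huang (5 yrs): {huang:.1f}x, Moore (5 yrs): {moore:.2f}x")
```

Over a five-year window the two assumed cadences diverge by roughly 6x, which is the gap the observation is meant to highlight.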
- Examples:
- Performance Advancements, such as:
- NVIDIA GPU (2016-2021) showing 25x improvement.
- AI Processing Units achieving annual doubling.
- Graphics Engines enabling ray tracing advancement.
- ...
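The 25x-over-five-years figure cited above implies a specific compound annual growth rate, sketched here (the derivation is simple compounding; the 25x and five-year span are taken from the example above):

```python
# Implied compound annual growth from the cited 25x improvement over 5 years.
years = 5
total_gain = 25.0
annual_factor = total_gain ** (1 / years)   # ≈ 1.90x per year
two_year_factor = annual_factor ** 2        # ≈ 3.62x per two years
print(f"Implied annual growth factor: {annual_factor:.2f}x")
print(f"Implied two-year growth factor: {two_year_factor:.2f}x")
```

An annual factor of about 1.90x is consistent with the quoted formulations: performance nearly doubles every year and more than triples every two years, comfortably exceeding the "more than double every two years" phrasing.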
- "Huang's Law is an observation in computer science and engineering that advancements in graphics processing units (GPUs) are growing at a rate much faster than with traditional central processing units (CPUs)."
- "Huang's Law means that the speedups we have seen in 'single chip inference performance' aren’t now going to peter out but will keep on coming."
- "Huang's Law forecasts that the performance of graphics processing units (GPUs), particularly in AI applications, will more than double every two years."
- "Huang's Law is the new Moore's Law, and explains why Nvidia wants Arm."
- "Huang's Law states that the performance of GPUs will more than double every two years."
- ...
- Counter-Example(s):
- Moore's Law, which focuses on transistor density rather than system performance.
- Dennard Scaling, which addresses power efficiency rather than computational capability.
- Amdahl's Law, which describes parallel speedup limits rather than performance scaling.
- Gustafson's Law, which relates to parallel computation scaling rather than GPU advancements.
- See: Performance Scaling, GPU Architecture, Artificial Intelligence, Hardware Innovation, Parallel Computing, Graphics Processing.