Instructions per Second Measure
An Instructions per Second Measure is a computing performance measure that expresses a processor's speed as the number of instructions it executes per second.
- AKA: IPS.
- Example(s): kIPS (thousand instructions per second), MIPS (million instructions per second), GIPS (giga instructions per second), MOPS (million operations per second).
- Counter-Example(s): a FLOPS Measure (counts floating-point operations rather than instructions), a SPECint benchmark score.
- See: Computer Performance, Memory Hierarchy, Benchmark (Computing), SPECint.
References
2015
- (Wikipedia, 2015) ⇒ http://en.wikipedia.org/wiki/Instructions_per_second Retrieved:2015-5-12.
- Instructions per second (IPS) is a measure of a computer's processor speed. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches, whereas realistic workloads typically lead to significantly lower IPS values. The performance of the memory hierarchy also greatly affects processor performance, an issue barely considered in MIPS calculations. Because of these problems, synthetic benchmarks such as SPECint are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.
The term is commonly used in association with a numeric value such as thousand instructions per second (kIPS), million instructions per second (MIPS), giga instructions per second (GIPS), or million operations per second (MOPS).
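As a minimal illustration of the scaling these units imply (not part of the cited article; the function names and sample figures are assumptions for the example), the sketch below converts a hypothetical instruction count and elapsed time into raw IPS and MIPS.

```python
def ips(instruction_count: int, seconds: float) -> float:
    """Raw instructions per second: count divided by elapsed time."""
    return instruction_count / seconds

def mips(instruction_count: int, seconds: float) -> float:
    """Million instructions per second (MIPS): raw IPS scaled by 10^6."""
    return ips(instruction_count, seconds) / 1e6

# Hypothetical run: 8.4 billion instructions retired in 2 seconds
# gives 4200 MIPS, i.e. 4.2 GIPS.
print(mips(8_400_000_000, 2.0))  # 4200.0
```

As the quoted passage notes, such figures are most meaningful for peak rates on simple instruction streams; realistic workloads, memory-hierarchy effects, and benchmark suites such as SPECint give a better picture of delivered performance.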