Cache Data Structure
A Cache Data Structure is a data structure that temporarily holds copies of main memory contents (or other computed values) that are predicted to be accessed again in the near future, so that they can be retrieved quickly.
- AKA: Cache Memory.
- Context:
- It must support a Fetch Operation (see the sketch below, just before the References).
- It can support a Prefetch Operation.
- It can range from being a Positive-only Cache (the typical case, which stores only successful lookups) to being a Negative-also Cache (which also stores failed lookups).
- It can range from being a Single Machine Cache to being a Distributed Cache (with cache coherence behavior).
- It can be managed by a Cache Algorithm.
- It can (typically) be used to speed up future Data Requests (and the execution rate of a software system).
- It can be based on a Cache Memory Abstract Data Type.
- It can be managed by a Caching System, such as memcached.
- …
- Example(s):
- a CPU Cache Memory (such as those surveyed in Smith, 1982).
- a memcached-managed Distributed Cache.
- …
- Counter-Example(s):
- a Queuing Data Structure (based on a Queuing ADT).
- an Indexing Data Structure.
- a Predictive Model Data Structure.
- See: Hash Function.
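The Context items above can be made concrete with a minimal sketch. The following Python code is illustrative only: the `loader` callback (standing in for main memory or another slow source), the capacity bound, and the LRU replacement policy are assumptions made for the example, not part of the definition.

```python
from collections import OrderedDict

class LRUCache:
    """A minimal cache data structure: temporarily holds fetched values and
    evicts the least-recently-used entry once capacity is exceeded."""

    def __init__(self, capacity, loader, cache_negatives=False):
        self.capacity = capacity
        self.loader = loader                    # slow backing source (assumed callback)
        self.cache_negatives = cache_negatives  # True => behave as a Negative-also Cache
        self._store = OrderedDict()

    def fetch(self, key):
        """Fetch Operation: serve a hit from the cache; on a miss, fall back
        to the slower loader and remember the result for future requests."""
        if key in self._store:
            self._store.move_to_end(key)        # cache hit: mark as most recently used
            return self._store[key]
        try:
            value = self.loader(key)            # cache miss: take the slow path
        except KeyError:
            if not self.cache_negatives:
                raise
            value = None                        # Negative-also Cache: remember the failed lookup too
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)     # evict the least-recently-used entry
        return value

    def prefetch(self, key):
        """Prefetch Operation: load a key before it is actually requested."""
        if key not in self._store:
            self.fetch(key)
```

For example, `cache = LRUCache(100, slow_lookup)` serves the second and later `cache.fetch(k)` calls for a hot key without touching `slow_lookup`; passing `cache_negatives=True` additionally remembers keys that `slow_lookup` rejected.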
References
2013
- http://en.wikipedia.org/wiki/Cache_%28computing%29
- In computer science, a cache (/ˈkæʃ/)[1] is a component that transparently stores data so that future requests for that data can be served faster. The data that is stored within a cache might be values that have been computed earlier or duplicates of original values that are stored elsewhere. If requested data is contained in the cache (cache hit), this request can be served by simply reading the cache, which is comparatively faster. Otherwise (cache miss), the data has to be recomputed or fetched from its original storage location, which is comparatively slower. Hence, the greater the number of requests that can be served from the cache, the faster the overall system performance becomes.
To be cost efficient and to enable an efficient use of data, caches are relatively small. Nevertheless, caches have proven themselves in many areas of computing because access patterns in typical computer applications have locality of reference. References exhibit temporal locality if data is requested again that has been recently requested already. References exhibit spatial locality if data is requested that is physically stored close to data that has been requested already.
- ↑ "Cache". Merriam-Webster Online Dictionary. Merriam-Webster, Incorporated. http://www.merriam-webster.com/dictionary/cache. Retrieved 2 May 2011.
1995
- (Su & Despain, 1995) ⇒ Ching-Long Su, and Alvin M. Despain. (1995). “Cache design trade-offs for power and performance optimization: a case study.” In: Proceedings of the 1995 international symposium on Low power design. http://dl.acm.org/citation.cfm?id=224093
- (Wulf & McKee, 1995) ⇒ Wm A. Wulf, and Sally A. McKee. (1995). “Hitting the memory wall: implications of the obvious.” In: ACM SIGARCH computer architecture news 23, no. 1. http://dl.acm.org/citation.cfm?id=216588
1982
- (Smith, 1982) ⇒ Alan Jay Smith. (1982). “Cache Memories.” In: ACM Computing Surveys (CSUR) Journal, 14(3). doi:10.1145/356887.356892
- QUOTE: Cache memories are used in modern, medium and high-speed CPUs to hold temporarily those portions of the contents of main memory which are (believed to be) currently in use. Since instructions and data in cache memories can usually be referenced in 10 to 25 percent of the time required to access main memory, cache memories permit the execution rate of the machine to be substantially increased. In order to function effectively, cache memories must be carefully designed and implemented. In this paper, we explain the various aspects of cache memories and discuss in some detail the design features and trade-offs. A large number of original, trace-driven simulation results are presented. Consideration is given to practical implementation questions as well as to more abstract design issues.
Specific aspects of cache memories that are investigated include: the cache fetch algorithm (demand versus prefetch), the placement and replacement algorithms, line size, store-through versus copy-back updating of main memory, cold-start versus warm-start miss ratios, multicache consistency, the effect of input/output through the cache, the behavior of split data/instruction caches, and cache size. Our discussion includes other aspects of memory system architecture, including translation lookaside buffers.
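One of the trade-offs Smith enumerates, store-through versus copy-back updating of main memory, can be sketched schematically. The code below is an interpretation for illustration, not Smith's formulation; the dict-based `memory` backing store and the naive eviction choice are assumptions.

```python
class CopyBackCache:
    """Schematic copy-back (write-back) updating: a store modifies only the
    cached line; main memory is updated when a dirty line is evicted."""

    def __init__(self, capacity, memory):
        self.capacity = capacity
        self.memory = memory    # backing store, modeled as a dict (assumed)
        self.lines = {}         # address -> cached value
        self.dirty = set()      # addresses whose main-memory copy is stale

    def store(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)    # copy-back: defer the main-memory update
        # A store-through cache would instead also write immediately:
        #     self.memory[addr] = value
        if len(self.lines) > self.capacity:
            self._evict()

    def _evict(self):
        victim = next(iter(self.lines))  # placeholder policy; real caches use LRU, random, etc.
        if victim in self.dirty:
            self.memory[victim] = self.lines[victim]  # write back only now
            self.dirty.discard(victim)
        del self.lines[victim]
```

Copy-back trades extra bookkeeping (the dirty bits) for fewer writes to main memory, which is why the choice matters for both performance and power.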