2004 MapReduceSimplifiedDataProc
- (Dean & Ghemawat, 2004b) ⇒ Jeffrey Dean, and Sanjay Ghemawat. (2004). “MapReduce: Simplified Data Processing on Large Clusters.” In: Proceedings of the 6th Conference on Symposium on Operating Systems Design & Implementation (OSDI 2004).
Subject Heading(s):
Notes
- http://labs.google.com/papers/mapreduce.html
- Presentation Slides: http://labs.google.com/papers/mapreduce-osdi04-slides/index.html
- (Dean & Ghemawat, 2004a) ⇒ Jeffrey Dean, and Sanjay Ghemawat. (2004). “System and method for efficient large-scale data processing.” US Patent 7,650,331.
Cited By
Quotes
Abstract
- MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.
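To illustrate the programming model the abstract describes, below is a minimal single-machine sketch in Python of the paper's canonical word-count task: the user supplies a map function and a reduce function, and a toy in-process runner stands in for the distributed runtime by grouping intermediate key/value pairs and invoking reduce per key. The function names and the runner are illustrative assumptions for exposition, not the paper's actual C++ interface.

```python
# Minimal sketch of the MapReduce programming model (word count).
# The map and reduce functions are user code; run_mapreduce is a toy
# stand-in for the distributed runtime (map, shuffle/group, reduce).
from collections import defaultdict
from typing import Callable, Dict, Iterable, Iterator, List, Tuple


def word_count_map(doc_name: str, contents: str) -> Iterator[Tuple[str, int]]:
    """Map: emit an intermediate (word, 1) pair for every word in the input."""
    for word in contents.split():
        yield (word, 1)


def word_count_reduce(word: str, counts: Iterable[int]) -> Tuple[str, int]:
    """Reduce: sum all counts emitted for the same word."""
    return (word, sum(counts))


def run_mapreduce(
    inputs: List[Tuple[str, str]],
    map_fn: Callable[[str, str], Iterator[Tuple[str, int]]],
    reduce_fn: Callable[[str, Iterable[int]], Tuple[str, int]],
) -> Dict[str, int]:
    """Toy single-machine runner: apply map to each input record,
    group intermediate values by key, then apply reduce per key."""
    intermediate: Dict[str, List[int]] = defaultdict(list)
    for name, contents in inputs:
        for key, value in map_fn(name, contents):
            intermediate[key].append(value)
    return dict(reduce_fn(key, values) for key, values in intermediate.items())


if __name__ == "__main__":
    docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog the fox")]
    print(run_mapreduce(docs, word_count_map, word_count_reduce))
    # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```

In the actual system the runtime additionally partitions the input across many machines, re-executes failed tasks, and writes sorted per-key output to distributed storage; none of that is modeled in this sketch.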