Hadoop MapReduce Job
A Hadoop MapReduce Job is a MapReduce Job that executes on a Hadoop System.
See: Google MapReduce Job, Scalding Library, Apache Pig Program.
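The map/shuffle/reduce flow that such a job follows can be sketched with a minimal pure-Python simulation. This is an illustrative model of the programming paradigm, not the Hadoop Java API; all function names here are hypothetical.

```python
from collections import defaultdict

def map_phase(records, mapper):
    # Apply the user-supplied mapper to each input record,
    # collecting the emitted (key, value) pairs.
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle_phase(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    # Apply the user-supplied reducer to each key's grouped values.
    return {key: reducer(key, values) for key, values in groups.items()}

# The classic word-count job expressed in this model.
def word_count_mapper(line):
    return [(word, 1) for word in line.split()]

def word_count_reducer(word, counts):
    return sum(counts)

lines = ["the quick brown fox", "the lazy dog"]
pairs = map_phase(lines, word_count_mapper)
groups = shuffle_phase(pairs)
result = reduce_phase(groups, word_count_reducer)
print(result)  # {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In a real Hadoop MapReduce Job, the map and reduce functions run in parallel across many cluster nodes and the shuffle is performed by the framework over the network; this sketch collapses those stages into sequential calls to show only the data flow.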
References
2012
- http://hadoop.apache.org/common/docs/r0.20.2/mapred_tutorial.html#Overview
  - QUOTE: "Hadoop Map/Reduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner."