Hadoop Map/Reduce Framework
A Hadoop Map/Reduce Framework is a map/reduce framework within an Apache Hadoop software framework.
- Context:
- It can support a Hadoop Map/Reduce Job (see the sketch after this outline).
- It can (typically) interact with Hadoop HDFS.
- …
- Example(s):
- Counter-Example(s):
- an Apache Flink framework, which is an alternative to Hadoop's MapReduce component (see the 2015 reference below).
- See: Hadoop MRv2, MapReduce Job.
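To make the Context items concrete, below is a minimal sketch of a Hadoop Map/Reduce Job written against the standard org.apache.hadoop.mapreduce (MRv2-style) API: the classic word-count pattern. The class names are illustrative, and the input/output paths (typically HDFS URIs) are assumed to arrive as command-line arguments.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in an input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the per-word counts produced by the mappers.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // map-side pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Input/output paths typically live on HDFS, e.g. hdfs://... URIs.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Note the combiner reuses the reducer class to pre-aggregate counts on the map side, which reduces the volume of data shuffled between the map and reduce phases.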
References
2015
- http://ci.apache.org/projects/flink/flink-docs-release-0.8.1/faq.html#is-flink-a-hadoop-project
- Flink is a data processing system and an alternative to Hadoop’s MapReduce component. It comes with its own runtime, rather than building on top of MapReduce. As such, it can work completely independently of the Hadoop ecosystem. However, Flink can also access Hadoop’s distributed file system (HDFS) to read and write data, and Hadoop’s next-generation resource manager (YARN) to provision cluster resources. Since most Flink users use Hadoop HDFS to store their data, we already ship the required libraries to access HDFS.
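As a hedged illustration of the quoted claim that Flink can read and write HDFS data without going through MapReduce, here is a minimal sketch using Flink's batch Java API; the namenode host, port, and paths are hypothetical placeholders.

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class FlinkHdfsSketch {
  public static void main(String[] args) throws Exception {
    // Flink's own runtime executes this plan; no MapReduce job is involved.
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // Read directly from HDFS (hypothetical namenode address and path).
    DataSet<String> lines = env.readTextFile("hdfs://namenode:9000/path/to/input");

    // Write the data back to HDFS and run the plan on Flink's runtime.
    lines.writeAsText("hdfs://namenode:9000/path/to/output");
    env.execute("HDFS read/write sketch");
  }
}
```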