MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document. Contributed by Kai Sasaki.
(cherry picked from commit 7995a6ea4d)
(cherry picked from commit 8607cb6074)
parent 6f30919336
commit 5f34cbff5c
@@ -385,6 +385,9 @@ Release 2.7.3 - UNRELEASED
     MAPREDUCE-5883. "Total megabyte-seconds" in job counters is slightly
     misleading (Nathan Roberts via jlowe)

+    MAPREDUCE-6583. Clarify confusing sentence in MapReduce tutorial document.
+    (Kai Sasaki via aajisaka)
+
 Release 2.7.2 - UNRELEASED

   INCOMPATIBLE CHANGES
@@ -311,7 +311,7 @@ public void reduce(Text key, Iterable<IntWritable> values,
 }
 ```

-The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurence counts for each key (i.e. words in this example).
+The `Reducer` implementation, via the `reduce` method just sums up the values, which are the occurrence counts for each key (i.e. words in this example).

 Thus the output of the job is:

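For reference, the `reduce` method named in the hunk header above is the summing reducer from the tutorial's WordCount walkthrough. A minimal self-contained sketch of it, reconstructed from the tutorial (not part of this diff):

```
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Sums the values emitted for each key; for WordCount the values are
// the occurrence counts of one word, so the sum is its total count.
public class IntSumReducer
    extends Reducer<Text, IntWritable, Text, IntWritable> {
  private final IntWritable result = new IntWritable();

  @Override
  public void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable val : values) {
      sum += val.get();
    }
    result.set(sum);
    context.write(key, result);  // e.g. ("Hello", 2)
  }
}
```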
@@ -348,7 +348,7 @@ Maps are the individual tasks that transform input records into intermediate rec

 The Hadoop MapReduce framework spawns one map task for each `InputSplit` generated by the `InputFormat` for the job.

-Overall, `Mapper` implementations are passed the `Job` for the job via the [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.
+Overall, mapper implementations are passed to the job via [Job.setMapperClass(Class)](../../api/org/apache/hadoop/mapreduce/Job.html) method. The framework then calls [map(WritableComparable, Writable, Context)](../../api/org/apache/hadoop/mapreduce/Mapper.html) for each key/value pair in the `InputSplit` for that task. Applications can then override the `cleanup(Context)` method to perform any required cleanup.

 Output pairs do not need to be of the same types as input pairs. A given input pair may map to zero or many output pairs. Output pairs are collected with calls to context.write(WritableComparable, Writable).

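The sentence fixed above describes the mapper lifecycle: registration via `Job.setMapperClass(Class)`, one `map` call per key/value pair in the split, and an overridable `cleanup(Context)` hook. A minimal sketch using the tutorial's WordCount mapper (reconstructed for illustration, not part of this diff):

```
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TokenizerMapper
    extends Mapper<Object, Text, Text, IntWritable> {
  private final static IntWritable one = new IntWritable(1);
  private final Text word = new Text();

  // Called by the framework once per key/value pair in the task's InputSplit.
  @Override
  public void map(Object key, Text value, Context context)
      throws IOException, InterruptedException {
    StringTokenizer itr = new StringTokenizer(value.toString());
    while (itr.hasMoreTokens()) {
      word.set(itr.nextToken());
      context.write(word, one);  // output types need not match input types
    }
  }

  // Optional hook, called once at the end of the task.
  @Override
  protected void cleanup(Context context)
      throws IOException, InterruptedException {
    // release any per-task resources here
  }
}
```

In the driver, the class is registered with `job.setMapperClass(TokenizerMapper.class)`; the framework then drives the `map` calls itself.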
@@ -848,7 +848,7 @@ In the following sections we discuss how to submit a debug script with a job. Th

 ##### How to distribute the script file:

-The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* thescript file.
+The user needs to use [DistributedCache](#DistributedCache) to *distribute* and *symlink* to the script file.

 ##### How to submit the script:

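The line fixed above concerns shipping a debug script through the distributed cache. A hedged driver-side sketch of how that is typically wired up (the HDFS path and link name here are invented for illustration; the `mapreduce.*.debug.script` properties are the ones this tutorial documents elsewhere):

```
import java.io.IOException;
import java.net.URI;
import java.net.URISyntaxException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DebugScriptDriver {
  public static void main(String[] args)
      throws IOException, URISyntaxException {
    Configuration conf = new Configuration();
    // Point failed tasks at the symlinked script in their working directory.
    conf.set("mapreduce.map.debug.script", "./debug-script.sh");
    conf.set("mapreduce.reduce.debug.script", "./debug-script.sh");

    Job job = Job.getInstance(conf, "job with debug script");
    // Distribute the script; the '#debug-script.sh' fragment creates the
    // symlink that the properties above refer to.
    job.addCacheFile(new URI("hdfs:///user/me/scripts/debug.sh#debug-script.sh"));
    // ... mapper/reducer/input/output setup elided ...
  }
}
```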