Backport of HADOOP-3886 from trunk. svn merge -c 1360222 ../../trunk (harsh)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1360224 13f79535-47bb-0310-9956-ffa450edef68
commit 91472c5539
parent 267a83df9f
Author: Harsh J
Date:   2012-07-11 15:07:34 +00:00
4 changed files with 6 additions and 3 deletions

CHANGES.txt

@@ -143,6 +143,9 @@ Release 2.0.1-alpha - UNRELEASED
     HADOOP-8586. Fixup a bunch of SPNEGO misspellings. (eli)
 
+    HADOOP-3886. Error in javadoc of Reporter, Mapper and Progressable
+    (Jingguo Yao via harsh)
+
     BREAKDOWN OF HDFS-3042 SUBTASKS
 
     HADOOP-8220. ZKFailoverController doesn't handle failure to become active

Progressable.java

@@ -26,7 +26,7 @@
  *
  * <p>Clients and/or applications can use the provided <code>Progressable</code>
  * to explicitly report progress to the Hadoop framework. This is especially
- * important for operations which take an insignificant amount of time since,
+ * important for operations which take significant amount of time since,
  * in-lieu of the reported progress, the framework has to assume that an error
  * has occured and time-out the operation.</p>
  */

Mapper.java

@@ -144,7 +144,7 @@ public interface Mapper<K1, V1, K2, V2> extends JobConfigurable, Closeable {
  *
  * <p>Applications can use the {@link Reporter} provided to report progress
  * or just indicate that they are alive. In scenarios where the application
- * takes an insignificant amount of time to process individual key/value
+ * takes significant amount of time to process individual key/value
  * pairs, this is crucial since the framework might assume that the task has
  * timed-out and kill that task. The other way of avoiding this is to set
  * <a href="{@docRoot}/../mapred-default.html#mapreduce.task.timeout">

Reporter.java

@@ -29,7 +29,7 @@
  *
  * <p>{@link Mapper} and {@link Reducer} can use the <code>Reporter</code>
  * provided to report progress or just indicate that they are alive. In
- * scenarios where the application takes an insignificant amount of time to
+ * scenarios where the application takes significant amount of time to
  * process individual key/value pairs, this is crucial since the framework
  * might assume that the task has timed-out and kill that task.
  *
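The javadoc being corrected in this commit describes Hadoop's liveness contract: a task that neither emits output nor reports progress is presumed hung and killed once the task timeout elapses. A minimal, self-contained sketch of that reporting pattern follows; the one-method `Progressable` interface mirrors Hadoop's `org.apache.hadoop.util.Progressable`, but `ProgressSketch` and `processRecords` are hypothetical names used only for illustration:

```java
// Sketch of the progress-reporting pattern the corrected javadoc describes.
// The Progressable interface mirrors org.apache.hadoop.util.Progressable
// (a single void progress() method); the rest is illustrative only.
public class ProgressSketch {

    interface Progressable {
        void progress(); // explicit liveness signal to the framework
    }

    // Simulates a long-running per-record loop that pings the framework on
    // every iteration so it is not assumed dead and timed out, in lieu of
    // producing any output.
    static int processRecords(int numRecords, Progressable reporter) {
        int processed = 0;
        for (int i = 0; i < numRecords; i++) {
            // ... expensive per-record work would happen here ...
            processed++;
            reporter.progress();
        }
        return processed;
    }

    public static void main(String[] args) {
        final int[] pings = {0};
        int done = processRecords(5, () -> pings[0]++);
        System.out.println(done + " records, " + pings[0] + " progress calls");
        // prints "5 records, 5 progress calls"
    }
}
```

In the real `org.apache.hadoop.mapred` API this call is typically `reporter.progress()` inside `Mapper.map()` or `Reducer.reduce()`; the alternative the Mapper javadoc mentions is raising `mapreduce.task.timeout` instead.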