HADOOP-9616. Fix branch-2 javadoc warnings. (Junping Du via llu)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1489951 13f79535-47bb-0310-9956-ffa450edef68
Luke Lu 2013-06-05 16:49:04 +00:00
parent ee792cf084
commit d77a2579f2
6 changed files with 18 additions and 18 deletions

@@ -18,4 +18,4 @@
 OK_RELEASEAUDIT_WARNINGS=0
 OK_FINDBUGS_WARNINGS=0
-OK_JAVADOC_WARNINGS=13
+OK_JAVADOC_WARNINGS=16

@@ -17,7 +17,6 @@
  */
 /**
- * Command-line tools associated with the {@link org.apache.hadoop.mapred}
- * package.
+ * Command-line tools associated with the org.apache.hadoop.mapred package.
  */
 package org.apache.hadoop.mapred.tools;

@@ -27,7 +27,7 @@ import org.apache.hadoop.mapreduce.v2.hs.JobHistory;
 /**
  * {@link JobHistoryParser} that parses {@link JobHistory} files produced by
- * {@link org.apache.hadoop.mapreduce.jobhistory.JobHistory} in the same source
+ * {@link org.apache.hadoop.mapreduce.v2.hs.JobHistory} in the same source
  * code tree as rumen.
  */
 public class CurrentJHParser implements JobHistoryParser {
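
For orientation, a minimal usage sketch of the parser whose javadoc is corrected above. This is not part of the patch; the CurrentJHParser(InputStream) constructor and the nextEvent()/close() loop are assumptions based on rumen's JobHistoryParser contract.

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.hadoop.mapreduce.jobhistory.HistoryEvent;
import org.apache.hadoop.tools.rumen.CurrentJHParser;
import org.apache.hadoop.tools.rumen.JobHistoryParser;

public class CurrentJHParserSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical sketch: stream a current-format job history file and print event types.
    InputStream in = new FileInputStream(args[0]); // path to a job history file
    JobHistoryParser parser = new CurrentJHParser(in);
    try {
      HistoryEvent event;
      while ((event = parser.nextEvent()) != null) { // assumed: null signals end of input
        System.out.println(event.getEventType());
      }
    } finally {
      parser.close();
    }
  }
}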

@@ -170,7 +170,7 @@ public class LoggedTaskAttempt implements DeepCompare {
 /**
  *
- * @returns a list of all splits vectors, ordered in enumeral order
+ * @return a list of all splits vectors, ordered in enumeral order
  * within {@link SplitVectorKind} . Do NOT use hard-coded
  * indices within the return for this with a hard-coded
  * index to get individual values; use
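
As a hedged illustration of the advice in this @return text: the returned list follows SplitVectorKind's enum order, so entries should be looked up through the enum rather than with a literal index. The allSplitVectors() accessor name and the nested location of SplitVectorKind are assumptions; only the ordering guarantee comes from the javadoc.

import java.util.List;
import org.apache.hadoop.tools.rumen.LoggedTaskAttempt;
import org.apache.hadoop.tools.rumen.LoggedTaskAttempt.SplitVectorKind;

class SplitVectorSketch {
  // Assumed accessor allSplitVectors(); the documented guarantee is that the
  // outer list is ordered by SplitVectorKind's enum order, so kind.ordinal()
  // replaces a hard-coded index such as get(0) or get(1).
  static List<Integer> vectorFor(LoggedTaskAttempt attempt, SplitVectorKind kind) {
    List<List<Integer>> all = attempt.allSplitVectors();
    return all.get(kind.ordinal());
  }
}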

@@ -43,10 +43,11 @@ import org.apache.log4j.Logger;
  * across versions. {@link MapReduceJobPropertiesParser} is a utility class that
  * parses MapReduce job configuration properties and converts the value into a
  * well defined {@link DataType}. Users can use the
- * {@link MapReduceJobPropertiesParser#parseJobProperty()} API to process job
- * configuration parameters. This API will parse a job property represented as a
- * key-value pair and return the value wrapped inside a {@link DataType}.
- * Callers can then use the returned {@link DataType} for further processing.
+ * {@link MapReduceJobPropertiesParser#parseJobProperty(String, String)} API to
+ * process job configuration parameters. This API will parse a job property
+ * represented as a key-value pair and return the value wrapped inside a
+ * {@link DataType}. Callers can then use the returned {@link DataType} for
+ * further processing.
  *
  * {@link MapReduceJobPropertiesParser} thrives on the key name to decide which
  * {@link DataType} to wrap the value with. Values for keys representing
@@ -61,14 +62,14 @@ import org.apache.log4j.Logger;
  * {@link DefaultDataType}. Currently only '-Xmx' and '-Xms' settings are
  * considered while the rest are ignored.
  *
- * Note that the {@link MapReduceJobPropertiesParser#parseJobProperty()} API
- * maps the keys to a configuration parameter listed in
+ * Note that the {@link MapReduceJobPropertiesParser#parseJobProperty(String,
+ * String)} API maps the keys to a configuration parameter listed in
  * {@link MRJobConfig}. This not only filters non-framework specific keys thus
  * ignoring user-specific and hard-to-parse keys but also provides a consistent
  * view for all possible inputs. So if users invoke the
- * {@link MapReduceJobPropertiesParser#parseJobProperty()} API with either
- * <"mapreduce.job.user.name", "bob"> or <"user.name", "bob">, then the result
- * would be a {@link UserName} {@link DataType} wrapping the user-name "bob".
+ * {@link MapReduceJobPropertiesParser#parseJobProperty(String, String)} API
+ * with either <"mapreduce.job.user.name", "bob"> or <"user.name", "bob">, then
+ * the result would be a {@link UserName} {@link DataType} wrapping the user-name "bob".
  */
 @SuppressWarnings("deprecation")
 public class MapReduceJobPropertiesParser implements JobPropertyParser {
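
A short, hedged sketch of the parseJobProperty(String, String) call the corrected links refer to, reusing the <"mapreduce.job.user.name", "bob"> example from the javadoc above. The package locations and the DataType#getValue() accessor are assumptions; only the method signature and the expected UserName result come from the text.

import org.apache.hadoop.tools.rumen.datatypes.DataType;
import org.apache.hadoop.tools.rumen.datatypes.util.MapReduceJobPropertiesParser;

public class ParseJobPropertySketch {
  public static void main(String[] args) {
    MapReduceJobPropertiesParser parser = new MapReduceJobPropertiesParser();
    // Per the javadoc, either "mapreduce.job.user.name" or "user.name" should
    // come back wrapped in a UserName DataType holding "bob".
    DataType<?> parsed = parser.parseJobProperty("mapreduce.job.user.name", "bob");
    if (parsed != null) {
      System.out.println(parsed.getClass().getSimpleName() + ": " + parsed.getValue());
    }
  }
}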

@@ -181,8 +181,8 @@
  * <li>
  * {@link org.apache.hadoop.tools.rumen.JobBuilder}<br>
  * Summarizes a job history file.
- * {@link org.apache.hadoop.tools.rumen.TraceBuilder} provides
- * {@link org.apache.hadoop.tools.rumen.TraceBuilder#extractJobID(String)}
+ * {@link org.apache.hadoop.tools.rumen.JobHistoryUtils} provides
+ * {@link org.apache.hadoop.tools.rumen.JobHistoryUtils#extractJobID(String)}
  * API for extracting job id from job history or job configuration files
  * which can be used for instantiating {@link org.apache.hadoop.tools.rumen.JobBuilder}.
  * {@link org.apache.hadoop.tools.rumen.JobBuilder} generates a
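
A hedged sketch of the workflow this paragraph describes: pull the job id out of a history (or job configuration) file name with JobHistoryUtils#extractJobID(String), then hand it to a JobBuilder. Only those two calls come from the javadoc; the argument handling is illustrative, and feeding the builder events before building a trace entry is noted but not shown.

import org.apache.hadoop.tools.rumen.JobBuilder;
import org.apache.hadoop.tools.rumen.JobHistoryUtils;

public class JobBuilderSketch {
  public static void main(String[] args) {
    String historyFileName = args[0]; // job history or job configuration file name
    String jobId = JobHistoryUtils.extractJobID(historyFileName); // assumed to return null if no id is found
    if (jobId != null) {
      // The builder would then be fed parsed history events and job
      // configuration properties before producing a trace entry.
      JobBuilder builder = new JobBuilder(jobId);
      System.out.println("Created " + builder + " for " + jobId);
    }
  }
}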