HADOOP-4687. Fix some of the remaining javadoc warnings.

git-svn-id: https://svn.apache.org/repos/asf/hadoop/core/branches/HADOOP-4687/core@780833 13f79535-47bb-0310-9956-ffa450edef68
Author: Owen O'Malley
Date:   2009-06-01 21:10:16 +00:00
commit 5c7b7adacb
parent bb957bd2cd

4 changed files with 9 additions and 21 deletions


@@ -34,8 +34,8 @@ import java.net.URI;
  * framework to cache files (text, archives, jars etc.) needed by applications.
  * </p>
  *
- * <p>Applications specify the files, via urls (hdfs:// or http://) to be cached
- * via the {@link org.apache.hadoop.mapred.JobConf}.
+ * <p>Applications specify the files, via urls (hdfs:// or http://) to be
+ * cached via the org.apache.hadoop.mapred.JobConf.
  * The <code>DistributedCache</code> assumes that the
  * files specified via hdfs:// urls are already present on the
  * {@link FileSystem} at the path specified by the url.</p>
@@ -82,8 +82,8 @@ import java.net.URI;
  * DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz", job);
  * DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz", job);
  *
- * 3. Use the cached files in the {@link org.apache.hadoop.mapred.Mapper}
- * or {@link org.apache.hadoop.mapred.Reducer}:
+ * 3. Use the cached files in the org.apache.hadoop.mapred.Mapper
+ * or org.apache.hadoop.mapred.Reducer:
  *
  * public static class MapClass extends MapReduceBase
  * implements Mapper&lt;K, V, K, V&gt; {
@@ -109,8 +109,6 @@ import java.net.URI;
  *
  * </pre></blockquote></p>
  *
- * @see org.apache.hadoop.mapred.JobConf
- * @see org.apache.hadoop.mapred.JobClient
  */
 public class DistributedCache {
 // cacheID to cacheStatus mapping


@@ -221,9 +221,6 @@ public class SequenceFile {
  * Get the compression type for the reduce outputs
  * @param job the job config to look in
  * @return the kind of compression to use
- * @deprecated Use
- * {@link org.apache.hadoop.mapred.SequenceFileOutputFormat#getOutputCompressionType(org.apache.hadoop.mapred.JobConf)}
- * to get {@link CompressionType} for job-outputs.
  */
 @Deprecated
 static public CompressionType getCompressionType(Configuration job) {
@@ -236,11 +233,6 @@ public class SequenceFile {
  * Set the compression type for sequence files.
  * @param job the configuration to modify
  * @param val the new compression type (none, block, record)
- * @deprecated Use the one of the many SequenceFile.createWriter methods to specify
- * the {@link CompressionType} while creating the {@link SequenceFile} or
- * {@link org.apache.hadoop.mapred.SequenceFileOutputFormat#setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType)}
- * to specify the {@link CompressionType} for job-outputs.
- * or
  */
 @Deprecated
 static public void setCompressionType(Configuration job,


@@ -58,10 +58,8 @@ abstract public class Shell {
 /**
  * Get the Unix command for setting the maximum virtual memory available
  * to a given child process. This is only relevant when we are forking a
- * process from within the {@link org.apache.hadoop.mapred.Mapper} or the
- * {@link org.apache.hadoop.mapred.Reducer} implementations
- * e.g. <a href="{@docRoot}/org/apache/hadoop/mapred/pipes/package-summary.html">Hadoop Pipes</a>
- * or <a href="{@docRoot}/org/apache/hadoop/streaming/package-summary.html">Hadoop Streaming</a>.
+ * process from within the Mapper or the Reducer implementations.
+ * see also Hadoop Pipes and Streaming.
  *
  * It also checks to ensure that we are running on a *nix platform else
  * (e.g. in Cygwin/Windows) it returns <code>null</code>.


@@ -24,9 +24,9 @@
 Hadoop is a distributed computing platform.
 <p>Hadoop primarily consists of the <a
-href="org/apache/hadoop/hdfs/package-summary.html">Hadoop Distributed FileSystem
+href="http://hadoop.apache.org/hdfs/">Hadoop Distributed FileSystem
 (HDFS)</a> and an
-implementation of the <a href="org/apache/hadoop/mapred/package-summary.html">
+implementation of the <a href="http://hadoop.apache.org/mapreduce/">
 Map-Reduce</a> programming paradigm.</p>
@@ -153,7 +153,7 @@ specified with the configuration property <tt><a
 href="../core-default.html#fs.default.name">fs.default.name</a></tt>.
 </li>
-<li>The {@link org.apache.hadoop.mapred.JobTracker} (MapReduce master)
+<li>The org.apache.hadoop.mapred.JobTracker (MapReduce master)
 host and port. This is specified with the configuration property
 <tt><a
 href="../mapred-default.html#mapred.job.tracker">mapred.job.tracker</a></tt>.
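Every hunk in this commit applies the same mechanical fix: a `{@link}` tag (or a `{@docRoot}` href) that points at a class outside the javadoc run's sourcepath makes the javadoc tool emit a "reference not found" warning, so the cross-module reference is rewritten as plain text. As a rough illustration only — the class and helper below are hypothetical, not code from this commit — the rewrite amounts to stripping the tag wrapper while keeping the reference text:

```java
public class CrossModuleLinkSketch {
    /**
     * Strips a javadoc inline link tag down to plain text, e.g.
     * "{@code {@link Foo}}" becomes "Foo". (Hypothetical helper that
     * mimics the manual edits in this commit.)
     */
    static String stripLink(String line) {
        // Drop the "{@link ...}" wrapper, keep the referenced name.
        return line.replaceAll("\\{@link\\s+([^}]*)\\}", "$1");
    }

    public static void main(String[] args) {
        // Mirrors the first hunk: the tagged reference becomes plain text.
        System.out.println(
            stripLink(" * via the {@link org.apache.hadoop.mapred.JobConf}."));
    }
}
```

The commit does the equivalent by hand rather than dropping the references entirely, so readers of the generated javadoc still see the fully qualified class names even though they are no longer hyperlinked.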