MAPREDUCE-6865. Fix typo in javadoc for DistributedCache. Contributed by Attila Sasvari.

(cherry picked from commit 34a931c0fa)
Naganarasimha 2017-03-20 00:04:23 +05:30
parent 5d7197b2e6
commit 0cfa6ec6c8
2 changed files with 97 additions and 97 deletions


@@ -43,14 +43,14 @@ import org.apache.hadoop.mapreduce.MRJobConfig;
  * already present on the {@link FileSystem} at the path specified by the url
  * and are accessible by every machine in the cluster.</p>
  *
- * <p>The framework will copy the necessary files on to the slave node before
+ * <p>The framework will copy the necessary files on to the worker node before
  * any tasks for the job are executed on that node. Its efficiency stems from
  * the fact that the files are only copied once per job and the ability to
- * cache archives which are un-archived on the slaves.</p>
+ * cache archives which are un-archived on the workers.</p>
  *
  * <p><code>DistributedCache</code> can be used to distribute simple, read-only
  * data/text files and/or more complex types such as archives, jars etc.
- * Archives (zip, tar and tgz/tar.gz files) are un-archived at the slave nodes.
+ * Archives (zip, tar and tgz/tar.gz files) are un-archived at the worker nodes.
  * Jars may be optionally added to the classpath of the tasks, a rudimentary
  * software distribution mechanism. Files have execution permissions.
  * In older version of Hadoop Map/Reduce users could optionally ask for symlinks
@@ -83,11 +83,11 @@ import org.apache.hadoop.mapreduce.MRJobConfig;
  *     JobConf job = new JobConf();
  *     DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"),
  *                                   job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/map.zip", job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
  *     DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar", job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz", job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz", job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);
  *
  *     3. Use the cached files in the {@link org.apache.hadoop.mapred.Mapper}
  *     or {@link org.apache.hadoop.mapred.Reducer}:
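For reference, the corrected javadoc example from this hunk compiles as a standalone driver once the missing URI parentheses are restored. A minimal sketch, assuming the /myapp/* paths already exist on the cluster's FileSystem; the CacheSetup class name is illustrative, not part of the patch:

import java.net.URI;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;

public class CacheSetup {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf();
    // The "#lookup.dat" fragment creates a symlink named lookup.dat
    // in each task's working directory.
    DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"), job);
    // Closing parenthesis on the URI constructor -- the typo this commit fixes.
    DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
    DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
    DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
    DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
    DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);
  }
}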


@@ -45,14 +45,14 @@ import java.net.URI;
  * already present on the {@link FileSystem} at the path specified by the url
  * and are accessible by every machine in the cluster.</p>
  *
- * <p>The framework will copy the necessary files on to the slave node before
+ * <p>The framework will copy the necessary files on to the worker node before
  * any tasks for the job are executed on that node. Its efficiency stems from
  * the fact that the files are only copied once per job and the ability to
- * cache archives which are un-archived on the slaves.</p>
+ * cache archives which are un-archived on the workers.</p>
  *
  * <p><code>DistributedCache</code> can be used to distribute simple, read-only
  * data/text files and/or more complex types such as archives, jars etc.
- * Archives (zip, tar and tgz/tar.gz files) are un-archived at the slave nodes.
+ * Archives (zip, tar and tgz/tar.gz files) are un-archived at the worker nodes.
  * Jars may be optionally added to the classpath of the tasks, a rudimentary
  * software distribution mechanism. Files have execution permissions.
  * In older version of Hadoop Map/Reduce users could optionally ask for symlinks
@@ -85,11 +85,11 @@ import java.net.URI;
  *     JobConf job = new JobConf();
  *     DistributedCache.addCacheFile(new URI("/myapp/lookup.dat#lookup.dat"),
  *                                   job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/map.zip", job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/map.zip"), job);
  *     DistributedCache.addFileToClassPath(new Path("/myapp/mylib.jar"), job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar", job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz", job);
- *     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz", job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytar.tar"), job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytgz.tgz"), job);
+ *     DistributedCache.addCacheArchive(new URI("/myapp/mytargz.tar.gz"), job);
  *
  *     3. Use the cached files in the {@link org.apache.hadoop.mapred.Mapper}
  *     or {@link org.apache.hadoop.mapred.Reducer}:
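Step 3 of the javadoc (consuming the cached files inside a task) falls outside both hunks. A hedged sketch of what it looks like with the old org.apache.hadoop.mapred API; MyMapper and the key/value types are illustrative assumptions, while DistributedCache.getLocalCacheFiles is the real (deprecated) accessor:

import java.io.IOException;

import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class MyMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private Path[] localFiles;

  @Override
  public void configure(JobConf job) {
    try {
      // Local paths of the files the framework already copied to this node.
      localFiles = DistributedCache.getLocalCacheFiles(job);
    } catch (IOException e) {
      throw new RuntimeException("Failed to resolve cache files", e);
    }
  }

  @Override
  public void map(LongWritable key, Text value,
      OutputCollector<Text, Text> output, Reporter reporter)
      throws IOException {
    // Read side data from localFiles[i] here, e.g. the lookup.dat symlink.
  }
}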