HDFS-6781. Separate HDFS commands from CommandsManual.apt.vm. (Contributed by Akira Ajisaka)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/branches/branch-2@1616576 13f79535-47bb-0310-9956-ffa450edef68
Arpit Agarwal 2014-08-07 19:46:14 +00:00
parent c6c4a74bef
commit 3f54884b46
4 changed files with 34 additions and 192 deletions


@@ -114,55 +114,18 @@ User Commands
* <<<fs>>>
Usage: <<<hadoop fs [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
Deprecated, use <<<hdfs dfs>>> instead.
Runs a generic filesystem user client.
The various COMMAND_OPTIONS can be found in the File System Shell Guide.
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfs}<<<hdfs dfs>>>}}
instead.
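For instance, a basic listing with the replacement command looks like
<<<hdfs dfs -ls />>> (an illustrative invocation; the full option list is in
the File System Shell Guide).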
* <<<fsck>>>
Runs an HDFS filesystem checking utility.
See {{{../hadoop-hdfs/HdfsUserGuide.html#fsck}fsck}} for more info.
Usage: <<<hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]]>>>
*------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------+---------------------------------------------+
| <path> | Start checking from this path.
*------------------+---------------------------------------------+
| -move | Move corrupted files to /lost+found
*------------------+---------------------------------------------+
| -delete | Delete corrupted files.
*------------------+---------------------------------------------+
| -openforwrite | Print out files opened for write.
*------------------+---------------------------------------------+
| -files | Print out files being checked.
*------------------+---------------------------------------------+
| -blocks | Print out block report.
*------------------+---------------------------------------------+
| -locations | Print out locations for every block.
*------------------+---------------------------------------------+
| -racks | Print out network topology for data-node locations.
*------------------+---------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fsck}<<<hdfs fsck>>>}}
instead.
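An illustrative invocation combining the options above (the path is made up):
<<<hdfs fsck /user/hadoop -files -blocks -locations>>> checks everything under
/user/hadoop and prints the files, blocks and block locations it examines.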
* <<<fetchdt>>>
Gets a Delegation Token from a NameNode.
See {{{../hadoop-hdfs/HdfsUserGuide.html#fetchdt}fetchdt}} for more info.
Usage: <<<hadoop fetchdt [GENERIC_OPTIONS] [--webservice <namenode_http_addr>] <path> >>>
*------------------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------------------+---------------------------------------------+
| <fileName> | File name to store the token into.
*------------------------------+---------------------------------------------+
| --webservice <https_address> | Use HTTP protocol instead of RPC.
*------------------------------+---------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fetchdt}
<<<hdfs fetchdt>>>}} instead.
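A hedged example (the NameNode address and output path are made up):
<<<hdfs fetchdt --webservice http://nn.example.com:50070 /tmp/nn.token>>>
fetches a delegation token via the NameNode's web interface and stores it in
/tmp/nn.token.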
* <<<jar>>>
@@ -319,23 +282,8 @@ Administration Commands
* <<<balancer>>>
Runs a cluster balancing utility. An administrator can simply press Ctrl-C
to stop the rebalancing process. See
{{{../hadoop-hdfs/HdfsUserGuide.html#Balancer}Balancer}} for more details.
Usage: <<<hadoop balancer [-threshold <threshold>] [-policy <policy>]>>>
*------------------------+-----------------------------------------------------------+
|| COMMAND_OPTION | Description
*------------------------+-----------------------------------------------------------+
| -threshold <threshold> | Percentage of disk capacity. This overrides the
| default threshold.
*------------------------+-----------------------------------------------------------+
| -policy <policy> | <<<datanode>>> (default): Cluster is balanced if each datanode is balanced. \
| <<<blockpool>>>: Cluster is balanced if each block pool in each datanode is balanced.
*------------------------+-----------------------------------------------------------+
Note that the <<<blockpool>>> policy is more strict than the <<<datanode>>> policy.
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#balancer}
<<<hdfs balancer>>>}} instead.
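An illustrative invocation (the threshold value is arbitrary):
<<<hdfs balancer -threshold 5 -policy blockpool>>> runs until each block pool
on each datanode is within about 5 percentage points of the average
utilization.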
* <<<daemonlog>>>
@@ -358,84 +306,13 @@ Administration Commands
* <<<datanode>>>
Runs an HDFS datanode.
Usage: <<<hadoop datanode [-rollback]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -rollback | Rolls back the datanode to the previous version. This should
| be used after stopping the datanode and distributing the old
| hadoop version.
*-----------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#datanode}
<<<hdfs datanode>>>}} instead.
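As a sketch of the rollback scenario described above, one would stop the
datanode, redeploy the previous Hadoop version, and then start it with
<<<hdfs datanode -rollback>>> (an illustrative invocation).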
* <<<dfsadmin>>>
Runs an HDFS dfsadmin client.
Usage: <<<hadoop dfsadmin [GENERIC_OPTIONS] [-report] [-safemode enter | leave | get | wait] [-refreshNodes] [-finalizeUpgrade] [-upgradeProgress status | details | force] [-metasave filename] [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>] [-restoreFailedStorage true|false|check] [-help [cmd]]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -report | Reports basic filesystem information and statistics.
*-----------------+-----------------------------------------------------------+
| -safemode enter / leave / get / wait | Safe mode maintenance command. Safe
| mode is a Namenode state in which it \
| 1. does not accept changes to the name space (read-only) \
| 2. does not replicate or delete blocks. \
| Safe mode is entered automatically at Namenode startup, and the
| Namenode leaves safe mode automatically when the configured minimum
| percentage of blocks satisfies the minimum replication
| condition. Safe mode can also be entered manually, but then
| it can only be turned off manually as well.
*-----------------+-----------------------------------------------------------+
| -refreshNodes | Re-read the hosts and exclude files to update the set of
| Datanodes that are allowed to connect to the Namenode and
| those that should be decommissioned or recommissioned.
*-----------------+-----------------------------------------------------------+
| -finalizeUpgrade | Finalize upgrade of HDFS. Datanodes delete their previous
| version working directories, followed by Namenode doing the
| same. This completes the upgrade process.
*-----------------+-----------------------------------------------------------+
| -upgradeProgress status / details / force | Request current distributed
| upgrade status, a detailed status or force the upgrade to
| proceed.
*-----------------+-----------------------------------------------------------+
| -metasave filename | Save Namenode's primary data structures to <filename> in
| the directory specified by the hadoop.log.dir property.
| <filename> is overwritten if it exists.
| <filename> will contain one line for each of the following\
| 1. Datanodes heart beating with Namenode\
| 2. Blocks waiting to be replicated\
| 3. Blocks currently being replicated\
| 4. Blocks waiting to be deleted\
*-----------------+-----------------------------------------------------------+
| -setQuota <quota> <dirname>...<dirname> | Set the quota <quota> for each
| directory <dirname>. The directory quota is a long integer
| that puts a hard limit on the number of names in the
| directory tree. Best effort for the directory, with faults
| reported if \
| 1. N is not a positive integer, or \
| 2. user is not an administrator, or \
| 3. the directory does not exist or is a file, or \
| 4. the directory would immediately exceed the new quota. \
*-----------------+-----------------------------------------------------------+
| -clrQuota <dirname>...<dirname> | Clear the quota for each directory
| <dirname>. Best effort for the directory, with faults
| reported if \
| 1. the directory does not exist or is a file, or \
| 2. user is not an administrator. It does not fault if the
| directory has no quota.
*-----------------+-----------------------------------------------------------+
| -restoreFailedStorage true / false / check | This option turns on/off the automatic attempt to restore failed storage replicas.
| If a failed storage becomes available again, the system will attempt to restore
| edits and/or fsimage during checkpoint. The 'check' option returns the current setting.
*-----------------+-----------------------------------------------------------+
| -help [cmd] | Displays help for the given command or all commands if none
| is specified.
*-----------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfsadmin}
<<<hdfs dfsadmin>>>}} instead.
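Illustrative invocations of the replacement command (the directory name is
made up): <<<hdfs dfsadmin -report>>>, <<<hdfs dfsadmin -safemode get>>>, and
<<<hdfs dfsadmin -setQuota 100000 /user/alice>>>.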
* <<<mradmin>>>
@@ -468,51 +345,13 @@ Administration Commands
* <<<namenode>>>
Runs the namenode. More info about the upgrade, rollback and finalize is
at {{{../hadoop-hdfs/HdfsUserGuide.html#Upgrade_and_Rollback}Upgrade Rollback}}.
Usage: <<<hadoop namenode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]>>>
*--------------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*--------------------+-----------------------------------------------------------+
| -format | Formats the namenode. It starts the namenode, formats
| it and then shuts it down.
*--------------------+-----------------------------------------------------------+
| -upgrade | Namenode should be started with upgrade option after
| the distribution of new hadoop version.
*--------------------+-----------------------------------------------------------+
| -rollback | Rolls back the namenode to the previous version. This
| should be used after stopping the cluster and
| distributing the old hadoop version.
*--------------------+-----------------------------------------------------------+
| -finalize | Finalize will remove the previous state of the file
| system. The recent upgrade will become permanent. The rollback
| option will no longer be available. After finalization
| it shuts the namenode down.
*--------------------+-----------------------------------------------------------+
| -importCheckpoint | Loads the image from a checkpoint directory and saves it
| into the current one. The checkpoint dir is read from the
| property fs.checkpoint.dir.
*--------------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#namenode}
<<<hdfs namenode>>>}} instead.
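For instance (an illustrative sequence, not prescribed by this manual), an
upgrade could be started with <<<hdfs namenode -upgrade>>> after deploying the
new version and later made permanent with <<<hdfs namenode -finalize>>>.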
* <<<secondarynamenode>>>
Runs the HDFS secondary namenode.
See {{{../hadoop-hdfs/HdfsUserGuide.html#Secondary_NameNode}Secondary Namenode}}
for more info.
Usage: <<<hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]>>>
*----------------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*----------------------+-----------------------------------------------------------+
| -checkpoint [-force] | Checkpoints the Secondary namenode if EditLog size
| >= fs.checkpoint.size. If <<<-force>>> is used,
| the checkpoint is performed irrespective of EditLog size.
*----------------------+-----------------------------------------------------------+
| -geteditsize | Prints the EditLog size.
*----------------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#secondarynamenode}
<<<hdfs secondarynamenode>>>}} instead.
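For example, <<<hdfs secondarynamenode -geteditsize>>> prints the current
EditLog size, while <<<hdfs secondarynamenode -checkpoint force>>> forces a
checkpoint regardless of that size (illustrative invocations of the options
listed above).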
* <<<tasktracker>>>


@@ -109,6 +109,9 @@ Release 2.6.0 - UNRELEASED
HDFS-6812. Remove addBlock and replaceBlock from DatanodeDescriptor.
(szetszwo)
HDFS-6781. Separate HDFS commands from CommandsManual.apt.vm. (Akira
Ajisaka via Arpit Agarwal)
OPTIMIZATIONS
HDFS-6690. Deduplicate xattr names in memory. (wang)


@@ -143,8 +143,8 @@ HDFS Users Guide
** DFSAdmin Command
The <<<bin/hadoop dfsadmin>>> command supports a few HDFS administration
related operations. The <<<bin/hadoop dfsadmin -help>>> command lists all the
The <<<bin/hdfs dfsadmin>>> command supports a few HDFS administration
related operations. The <<<bin/hdfs dfsadmin -help>>> command lists all the
commands currently supported. For example:
* <<<-report>>>: reports basic statistics of HDFS. Some of this
@@ -172,7 +172,7 @@ HDFS Users Guide
of racks and datanodes attached to the racks as viewed by the
NameNode.
For command usage, see {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
For command usage, see {{{./HDFSCommands.html#dfsadmin}dfsadmin}}.
* Secondary NameNode
@@ -207,7 +207,7 @@ HDFS Users Guide
primary NameNode if necessary.
For command usage,
see {{{../hadoop-common/CommandsManual.html#secondarynamenode}secondarynamenode}}.
see {{{./HDFSCommands.html#secondarynamenode}secondarynamenode}}.
* Checkpoint Node
@@ -249,7 +249,7 @@ HDFS Users Guide
Multiple checkpoint nodes may be specified in the cluster configuration
file.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Backup Node
@@ -291,7 +291,7 @@ HDFS Users Guide
For a complete discussion of the motivation behind the creation of the
Backup node and Checkpoint node, see {{{https://issues.apache.org/jira/browse/HADOOP-4539}HADOOP-4539}}.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Import Checkpoint
@@ -314,7 +314,7 @@ HDFS Users Guide
verifies that the image in <<<dfs.namenode.checkpoint.dir>>> is consistent,
but does not modify it in any way.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Balancer
@@ -341,7 +341,7 @@ HDFS Users Guide
A brief administrator's guide for balancer is available at
{{{https://issues.apache.org/jira/browse/HADOOP-1652}HADOOP-1652}}.
For command usage, see {{{../hadoop-common/CommandsManual.html#balancer}balancer}}.
For command usage, see {{{./HDFSCommands.html#balancer}balancer}}.
* Rack Awareness
@@ -368,7 +368,7 @@ HDFS Users Guide
allow any modifications to file system or blocks. Normally the NameNode
leaves Safemode automatically after the DataNodes have reported that
most file system blocks are available. If required, HDFS could be
placed in Safemode explicitly using <<<bin/hadoop dfsadmin -safemode>>>
placed in Safemode explicitly using <<<bin/hdfs dfsadmin -safemode>>>
command. NameNode front page shows whether Safemode is on or off. A
more detailed description and configuration is maintained as JavaDoc
for <<<setSafeMode()>>>.
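As an illustrative sketch of the manual procedure, an operator could enter
Safemode with <<<bin/hdfs dfsadmin -safemode enter>>>, confirm the state with
<<<bin/hdfs dfsadmin -safemode get>>>, and leave it again with
<<<bin/hdfs dfsadmin -safemode leave>>>.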
@@ -383,8 +383,8 @@ HDFS Users Guide
most of the recoverable failures. By default fsck ignores open files
but provides an option to select all files during reporting. The HDFS
fsck command is not a Hadoop shell command. It can be run as
<<<bin/hadoop fsck>>>. For command usage, see
{{{../hadoop-common/CommandsManual.html#fsck}fsck}}. fsck can be run on
<<<bin/hdfs fsck>>>. For command usage, see
{{{./HDFSCommands.html#fsck}fsck}}. fsck can be run on
the whole file system or on a subset of files.
* fetchdt
@@ -395,11 +395,11 @@ HDFS Users Guide
Utility uses either RPC or HTTPS (over Kerberos) to get the token, and
thus requires Kerberos tickets to be present before the run (run kinit
to get the tickets). The HDFS fetchdt command is not a Hadoop shell
command. It can be run as <<<bin/hadoop fetchdt DTfile>>>. After you get
command. It can be run as <<<bin/hdfs fetchdt DTfile>>>. After you get
the token you can run an HDFS command without having Kerberos tickets,
by pointing the <<<HADOOP_TOKEN_FILE_LOCATION>>> environment variable to the
delegation token file. For command usage, see
{{{../hadoop-common/CommandsManual.html#fetchdt}fetchdt}} command.
{{{./HDFSCommands.html#fetchdt}fetchdt}} command.
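A hedged sketch of that flow (the file name is arbitrary): fetch the token
with <<<bin/hdfs fetchdt /tmp/dt.token>>>, set
<<<HADOOP_TOKEN_FILE_LOCATION=/tmp/dt.token>>> in the environment, and then run
the desired HDFS command without a Kerberos ticket.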
* Recovery Mode
@@ -533,5 +533,4 @@ HDFS Users Guide
* Explore {{{./hdfs-default.xml}hdfs-default.xml}}. It includes
a brief description of most of the configuration variables available.
* {{{../hadoop-common/CommandsManual.html}Hadoop Commands Guide}}:
Hadoop commands usage.
* {{{./HDFSCommands.html}HDFS Commands Guide}}: HDFS commands usage.


@@ -68,6 +68,7 @@
<menu name="HDFS" inherit="top">
<item name="HDFS User Guide" href="hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html"/>
<item name="HDFS Commands Reference" href="hadoop-project-dist/hadoop-hdfs/HDFSCommands.html"/>
<item name="High Availability With QJM" href="hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html"/>
<item name="High Availability With NFS" href="hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html"/>
<item name="Federation" href="hadoop-project-dist/hadoop-hdfs/Federation.html"/>