HDFS-6781. Separate HDFS commands from CommandsManual.apt.vm. (Contributed by Akira Ajisaka)

git-svn-id: https://svn.apache.org/repos/asf/hadoop/common/trunk@1616575 13f79535-47bb-0310-9956-ffa450edef68
Arpit Agarwal 2014-08-07 19:42:09 +00:00
parent 3bfdb0091d
commit b98400df7b
5 changed files with 442 additions and 194 deletions


@@ -114,57 +114,18 @@ User Commands
* <<<fs>>>
Usage: <<<hadoop fs [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
Deprecated, use <<<hdfs dfs>>> instead.
Runs a generic filesystem user client.
The various COMMAND_OPTIONS can be found at File System Shell Guide.
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfs}<<<hdfs dfs>>>}}
instead.
* <<<fsck>>>
Runs an HDFS filesystem checking utility.
See {{{../hadoop-hdfs/HdfsUserGuide.html#fsck}fsck}} for more info.
Usage: <<<hadoop fsck [GENERIC_OPTIONS] <path> [-move | -delete | -openforwrite] [-files [-blocks [-locations | -racks]]] [-showprogress]>>>
*------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------+---------------------------------------------+
| <path> | Start checking from this path.
*------------------+---------------------------------------------+
| -move | Move corrupted files to /lost+found
*------------------+---------------------------------------------+
| -delete | Delete corrupted files.
*------------------+---------------------------------------------+
| -openforwrite | Print out files opened for write.
*------------------+---------------------------------------------+
| -files | Print out files being checked.
*------------------+---------------------------------------------+
| -blocks | Print out block report.
*------------------+---------------------------------------------+
| -locations | Print out locations for every block.
*------------------+---------------------------------------------+
| -racks | Print out network topology for data-node locations.
*------------------+---------------------------------------------+
| -showprogress | Print out dots for progress in output. Default is OFF (no progress).
*------------------+---------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fsck}<<<hdfs fsck>>>}}
instead.
* <<<fetchdt>>>
Gets Delegation Token from a NameNode.
See {{{../hadoop-hdfs/HdfsUserGuide.html#fetchdt}fetchdt}} for more info.
Usage: <<<hadoop fetchdt [GENERIC_OPTIONS] [--webservice <namenode_http_addr>] <path> >>>
*------------------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------------------+---------------------------------------------+
| <fileName> | File name to store the token into.
*------------------------------+---------------------------------------------+
| --webservice <https_address> | use http protocol instead of RPC
*------------------------------+---------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#fetchdt}
<<<hdfs fetchdt>>>}} instead.
* <<<jar>>>
@@ -321,23 +282,8 @@ Administration Commands
* <<<balancer>>>
Runs a cluster balancing utility. An administrator can simply press Ctrl-C
to stop the rebalancing process. See
{{{../hadoop-hdfs/HdfsUserGuide.html#Balancer}Balancer}} for more details.
Usage: <<<hadoop balancer [-threshold <threshold>] [-policy <policy>]>>>
*------------------------+-----------------------------------------------------------+
|| COMMAND_OPTION | Description
*------------------------+-----------------------------------------------------------+
| -threshold <threshold> | Percentage of disk capacity. This overwrites the
| default threshold.
*------------------------+-----------------------------------------------------------+
| -policy <policy> | <<<datanode>>> (default): Cluster is balanced if each datanode is balanced. \
| <<<blockpool>>>: Cluster is balanced if each block pool in each datanode is balanced.
*------------------------+-----------------------------------------------------------+
Note that the <<<blockpool>>> policy is more strict than the <<<datanode>>> policy.
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#balancer}
<<<hdfs balancer>>>}} instead.
* <<<daemonlog>>>
@@ -360,84 +306,13 @@ Administration Commands
* <<<datanode>>>
Runs an HDFS datanode.
Usage: <<<hadoop datanode [-rollback]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -rollback | Rolls back the datanode to the previous version. This should
| be used after stopping the datanode and distributing the old
| hadoop version.
*-----------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#datanode}
<<<hdfs datanode>>>}} instead.
* <<<dfsadmin>>>
Runs an HDFS dfsadmin client.
Usage: <<<hadoop dfsadmin [GENERIC_OPTIONS] [-report] [-safemode enter | leave | get | wait] [-refreshNodes] [-finalizeUpgrade] [-upgradeProgress status | details | force] [-metasave filename] [-setQuota <quota> <dirname>...<dirname>] [-clrQuota <dirname>...<dirname>] [-restoreFailedStorage true|false|check] [-help [cmd]]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -report | Reports basic filesystem information and statistics.
*-----------------+-----------------------------------------------------------+
| -safemode enter / leave / get / wait | Safe mode maintenance command. Safe
| mode is a Namenode state in which it \
| 1. does not accept changes to the name space (read-only) \
| 2. does not replicate or delete blocks. \
| Safe mode is entered automatically at Namenode startup, and
| leaves safe mode automatically when the configured minimum
| percentage of blocks satisfies the minimum replication
| condition. Safe mode can also be entered manually, but then
| it can only be turned off manually as well.
*-----------------+-----------------------------------------------------------+
| -refreshNodes | Re-read the hosts and exclude files to update the set of
| Datanodes that are allowed to connect to the Namenode and
| those that should be decommissioned or recommissioned.
*-----------------+-----------------------------------------------------------+
| -finalizeUpgrade| Finalize upgrade of HDFS. Datanodes delete their previous
| version working directories, followed by Namenode doing the
| same. This completes the upgrade process.
*-----------------+-----------------------------------------------------------+
| -upgradeProgress status / details / force | Request current distributed
| upgrade status, a detailed status or force the upgrade to
| proceed.
*-----------------+-----------------------------------------------------------+
| -metasave filename | Save Namenode's primary data structures to <filename> in
| the directory specified by hadoop.log.dir property.
| <filename> is overwritten if it exists.
| <filename> will contain one line for each of the following\
| 1. Datanodes heart beating with Namenode\
| 2. Blocks waiting to be replicated\
| 3. Blocks currently being replicated\
| 4. Blocks waiting to be deleted\
*-----------------+-----------------------------------------------------------+
| -setQuota <quota> <dirname>...<dirname> | Set the quota <quota> for each
| directory <dirname>. The directory quota is a long integer
| that puts a hard limit on the number of names in the
| directory tree. Best effort for the directory, with faults
| reported if \
| 1. N is not a positive integer, or \
| 2. user is not an administrator, or \
| 3. the directory does not exist or is a file, or \
| 4. the directory would immediately exceed the new quota. \
*-----------------+-----------------------------------------------------------+
| -clrQuota <dirname>...<dirname> | Clear the quota for each directory
| <dirname>. Best effort for the directory, with fault
| reported if \
| 1. the directory does not exist or is a file, or \
| 2. user is not an administrator. It does not fault if the
| directory has no quota.
*-----------------+-----------------------------------------------------------+
| -restoreFailedStorage true / false / check | This option will turn on/off automatic attempt to restore failed storage replicas.
| If a failed storage becomes available again the system will attempt to restore
| edits and/or fsimage during checkpoint. 'check' option will return current setting.
*-----------------+-----------------------------------------------------------+
| -help [cmd] | Displays help for the given command or all commands if none
| is specified.
*-----------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#dfsadmin}
<<<hdfs dfsadmin>>>}} instead.
* <<<mradmin>>>
@@ -470,51 +345,13 @@ Administration Commands
* <<<namenode>>>
Runs the namenode. More info about the upgrade, rollback and finalize is
at {{{../hadoop-hdfs/HdfsUserGuide.html#Upgrade_and_Rollback}Upgrade Rollback}}.
Usage: <<<hadoop namenode [-format] | [-upgrade] | [-rollback] | [-finalize] | [-importCheckpoint]>>>
*--------------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*--------------------+-----------------------------------------------------------+
| -format | Formats the namenode. It starts the namenode, formats
| it and then shuts it down.
*--------------------+-----------------------------------------------------------+
| -upgrade | Namenode should be started with upgrade option after
| the distribution of new hadoop version.
*--------------------+-----------------------------------------------------------+
| -rollback | Rolls back the namenode to the previous version. This
| should be used after stopping the cluster and
| distributing the old hadoop version.
*--------------------+-----------------------------------------------------------+
| -finalize | Finalize will remove the previous state of the file
| system. Recent upgrade will become permanent. Rollback
| option will not be available anymore. After finalization
| it shuts the namenode down.
*--------------------+-----------------------------------------------------------+
| -importCheckpoint | Loads image from a checkpoint directory and saves it
| into the current one. Checkpoint dir is read from
| property fs.checkpoint.dir
*--------------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#namenode}
<<<hdfs namenode>>>}} instead.
* <<<secondarynamenode>>>
Runs the HDFS secondary namenode.
See {{{../hadoop-hdfs/HdfsUserGuide.html#Secondary_NameNode}Secondary Namenode}}
for more info.
Usage: <<<hadoop secondarynamenode [-checkpoint [force]] | [-geteditsize]>>>
*----------------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*----------------------+-----------------------------------------------------------+
| -checkpoint [-force] | Checkpoints the Secondary namenode if EditLog size
| >= fs.checkpoint.size. If <<<-force>>> is used,
| checkpoint irrespective of EditLog size.
*----------------------+-----------------------------------------------------------+
| -geteditsize | Prints the EditLog size.
*----------------------+-----------------------------------------------------------+
Deprecated, use {{{../hadoop-hdfs/HDFSCommands.html#secondarynamenode}
<<<hdfs secondarynamenode>>>}} instead.
* <<<tasktracker>>>


@@ -367,6 +367,9 @@ Release 2.6.0 - UNRELEASED
HDFS-6812. Remove addBlock and replaceBlock from DatanodeDescriptor.
(szetszwo)
HDFS-6781. Separate HDFS commands from CommandsManual.apt.vm. (Akira
Ajisaka via Arpit Agarwal)
OPTIMIZATIONS
HDFS-6690. Deduplicate xattr names in memory. (wang)


@@ -0,0 +1,408 @@
~~ Licensed under the Apache License, Version 2.0 (the "License");
~~ you may not use this file except in compliance with the License.
~~ You may obtain a copy of the License at
~~
~~ http://www.apache.org/licenses/LICENSE-2.0
~~
~~ Unless required by applicable law or agreed to in writing, software
~~ distributed under the License is distributed on an "AS IS" BASIS,
~~ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
~~ See the License for the specific language governing permissions and
~~ limitations under the License. See accompanying LICENSE file.
---
HDFS Commands Guide
---
---
${maven.build.timestamp}
HDFS Commands Guide
%{toc|section=1|fromDepth=2|toDepth=4}
* Overview
All HDFS commands are invoked by the <<<bin/hdfs>>> script. Running the
hdfs script without any arguments prints the description for all
commands.
Usage: <<<hdfs [--config confdir] [COMMAND] [GENERIC_OPTIONS]
[COMMAND_OPTIONS]>>>
Hadoop has an option parsing framework that handles generic options and
runs the command classes.
*-----------------------+---------------+
|| COMMAND_OPTION || Description
*-----------------------+---------------+
| <<<--config confdir>>>| Overwrites the default Configuration directory.
| | Default is <<<${HADOOP_HOME}/conf>>>.
*-----------------------+---------------+
| GENERIC_OPTIONS | The common set of options supported by multiple
| | commands. Full list is
| | {{{../hadoop-common/CommandsManual.html#Generic_Options}here}}.
*-----------------------+---------------+
| COMMAND_OPTIONS | Various commands with their options are described in
| | the following sections. The commands have been
| | grouped into {{{User Commands}}} and
| | {{{Administration Commands}}}.
*-----------------------+---------------+
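For example, to run a command against a non-default configuration directory
(the path below is only illustrative):
<<<hdfs --config /etc/hadoop-custom/conf dfsadmin -report>>>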
* User Commands
Commands useful for users of a hadoop cluster.
** <<<dfs>>>
Usage: <<<hdfs dfs [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>
Runs a filesystem command on the file systems supported in Hadoop.
The various COMMAND_OPTIONS can be found at
{{{../hadoop-common/FileSystemShell.html}File System Shell Guide}}.
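For example, a minimal invocation to list a (hypothetical) home directory:
<<<hdfs dfs -ls /user/hadoop>>>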
** <<<fetchdt>>>
Gets a delegation token from a NameNode.
See {{{./HdfsUserGuide.html#fetchdt}fetchdt}} for more info.
Usage: <<<hdfs fetchdt [GENERIC_OPTIONS]
[--webservice <namenode_http_addr>] <path> >>>
*------------------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------------------+---------------------------------------------+
| <fileName> | File name to store the token into.
*------------------------------+---------------------------------------------+
| --webservice <https_address> | Use HTTP protocol instead of RPC.
*------------------------------+---------------------------------------------+
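As an illustration, the following fetches a delegation token over HTTP from a
hypothetical NameNode HTTP address and stores it in a local file:
<<<hdfs fetchdt --webservice http://nn.example.com:50070 /tmp/my.delegation.token>>>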
** <<<fsck>>>
Runs an HDFS filesystem checking utility.
See {{{./HdfsUserGuide.html#fsck}fsck}} for more info.
Usage: <<<hdfs fsck [GENERIC_OPTIONS] <path>
[-move | -delete | -openforwrite]
[-files [-blocks [-locations | -racks]]]
[-showprogress]>>>
*------------------+---------------------------------------------+
|| COMMAND_OPTION || Description
*------------------+---------------------------------------------+
| <path> | Start checking from this path.
*------------------+---------------------------------------------+
| -move | Move corrupted files to /lost+found
*------------------+---------------------------------------------+
| -delete | Delete corrupted files.
*------------------+---------------------------------------------+
| -openforwrite | Print out files opened for write.
*------------------+---------------------------------------------+
| -files | Print out files being checked.
*------------------+---------------------------------------------+
| -blocks | Print out block report.
*------------------+---------------------------------------------+
| -locations | Print out locations for every block.
*------------------+---------------------------------------------+
| -racks | Print out network topology for data-node locations.
*------------------+---------------------------------------------+
| -showprogress | Print out dots for progress in output. Default is OFF
| | (no progress).
*------------------+---------------------------------------------+
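For example, to check a (hypothetical) directory tree and print the blocks and
their locations for each file:
<<<hdfs fsck /user/hadoop -files -blocks -locations>>>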
* Administration Commands
Commands useful for administrators of a hadoop cluster.
** <<<balancer>>>
Runs a cluster balancing utility. An administrator can simply press Ctrl-C
to stop the rebalancing process. See
{{{./HdfsUserGuide.html#Balancer}Balancer}} for more details.
Usage: <<<hdfs balancer [-threshold <threshold>] [-policy <policy>]>>>
*------------------------+----------------------------------------------------+
|| COMMAND_OPTION | Description
*------------------------+----------------------------------------------------+
| -threshold <threshold> | Percentage of disk capacity. This overwrites the
| | default threshold.
*------------------------+----------------------------------------------------+
| -policy <policy> | <<<datanode>>> (default): Cluster is balanced if
| | each datanode is balanced. \
| | <<<blockpool>>>: Cluster is balanced if each block
| | pool in each datanode is balanced.
*------------------------+----------------------------------------------------+
Note that the <<<blockpool>>> policy is more strict than the <<<datanode>>>
policy.
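For example, to run the balancer with an illustrative threshold of 5 percent
of disk capacity:
<<<hdfs balancer -threshold 5>>>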
** <<<datanode>>>
Runs an HDFS datanode.
Usage: <<<hdfs datanode [-regular | -rollback | -rollingupgrade rollback]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -regular | Normal datanode startup (default).
*-----------------+-----------------------------------------------------------+
| -rollback | Rolls back the datanode to the previous version. This should
| | be used after stopping the datanode and distributing the
| | old Hadoop version.
*-----------------+-----------------------------------------------------------+
| -rollingupgrade rollback | Rolls back a rolling upgrade operation.
*-----------------+-----------------------------------------------------------+
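For example, to start a datanode in normal mode explicitly (equivalent to the
default):
<<<hdfs datanode -regular>>>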
** <<<dfsadmin>>>
Runs an HDFS dfsadmin client.
Usage: <<<hdfs dfsadmin [GENERIC_OPTIONS]
[-report [-live] [-dead] [-decommissioning]]
[-safemode enter | leave | get | wait]
[-saveNamespace]
[-rollEdits]
[-restoreFailedStorage true|false|check]
[-refreshNodes]
[-setQuota <quota> <dirname>...<dirname>]
[-clrQuota <dirname>...<dirname>]
[-setSpaceQuota <quota> <dirname>...<dirname>]
[-clrSpaceQuota <dirname>...<dirname>]
[-finalizeUpgrade]
[-rollingUpgrade [<query>|<prepare>|<finalize>]]
[-metasave filename]
[-refreshServiceAcl]
[-refreshUserToGroupsMappings]
[-refreshSuperUserGroupsConfiguration]
[-refreshCallQueue]
[-refresh <host:ipc_port> <key> [arg1..argn]]
[-printTopology]
[-refreshNamenodes datanodehost:port]
[-deleteBlockPool datanode-host:port blockpoolId [force]]
[-setBalancerBandwidth <bandwidth in bytes per second>]
[-allowSnapshot <snapshotDir>]
[-disallowSnapshot <snapshotDir>]
[-fetchImage <local directory>]
[-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
[-getDatanodeInfo <datanode_host:ipc_port>]
[-help [cmd]]>>>
*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -report [-live] [-dead] [-decommissioning] | Reports basic filesystem
| information and statistics. Optional flags may be used to
| filter the list of displayed DataNodes.
*-----------------+-----------------------------------------------------------+
| -safemode enter\|leave\|get\|wait | Safe mode maintenance command. Safe
| mode is a Namenode state in which it \
| 1. does not accept changes to the name space (read-only) \
| 2. does not replicate or delete blocks. \
| Safe mode is entered automatically at Namenode startup, and
| leaves safe mode automatically when the configured minimum
| percentage of blocks satisfies the minimum replication
| condition. Safe mode can also be entered manually, but then
| it can only be turned off manually as well.
*-----------------+-----------------------------------------------------------+
| -saveNamespace | Save current namespace into storage directories and reset
| edits log. Requires safe mode.
*-----------------+-----------------------------------------------------------+
| -rollEdits | Rolls the edit log on the active NameNode.
*-----------------+-----------------------------------------------------------+
| -restoreFailedStorage true\|false\|check | Turns automatic attempts to
| restore failed storage replicas on or off. If a failed storage
| becomes available again, the system attempts to restore edits
| and/or the fsimage during checkpoint. The 'check' option
| returns the current setting.
*-----------------+-----------------------------------------------------------+
| -refreshNodes | Re-read the hosts and exclude files to update the set of
| Datanodes that are allowed to connect to the Namenode and
| those that should be decommissioned or recommissioned.
*-----------------+-----------------------------------------------------------+
| -setQuota \<quota\> \<dirname\>...\<dirname\> | See
| {{{../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands}HDFS Quotas Guide}}
| for details.
*-----------------+-----------------------------------------------------------+
| -clrQuota \<dirname\>...\<dirname\> | See
| {{{../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands}HDFS Quotas Guide}}
| for details.
*-----------------+-----------------------------------------------------------+
| -setSpaceQuota \<quota\> \<dirname\>...\<dirname\> | See
| {{{../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands}HDFS Quotas Guide}}
| for details.
*-----------------+-----------------------------------------------------------+
| -clrSpaceQuota \<dirname\>...\<dirname\> | See
| {{{../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands}HDFS Quotas Guide}}
| for details.
*-----------------+-----------------------------------------------------------+
| -finalizeUpgrade | Finalize upgrade of HDFS. Datanodes delete their previous
| version working directories, followed by Namenode doing the
| same. This completes the upgrade process.
*-----------------+-----------------------------------------------------------+
| -rollingUpgrade [\<query\>\|\<prepare\>\|\<finalize\>] | See
| {{{../hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade}Rolling Upgrade document}}
| for details.
*-----------------+-----------------------------------------------------------+
| -metasave filename | Save Namenode's primary data structures to <filename> in
| the directory specified by hadoop.log.dir property.
| <filename> is overwritten if it exists.
| <filename> will contain one line for each of the following\
| 1. Datanodes heart beating with Namenode\
| 2. Blocks waiting to be replicated\
| 3. Blocks currently being replicated\
| 4. Blocks waiting to be deleted
*-----------------+-----------------------------------------------------------+
| -refreshServiceAcl | Reload the service-level authorization policy file.
*-----------------+-----------------------------------------------------------+
| -refreshUserToGroupsMappings | Refresh user-to-groups mappings.
*-----------------+-----------------------------------------------------------+
| -refreshSuperUserGroupsConfiguration | Refresh superuser proxy groups mappings.
*-----------------+-----------------------------------------------------------+
| -refreshCallQueue | Reload the call queue from config.
*-----------------+-----------------------------------------------------------+
| -refresh \<host:ipc_port\> \<key\> [arg1..argn] | Triggers a runtime-refresh
| of the resource specified by \<key\> on \<host:ipc_port\>.
| All remaining arguments are sent to the host.
*-----------------+-----------------------------------------------------------+
| -printTopology | Print a tree of the racks and their nodes as reported by
| the Namenode
*-----------------+-----------------------------------------------------------+
| -refreshNamenodes datanodehost:port | For the given datanode, reloads the
| configuration files, stops serving the removed block-pools
| and starts serving new block-pools.
*-----------------+-----------------------------------------------------------+
| -deleteBlockPool datanode-host:port blockpoolId [force] | If force is passed,
| block pool directory for the given blockpool id on the
| given datanode is deleted along with its contents,
| otherwise the directory is deleted only if it is empty.
| The command will fail if datanode is still serving the
| block pool. Refer to refreshNamenodes to shutdown a block
| pool service on a datanode.
*-----------------+-----------------------------------------------------------+
| -setBalancerBandwidth \<bandwidth in bytes per second\> | Changes the network
| bandwidth used by each datanode during HDFS block
| balancing. \<bandwidth\> is the maximum number of bytes per
| second that will be used by each datanode. This value
| overrides the dfs.balance.bandwidthPerSec parameter.\
| NOTE: The new value is not persistent on the DataNode.
*-----------------+-----------------------------------------------------------+
| -allowSnapshot \<snapshotDir\> | Allows snapshots of a directory to be
| created. If the operation completes successfully, the
| directory becomes snapshottable.
*-----------------+-----------------------------------------------------------+
| -disallowSnapshot \<snapshotDir\> | Disallows snapshots of a directory from
| being created. All snapshots of the directory must be deleted
| before disallowing snapshots.
*-----------------+-----------------------------------------------------------+
| -fetchImage \<local directory\> | Downloads the most recent fsimage from the
| NameNode and saves it in the specified local directory.
*-----------------+-----------------------------------------------------------+
| -shutdownDatanode \<datanode_host:ipc_port\> [upgrade] | Submit a shutdown
| request for the given datanode. See
| {{{./HdfsRollingUpgrade.html#dfsadmin_-shutdownDatanode}Rolling Upgrade document}}
| for details.
*-----------------+-----------------------------------------------------------+
| -getDatanodeInfo \<datanode_host:ipc_port\> | Get the information about the
| given datanode. See
| {{{./HdfsRollingUpgrade.html#dfsadmin_-getDatanodeInfo}Rolling Upgrade document}}
| for details.
*-----------------+-----------------------------------------------------------+
| -help [cmd] | Displays help for the given command or all commands if none
| is specified.
*-----------------+-----------------------------------------------------------+
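As an illustrative sequence, the following enters safe mode and then saves the
current namespace (which requires safe mode):
<<<hdfs dfsadmin -safemode enter>>>
<<<hdfs dfsadmin -saveNamespace>>>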
** <<<namenode>>>
Runs the namenode. More info about the upgrade, rollback and finalize is at
{{{./HdfsUserGuide.html#Upgrade_and_Rollback}Upgrade Rollback}}.
Usage: <<<hdfs namenode [-backup] |
[-checkpoint] |
[-format [-clusterid cid ] [-force] [-nonInteractive] ] |
[-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
[-rollback] |
[-rollingUpgrade <downgrade|rollback> ] |
[-finalize] |
[-importCheckpoint] |
[-initializeSharedEdits] |
[-bootstrapStandby] |
[-recover [-force] ] |
[-metadataVersion ]>>>
*--------------------+--------------------------------------------------------+
|| COMMAND_OPTION || Description
*--------------------+--------------------------------------------------------+
| -backup | Start backup node.
*--------------------+--------------------------------------------------------+
| -checkpoint | Start checkpoint node.
*--------------------+--------------------------------------------------------+
| -format [-clusterid cid] [-force] [-nonInteractive] | Formats the specified
| NameNode. It starts the NameNode, formats it and then
| shuts it down. The -force option formats even if the name
| directory exists. The -nonInteractive option aborts if the
| name directory exists, unless -force is also specified.
*--------------------+--------------------------------------------------------+
| -upgrade [-clusterid cid] [-renameReserved\<k-v pairs\>] | The NameNode should
| be started with the upgrade option after the distribution
| of a new Hadoop version.
*--------------------+--------------------------------------------------------+
| -upgradeOnly [-clusterid cid] [-renameReserved\<k-v pairs\>] | Upgrades the
| specified NameNode and then shuts it down.
*--------------------+--------------------------------------------------------+
| -rollback | Rolls back the NameNode to the previous version. This
| should be used after stopping the cluster and
| distributing the old Hadoop version.
*--------------------+--------------------------------------------------------+
| -rollingUpgrade \<downgrade\|rollback\|started\> | See
| {{{./HdfsRollingUpgrade.html#NameNode_Startup_Options}Rolling Upgrade document}}
| for details.
*--------------------+--------------------------------------------------------+
| -finalize | Finalize removes the previous state of the file
| system. The most recent upgrade becomes permanent and the
| rollback option is no longer available. After finalization
| it shuts the NameNode down.
*--------------------+--------------------------------------------------------+
| -importCheckpoint | Loads the image from a checkpoint directory and saves
| it into the current one. The checkpoint directory is read
| from the property fs.checkpoint.dir.
*--------------------+--------------------------------------------------------+
| -initializeSharedEdits | Format a new shared edits dir and copy in enough
| edit log segments so that the standby NameNode can start
| up.
*--------------------+--------------------------------------------------------+
| -bootstrapStandby | Allows the standby NameNode's storage directories to be
| bootstrapped by copying the latest namespace snapshot
| from the active NameNode. This is used when first
| configuring an HA cluster.
*--------------------+--------------------------------------------------------+
| -recover [-force] | Recover lost metadata on a corrupt filesystem. See
| {{{./HdfsUserGuide.html#Recovery_Mode}HDFS User Guide}}
| for details.
*--------------------+--------------------------------------------------------+
| -metadataVersion | Verify that configured directories exist, then print the
| metadata versions of the software and the image.
*--------------------+--------------------------------------------------------+
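For example, to format a new NameNode non-interactively with an illustrative
cluster ID:
<<<hdfs namenode -format -clusterid mycluster -nonInteractive>>>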
** <<<secondarynamenode>>>
Runs the HDFS secondary namenode.
See {{{./HdfsUserGuide.html#Secondary_NameNode}Secondary Namenode}}
for more info.
Usage: <<<hdfs secondarynamenode [-checkpoint [force]] | [-format] |
[-geteditsize]>>>
*----------------------+------------------------------------------------------+
|| COMMAND_OPTION || Description
*----------------------+------------------------------------------------------+
| -checkpoint [force] | Checkpoints the SecondaryNameNode if EditLog size
| >= fs.checkpoint.size. If <<<force>>> is used, the
| checkpoint is performed irrespective of the EditLog size.
*----------------------+------------------------------------------------------+
| -format | Format the local storage during startup.
*----------------------+------------------------------------------------------+
| -geteditsize | Prints the number of uncheckpointed transactions on
| the NameNode.
*----------------------+------------------------------------------------------+
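For example, to force an immediate checkpoint regardless of the EditLog size:
<<<hdfs secondarynamenode -checkpoint force>>>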


@@ -143,8 +143,8 @@ HDFS Users Guide
** DFSAdmin Command
The <<<bin/hadoop dfsadmin>>> command supports a few HDFS administration
related operations. The <<<bin/hadoop dfsadmin -help>>> command lists all the
The <<<bin/hdfs dfsadmin>>> command supports a few HDFS administration
related operations. The <<<bin/hdfs dfsadmin -help>>> command lists all the
commands currently supported. For example:
* <<<-report>>>: reports basic statistics of HDFS. Some of this
@@ -172,7 +172,7 @@ HDFS Users Guide
of racks and datanodes attached to the racks as viewed by the
NameNode.
For command usage, see {{{../hadoop-common/CommandsManual.html#dfsadmin}dfsadmin}}.
For command usage, see {{{./HDFSCommands.html#dfsadmin}dfsadmin}}.
* Secondary NameNode
@@ -207,7 +207,7 @@ HDFS Users Guide
primary NameNode if necessary.
For command usage,
see {{{../hadoop-common/CommandsManual.html#secondarynamenode}secondarynamenode}}.
see {{{./HDFSCommands.html#secondarynamenode}secondarynamenode}}.
* Checkpoint Node
@@ -249,7 +249,7 @@ HDFS Users Guide
Multiple checkpoint nodes may be specified in the cluster configuration
file.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Backup Node
@@ -291,7 +291,7 @@ HDFS Users Guide
For a complete discussion of the motivation behind the creation of the
Backup node and Checkpoint node, see {{{https://issues.apache.org/jira/browse/HADOOP-4539}HADOOP-4539}}.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Import Checkpoint
@@ -314,7 +314,7 @@ HDFS Users Guide
verifies that the image in <<<dfs.namenode.checkpoint.dir>>> is consistent,
but does not modify it in any way.
For command usage, see {{{../hadoop-common/CommandsManual.html#namenode}namenode}}.
For command usage, see {{{./HDFSCommands.html#namenode}namenode}}.
* Balancer
@@ -341,7 +341,7 @@ HDFS Users Guide
A brief administrator's guide for balancer is available at
{{{https://issues.apache.org/jira/browse/HADOOP-1652}HADOOP-1652}}.
For command usage, see {{{../hadoop-common/CommandsManual.html#balancer}balancer}}.
For command usage, see {{{./HDFSCommands.html#balancer}balancer}}.
* Rack Awareness
@@ -368,7 +368,7 @@ HDFS Users Guide
allow any modifications to file system or blocks. Normally the NameNode
leaves Safemode automatically after the DataNodes have reported that
most file system blocks are available. If required, HDFS could be
placed in Safemode explicitly using <<<bin/hadoop dfsadmin -safemode>>>
placed in Safemode explicitly using <<<bin/hdfs dfsadmin -safemode>>>
command. NameNode front page shows whether Safemode is on or off. A
more detailed description and configuration is maintained as JavaDoc
for <<<setSafeMode()>>>.
@@ -383,8 +383,8 @@ HDFS Users Guide
most of the recoverable failures. By default fsck ignores open files
but provides an option to select all files during reporting. The HDFS
fsck command is not a Hadoop shell command. It can be run as
<<<bin/hadoop fsck>>>. For command usage, see
{{{../hadoop-common/CommandsManual.html#fsck}fsck}}. fsck can be run on
<<<bin/hdfs fsck>>>. For command usage, see
{{{./HDFSCommands.html#fsck}fsck}}. fsck can be run on
the whole file system or on a subset of files.
* fetchdt
@@ -395,11 +395,11 @@ HDFS Users Guide
Utility uses either RPC or HTTPS (over Kerberos) to get the token, and
thus requires kerberos tickets to be present before the run (run kinit
to get the tickets). The HDFS fetchdt command is not a Hadoop shell
command. It can be run as <<<bin/hadoop fetchdt DTfile>>>. After you got
command. It can be run as <<<bin/hdfs fetchdt DTfile>>>. After you get
the token, you can run an HDFS command without having Kerberos tickets,
by pointing <<<HADOOP_TOKEN_FILE_LOCATION>>> environmental variable to the
delegation token file. For command usage, see
{{{../hadoop-common/CommandsManual.html#fetchdt}fetchdt}} command.
{{{./HDFSCommands.html#fetchdt}fetchdt}} command.
* Recovery Mode
@@ -533,5 +533,4 @@ HDFS Users Guide
* Explore {{{./hdfs-default.xml}hdfs-default.xml}}. It includes
brief description of most of the configuration variables available.
* {{{../hadoop-common/CommandsManual.html}Hadoop Commands Guide}}:
Hadoop commands usage.
* {{{./HDFSCommands.html}HDFS Commands Guide}}: HDFS commands usage.


@@ -69,6 +69,7 @@
<menu name="HDFS" inherit="top">
<item name="HDFS User Guide" href="hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html"/>
<item name="HDFS Commands Reference" href="hadoop-project-dist/hadoop-hdfs/HDFSCommands.html"/>
<item name="High Availability With QJM" href="hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html"/>
<item name="High Availability With NFS" href="hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html"/>
<item name="Federation" href="hadoop-project-dist/hadoop-hdfs/Federation.html"/>