HDFS Commands Guide

%{toc|section=1|fromDepth=2|toDepth=3}

* Overview

All HDFS commands are invoked by the <<<bin/hdfs>>> script. Running the hdfs
script without any arguments prints the description for all commands.

Usage: <<<hdfs [SHELL_OPTIONS] COMMAND [GENERIC_OPTIONS] [COMMAND_OPTIONS]>>>

Hadoop has an option parsing framework that parses generic options and then
runs the requested class.

*-------------------------+--------------------------------------------------+
|| COMMAND_OPTION         || Description
*-------------------------+--------------------------------------------------+
| SHELL_OPTIONS           | The common set of shell options. These are
|                         | documented on the
|                         | {{{../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell_Options}Commands Manual}} page.
*-------------------------+--------------------------------------------------+
| GENERIC_OPTIONS         | The common set of options supported by multiple
|                         | commands. See the Hadoop
|                         | {{{../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options}Commands Manual}} for more information.
*-------------------------+--------------------------------------------------+
| COMMAND COMMAND_OPTIONS | Various commands with their options are described
|                         | in the following sections. The commands have been
|                         | grouped into {{User Commands}} and
|                         | {{Administration Commands}}.
*-------------------------+--------------------------------------------------+

* {User Commands}

Commands useful for users of a Hadoop cluster.

** <<<classpath>>>

Usage: <<<hdfs classpath>>>

Prints the class path needed to get the Hadoop jar and the required libraries.
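
For example, the printed class path can seed the environment for a standalone
JVM tool; the <<<MyHdfsTool>>> class name below is hypothetical:

---
# Export the Hadoop client class path (MyHdfsTool is a hypothetical class)
export CLASSPATH=$(hdfs classpath)
java -cp "$CLASSPATH" MyHdfsTool
---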

** <<<dfs>>>

Usage: <<<hdfs dfs [COMMAND [COMMAND_OPTIONS]]>>>

Run a filesystem command on the file system supported in Hadoop. The various
COMMAND_OPTIONS can be found at the
{{{../hadoop-common/FileSystemShell.html}File System Shell Guide}}.
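
For example, a few common file system operations, assuming a hypothetical
<<</user/alice>>> home directory:

---
# Create a directory, upload a local file, and list the result
hdfs dfs -mkdir -p /user/alice/data
hdfs dfs -put localfile.txt /user/alice/data
hdfs dfs -ls /user/alice/data
---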

** <<<fetchdt>>>

Usage: <<<hdfs fetchdt [--webservice <namenode_http_addr>] <path> >>>

*------------------------------+---------------------------------------------+
|| COMMAND_OPTION              || Description
*------------------------------+---------------------------------------------+
| --webservice <https_address> | Use HTTP protocol instead of RPC.
*------------------------------+---------------------------------------------+
| <fileName>                   | File name to store the token into.
*------------------------------+---------------------------------------------+

Gets a Delegation Token from a NameNode.
See {{{./HdfsUserGuide.html#fetchdt}fetchdt}} for more info.
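
For example, a sketch that fetches a token over HTTP from a hypothetical
NameNode address and stores it in a local file:

---
# nn1.example.com:50070 is a hypothetical NameNode HTTP address
hdfs fetchdt --webservice http://nn1.example.com:50070 /tmp/my.token
---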

** <<<fsck>>>

Usage:

---
hdfs fsck <path>
     [-list-corruptfileblocks |
     [-move | -delete | -openforwrite]
     [-files [-blocks [-locations | -racks]]]
     [-includeSnapshots] [-showprogress]
---

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| <path>                 | Start checking from this path.
*------------------------+---------------------------------------------+
| -delete                | Delete corrupted files.
*------------------------+---------------------------------------------+
| -files                 | Print out files being checked.
*------------------------+---------------------------------------------+
| -files -blocks         | Print out the block report.
*------------------------+---------------------------------------------+
| -files -blocks -locations | Print out locations for every block.
*------------------------+---------------------------------------------+
| -files -blocks -racks  | Print out network topology for data-node locations.
*------------------------+---------------------------------------------+
| -includeSnapshots      | Include snapshot data if the given path
|                        | indicates a snapshottable directory or
|                        | if there are snapshottable directories under it.
*------------------------+---------------------------------------------+
| -list-corruptfileblocks| Print out list of missing blocks and
|                        | files they belong to.
*------------------------+---------------------------------------------+
| -move                  | Move corrupted files to /lost+found.
*------------------------+---------------------------------------------+
| -openforwrite          | Print out files opened for write.
*------------------------+---------------------------------------------+
| -showprogress          | Print out dots for progress in output. Default is OFF
|                        | (no progress).
*------------------------+---------------------------------------------+

Runs the HDFS filesystem checking utility.
See {{{./HdfsUserGuide.html#fsck}fsck}} for more info.
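
For example, to check the whole namespace and print the files, blocks and
block locations as they are checked:

---
hdfs fsck / -files -blocks -locations
---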

** <<<getconf>>>

Usage:

---
hdfs getconf -namenodes
hdfs getconf -secondaryNameNodes
hdfs getconf -backupNodes
hdfs getconf -includeFile
hdfs getconf -excludeFile
hdfs getconf -nnRpcAddresses
hdfs getconf -confKey [key]
---

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -namenodes             | gets list of namenodes in the cluster.
*------------------------+---------------------------------------------+
| -secondaryNameNodes    | gets list of secondary namenodes in the cluster.
*------------------------+---------------------------------------------+
| -backupNodes           | gets list of backup nodes in the cluster.
*------------------------+---------------------------------------------+
| -includeFile           | gets the include file path that defines the datanodes that can join the cluster.
*------------------------+---------------------------------------------+
| -excludeFile           | gets the exclude file path that defines the datanodes that need to be decommissioned.
*------------------------+---------------------------------------------+
| -nnRpcAddresses        | gets the namenode rpc addresses.
*------------------------+---------------------------------------------+
| -confKey [key]         | gets a specific key from the configuration.
*------------------------+---------------------------------------------+

Gets configuration information from the configuration directory, post-processing.
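
For example, to print the effective value of a single configuration key:

---
hdfs getconf -confKey dfs.replication
---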

** <<<groups>>>

Usage: <<<hdfs groups [username ...]>>>

Returns the group information given one or more usernames.
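
For example, with hypothetical usernames:

---
# alice and bob are hypothetical usernames
hdfs groups alice bob
---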

** <<<lsSnapshottableDir>>>

Usage: <<<hdfs lsSnapshottableDir [-help]>>>

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -help                  | print help
*------------------------+---------------------------------------------+

Get the list of snapshottable directories. When this is run as a super user,
it returns all snapshottable directories. Otherwise it returns those directories
that are owned by the current user.

** <<<jmxget>>>

Usage: <<<hdfs jmxget [-localVM ConnectorURL | -port port | -server mbeanserver | -service service]>>>

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -help                  | print help
*------------------------+---------------------------------------------+
| -localVM ConnectorURL  | connect to the VM on the same machine
*------------------------+---------------------------------------------+
| -port <mbean server port> | specify mbean server port, if missing
|                        | it will try to connect to MBean Server in
|                        | the same VM
*------------------------+---------------------------------------------+
| -service               | specify the jmx service, either DataNode or
|                        | NameNode. NameNode is the default.
*------------------------+---------------------------------------------+

Dump JMX information from a service.

** <<<oev>>>

Usage: <<<hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE>>>

*** Required command line arguments:

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -i,--inputFile <arg>   | edits file to process, xml (case
|                        | insensitive) extension means XML format,
|                        | any other filename means binary format
*------------------------+---------------------------------------------+
| -o,--outputFile <arg>  | Name of output file. If the specified
|                        | file exists, it will be overwritten,
|                        | format of the file is determined
|                        | by -p option
*------------------------+---------------------------------------------+

*** Optional command line arguments:

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -f,--fix-txids         | Renumber the transaction IDs in the input,
|                        | so that there are no gaps or invalid transaction IDs.
*------------------------+---------------------------------------------+
| -h,--help              | Display usage information and exit
*------------------------+---------------------------------------------+
| -r,--recover           | When reading binary edit logs, use recovery
|                        | mode. This will give you the chance to skip
|                        | corrupt parts of the edit log.
*------------------------+---------------------------------------------+
| -p,--processor <arg>   | Select which type of processor to apply
|                        | against image file, currently supported
|                        | processors are: binary (native binary format
|                        | that Hadoop uses), xml (default, XML
|                        | format), stats (prints statistics about
|                        | edits file)
*------------------------+---------------------------------------------+
| -v,--verbose           | More verbose output, prints the input and
|                        | output filenames, for processors that write
|                        | to a file, also output to screen. On large
|                        | image files this will dramatically increase
|                        | processing time (default is false).
*------------------------+---------------------------------------------+

Hadoop offline edits viewer.
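
For example, a sketch that converts a binary edits segment to XML and back;
the segment file name is hypothetical:

---
# edits_0000000000000000001-0000000000000000100 is a hypothetical segment name
hdfs oev -i edits_0000000000000000001-0000000000000000100 -o edits.xml
hdfs oev -i edits.xml -o edits.new -p binary
---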

** <<<oiv>>>

Usage: <<<hdfs oiv [OPTIONS] -i INPUT_FILE>>>

*** Required command line arguments:

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -i,--inputFile <arg>   | image file to process, xml (case
|                        | insensitive) extension means XML format,
|                        | any other filename means binary format
*------------------------+---------------------------------------------+

*** Optional command line arguments:

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -h,--help              | Display usage information and exit
*------------------------+---------------------------------------------+
| -o,--outputFile <arg>  | Name of output file. If the specified
|                        | file exists, it will be overwritten,
|                        | format of the file is determined
|                        | by -p option
*------------------------+---------------------------------------------+
| -p,--processor <arg>   | Select which type of processor to apply
|                        | against image file, currently supported
|                        | processors are: binary (native binary format
|                        | that Hadoop uses), xml (default, XML
|                        | format), stats (prints statistics about
|                        | image file)
*------------------------+---------------------------------------------+

Hadoop Offline Image Viewer for newer image files.

** <<<oiv_legacy>>>

Usage: <<<hdfs oiv_legacy [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE>>>

*------------------------+---------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+---------------------------------------------+
| -h,--help              | Display usage information and exit
*------------------------+---------------------------------------------+
| -i,--inputFile <arg>   | image file to process, xml (case
|                        | insensitive) extension means XML format,
|                        | any other filename means binary format
*------------------------+---------------------------------------------+
| -o,--outputFile <arg>  | Name of output file. If the specified
|                        | file exists, it will be overwritten,
|                        | format of the file is determined
|                        | by -p option
*------------------------+---------------------------------------------+

Hadoop offline image viewer for older versions of Hadoop.

** <<<snapshotDiff>>>

Usage: <<<hdfs snapshotDiff <path> <fromSnapshot> <toSnapshot> >>>

Determine the difference between HDFS snapshots. See the
{{{./HdfsSnapshots.html#Get_Snapshots_Difference_Report}HDFS Snapshot Documentation}} for more information.
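
For example, assuming a hypothetical snapshottable directory
<<</user/alice>>> with snapshots <<<s1>>> and <<<s2>>>:

---
# /user/alice, s1 and s2 are hypothetical names
hdfs snapshotDiff /user/alice s1 s2
---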

** <<<version>>>

Usage: <<<hdfs version>>>

Prints the version.

* {Administration Commands}

Commands useful for administrators of a Hadoop cluster.

** <<<balancer>>>

Usage: <<<hdfs balancer [-threshold <threshold>] [-policy <policy>]>>>

*------------------------+----------------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+----------------------------------------------------+
| -policy <policy>       | <<<datanode>>> (default): Cluster is balanced if
|                        | each datanode is balanced. \
|                        | <<<blockpool>>>: Cluster is balanced if each block
|                        | pool in each datanode is balanced.
*------------------------+----------------------------------------------------+
| -threshold <threshold> | Percentage of disk capacity. This overwrites the
|                        | default threshold.
*------------------------+----------------------------------------------------+

Runs a cluster balancing utility. An administrator can simply press Ctrl-C
to stop the rebalancing process. See
{{{./HdfsUserGuide.html#Balancer}Balancer}} for more details.

Note that the <<<blockpool>>> policy is stricter than the <<<datanode>>>
policy.
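
For example, to balance the cluster until every datanode's utilization is
within 5% of the cluster average:

---
hdfs balancer -threshold 5
---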

** <<<cacheadmin>>>

Usage: <<<hdfs cacheadmin -addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]>>>

See the {{{./CentralizedCacheManagement.html#cacheadmin_command-line_interface}HDFS Cache Administration Documentation}} for more information.
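
For example, a sketch that caches a hypothetical directory in a hypothetical
cache pool:

---
# /hot/data and sales-pool are hypothetical; the pool must already exist
hdfs cacheadmin -addDirective -path /hot/data -pool sales-pool -replication 3
---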

** <<<crypto>>>

Usage:

---
hdfs crypto -createZone -keyName <keyName> -path <path>
hdfs crypto -help <command-name>
hdfs crypto -listZones
---

See the {{{./TransparentEncryption.html#crypto_command-line_interface}HDFS Transparent Encryption Documentation}} for more information.

** <<<datanode>>>

Usage: <<<hdfs datanode [-regular | -rollback | -rollingupgrade rollback]>>>

*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -regular        | Normal datanode startup (default).
*-----------------+-----------------------------------------------------------+
| -rollback       | Rollback the datanode to the previous version. This should
|                 | be used after stopping the datanode and distributing the
|                 | old Hadoop version.
*-----------------+-----------------------------------------------------------+
| -rollingupgrade rollback | Rollback a rolling upgrade operation.
*-----------------+-----------------------------------------------------------+

Runs an HDFS datanode.

** <<<dfsadmin>>>

Usage:

------------------------------------------
hdfs dfsadmin [GENERIC_OPTIONS]
          [-report [-live] [-dead] [-decommissioning]]
          [-safemode enter | leave | get | wait]
          [-saveNamespace]
          [-getDatanodeInfo <datanode_host:ipc_port>]
          [-triggerBlockReport [-incremental] <datanode_host:ipc_port>]
          [-help [cmd]]
------------------------------------------

*-----------------+-----------------------------------------------------------+
|| COMMAND_OPTION || Description
*-----------------+-----------------------------------------------------------+
| -allowSnapshot \<snapshotDir\> | Allow snapshots of a directory to be
|                 | created. If the operation completes successfully, the
|                 | directory becomes snapshottable. See the
|                 | {{{./HdfsSnapshots.html}HDFS Snapshot Documentation}} for more information.
*-----------------+-----------------------------------------------------------+
| -disallowSnapshot \<snapshotDir\> | Disallow snapshots of a directory to
|                 | be created. All snapshots of the directory must be deleted
|                 | before disallowing snapshots. See the
|                 | {{{./HdfsSnapshots.html}HDFS Snapshot Documentation}} for more information.
*-----------------+-----------------------------------------------------------+
| -fetchImage \<local directory\> | Downloads the most recent fsimage from the
|                 | NameNode and saves it in the specified local directory.
*-----------------+-----------------------------------------------------------+

Runs an HDFS dfsadmin client.
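
For example, two common administrative checks:

---
# Summarize the live datanodes, then query the current safemode state
hdfs dfsadmin -report -live
hdfs dfsadmin -safemode get
---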

** <<<haadmin>>>

Usage:

---
hdfs haadmin -checkHealth <serviceId>
hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId>
hdfs haadmin -getServiceState <serviceId>
hdfs haadmin -help <command>
hdfs haadmin -transitionToActive <serviceId> [--forceactive]
hdfs haadmin -transitionToStandby <serviceId>
---

*--------------------+--------------------------------------------------------+
|| COMMAND_OPTION    || Description
*--------------------+--------------------------------------------------------+
| -checkHealth       | check the health of the given NameNode
*--------------------+--------------------------------------------------------+
| -failover          | initiate a failover between two NameNodes
*--------------------+--------------------------------------------------------+
| -getServiceState   | determine whether the given NameNode is Active or Standby
*--------------------+--------------------------------------------------------+
| -transitionToActive | transition the state of the given NameNode to Active (Warning: No fencing is done)
*--------------------+--------------------------------------------------------+
| -transitionToStandby | transition the state of the given NameNode to Standby (Warning: No fencing is done)
*--------------------+--------------------------------------------------------+

See {{{./HDFSHighAvailabilityWithNFS.html#Administrative_commands}HDFS HA with NFS}} or
{{{./HDFSHighAvailabilityWithQJM.html#Administrative_commands}HDFS HA with QJM}} for more
information on this command.

** <<<journalnode>>>

Usage: <<<hdfs journalnode>>>

This command starts a journalnode for use with {{{./HDFSHighAvailabilityWithQJM.html#Administrative_commands}HDFS HA with QJM}}.

** <<<mover>>>

Usage: <<<hdfs mover [-p <files/dirs> | -f <local file name>]>>>

*--------------------+--------------------------------------------------------+
|| COMMAND_OPTION    || Description
*--------------------+--------------------------------------------------------+
| -f \<local file\>  | Specify a local file containing a list of HDFS files/dirs to migrate.
*--------------------+--------------------------------------------------------+
| -p \<files/dirs\>  | Specify a space separated list of HDFS files/dirs to migrate.
*--------------------+--------------------------------------------------------+

Runs the data migration utility.
See {{{./ArchivalStorage.html#Mover_-_A_New_Data_Migration_Tool}Mover}} for more details.

Note that when both -p and -f options are omitted, the default path is the root directory.
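
For example, to migrate the blocks under a hypothetical archive directory so
that they satisfy the directory's storage policy:

---
# /archive is a hypothetical directory with a storage policy set
hdfs mover -p /archive
---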

** <<<namenode>>>

Usage:

------------------------------------------
hdfs namenode [-backup] |
          [-checkpoint] |
          [-format [-clusterid cid ] [-force] [-nonInteractive] ] |
          [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
          [-bootstrapStandby] |
          [-recover [-force] ] |
          [-metadataVersion ]
------------------------------------------

*--------------------+--------------------------------------------------------+
|| COMMAND_OPTION    || Description
*--------------------+--------------------------------------------------------+
| -metadataVersion   | Verify that configured directories exist, then print the
|                    | metadata versions of the software and the image.
*--------------------+--------------------------------------------------------+

Runs the namenode. More info about the upgrade, rollback and finalize is at
{{{./HdfsUserGuide.html#Upgrade_and_Rollback}Upgrade Rollback}}.

** <<<nfs3>>>

Usage: <<<hdfs nfs3>>>

This command starts the NFS3 gateway for use with the {{{./HdfsNfsGateway.html#Start_and_stop_NFS_gateway_service}HDFS NFS3 Service}}.

** <<<portmap>>>

Usage: <<<hdfs portmap>>>

This command starts the RPC portmap for use with the {{{./HdfsNfsGateway.html#Start_and_stop_NFS_gateway_service}HDFS NFS3 Service}}.

** <<<secondarynamenode>>>

Usage: <<<hdfs secondarynamenode [-checkpoint [force]] | [-format] |
      [-geteditsize]>>>

*----------------------+------------------------------------------------------+
|| COMMAND_OPTION      || Description
*----------------------+------------------------------------------------------+
| -geteditsize         | Returns the number of uncheckpointed transactions on
|                      | the NameNode.
*----------------------+------------------------------------------------------+

Runs the HDFS secondary namenode.
See {{{./HdfsUserGuide.html#Secondary_NameNode}Secondary Namenode}}
for more info.

** <<<storagepolicies>>>

Usage: <<<hdfs storagepolicies>>>

Lists out all storage policies. See the {{{./ArchivalStorage.html}HDFS Storage Policy Documentation}} for more information.

** <<<zkfc>>>

Usage: <<<hdfs zkfc [-formatZK [-force] [-nonInteractive]]>>>

*----------------------+------------------------------------------------------+
|| COMMAND_OPTION      || Description
*----------------------+------------------------------------------------------+
| -formatZK            | Format the ZooKeeper instance
*----------------------+------------------------------------------------------+
| -h                   | Display help
*----------------------+------------------------------------------------------+

This command starts a ZooKeeper Failover Controller process for use with {{{./HDFSHighAvailabilityWithQJM.html#Administrative_commands}HDFS HA with QJM}}.

* Debug Commands

Useful commands to help administrators debug HDFS issues, like validating
block files and calling recoverLease.

** <<<verify>>>

Usage: <<<hdfs debug verify [-meta <metadata-file>] [-block <block-file>]>>>

*------------------------+----------------------------------------------------+
|| COMMAND_OPTION        || Description
*------------------------+----------------------------------------------------+
| -block <block-file>    | Optional parameter to specify the absolute path for
|                        | the block file on the local file system of the data
|                        | node.
*------------------------+----------------------------------------------------+
| -meta <metadata-file>  | Absolute path for the metadata file on the local file
|                        | system of the data node.
*------------------------+----------------------------------------------------+

Verify HDFS metadata and block files. If a block file is specified, we
will verify that the checksums in the metadata file match the block
file.
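
For example, a sketch with hypothetical local paths on a datanode:

---
# Both paths are hypothetical local files on the datanode
hdfs debug verify -meta /data/dn/blk_1073741825_1001.meta -block /data/dn/blk_1073741825
---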

** <<<recoverLease>>>

Usage: <<<hdfs debug recoverLease [-path <path>] [-retries <num-retries>]>>>

*-------------------------------+---------------------------------------------+
|| COMMAND_OPTION               || Description
*-------------------------------+---------------------------------------------+
| [-path <path>]                | HDFS path for which to recover the lease.
*-------------------------------+---------------------------------------------+
| [-retries <num-retries>]      | Number of times the client will retry calling
|                               | recoverLease. The default number of retries
|                               | is 1.
*-------------------------------+---------------------------------------------+

Recover the lease on the specified path. The path must reside on an
HDFS filesystem. The default number of retries is 1.
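
For example, with one extra retry on a hypothetical path:

---
# /user/alice/file.txt is a hypothetical HDFS path
hdfs debug recoverLease -path /user/alice/file.txt -retries 2
---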
|