<div id="leftColumn">
|
|
<div id="navcolumn">
|
|
|
|
<h5>General</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../index.html">Overview</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Common</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
|
|
</li>
|
|
</ul>
|
|
<h5>HDFS</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
|
|
</li>
|
|
</ul>
|
|
<h5>MapReduce</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
|
|
</li>
|
|
</ul>
|
|
<h5>MapReduce REST APIs</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
|
|
</li>
|
|
</ul>
|
|
<h5>YARN</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
|
|
</li>
|
|
</ul>
|
|
<h5>YARN REST APIs</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
|
|
</li>
|
|
</ul>
|
|
<h5>YARN Service</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Hadoop Compatible File Systems</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Auth</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-auth/index.html">Overview</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-auth/Examples.html">Examples</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Tools</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Reference</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../api/index.html">Java API docs</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
|
|
</li>
|
|
</ul>
|
|
<h5>Configuration</h5>
|
|
<ul>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
|
|
</li>
|
|
<li class="none">
|
|
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
|
|
</li>
|
|
</ul>
|
|
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
|
|
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
|
|
</a>
|
|
|
|
</div>
|
|
</div>
|
|
<div id="bodyColumn">
|
|
<div id="contentBox">
|
|
<!---
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<h1>HDFS Commands Guide</h1>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#User_Commands">User Commands</a>
<ul>
<li><a href="#classpath">classpath</a></li>
<li><a href="#dfs">dfs</a></li>
<li><a href="#envvars">envvars</a></li>
<li><a href="#fetchdt">fetchdt</a></li>
<li><a href="#fsck">fsck</a></li>
<li><a href="#getconf">getconf</a></li>
<li><a href="#groups">groups</a></li>
<li><a href="#httpfs">httpfs</a></li>
<li><a href="#lsSnapshottableDir">lsSnapshottableDir</a></li>
<li><a href="#lsSnapshot">lsSnapshot</a></li>
<li><a href="#jmxget">jmxget</a></li>
<li><a href="#oev">oev</a></li>
<li><a href="#oiv">oiv</a></li>
<li><a href="#oiv_legacy">oiv_legacy</a></li>
<li><a href="#snapshotDiff">snapshotDiff</a></li>
<li><a href="#version">version</a></li></ul></li>
<li><a href="#Administration_Commands">Administration Commands</a>
<ul>
<li><a href="#balancer">balancer</a></li>
<li><a href="#cacheadmin">cacheadmin</a></li>
<li><a href="#crypto">crypto</a></li>
<li><a href="#datanode">datanode</a></li>
<li><a href="#dfsadmin">dfsadmin</a></li>
<li><a href="#dfsrouter">dfsrouter</a></li>
<li><a href="#dfsrouteradmin">dfsrouteradmin</a></li>
<li><a href="#diskbalancer">diskbalancer</a></li>
<li><a href="#ec">ec</a></li>
<li><a href="#haadmin">haadmin</a></li>
<li><a href="#journalnode">journalnode</a></li>
<li><a href="#mover">mover</a></li>
<li><a href="#namenode">namenode</a></li>
<li><a href="#nfs3">nfs3</a></li>
<li><a href="#portmap">portmap</a></li>
<li><a href="#secondarynamenode">secondarynamenode</a></li>
<li><a href="#storagepolicies">storagepolicies</a></li>
<li><a href="#zkfc">zkfc</a></li></ul></li>
<li><a href="#Debug_Commands">Debug Commands</a>
<ul>
<li><a href="#verifyMeta">verifyMeta</a></li>
<li><a href="#computeMeta">computeMeta</a></li>
<li><a href="#recoverLease">recoverLease</a></li>
<li><a href="#verifyEC">verifyEC</a></li></ul></li>
<li><a href="#dfsadmin_with_ViewFsOverloadScheme">dfsadmin with ViewFsOverloadScheme</a></li></ul>
<section>
<h2><a name="Overview"></a>Overview</h2>
<p>All HDFS commands are invoked by the <code>bin/hdfs</code> script. Running the hdfs script without any arguments prints the description for all commands.</p>
<p>Usage: <code>hdfs [SHELL_OPTIONS] COMMAND [GENERIC_OPTIONS] [COMMAND_OPTIONS]</code></p>
<p>Hadoop has an option parsing framework that handles generic options as well as running application classes.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> OPTIONS </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> SHELL_OPTIONS </td>
<td align="left"> The common set of shell options. These are documented on the <a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html#Shell_Options">Commands Manual</a> page. </td></tr>
<tr class="a">
<td align="left"> GENERIC_OPTIONS </td>
<td align="left"> The common set of options supported by multiple commands. See the Hadoop <a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html#Generic_Options">Commands Manual</a> for more information. </td></tr>
<tr class="b">
<td align="left"> COMMAND_OPTIONS </td>
<td align="left"> Various commands with their options are described in the following sections. The commands have been grouped into <a href="#User_Commands">User Commands</a> and <a href="#Administration_Commands">Administration Commands</a>. </td></tr>
</tbody>
</table></section><section>
<h2><a name="User_Commands"></a>User Commands</h2>
|
|
<p>Commands useful for users of a hadoop cluster.</p><section>
|
|
<h3><a name="classpath"></a><code>classpath</code></h3>
|
|
<p>Usage: <code>hdfs classpath [--glob |--jar <path> |-h |--help]</code></p>
|
|
<table border="0" class="bodyTable">
|
|
<thead>
|
|
|
|
<tr class="a">
|
|
<th align="left"> COMMAND_OPTION </th>
|
|
<th align="left"> Description </th></tr>
|
|
</thead><tbody>
|
|
|
|
<tr class="b">
|
|
<td align="left"> <code>--glob</code> </td>
|
|
<td align="left"> expand wildcards </td></tr>
|
|
<tr class="a">
|
|
<td align="left"> <code>--jar</code> <i>path</i> </td>
|
|
<td align="left"> write classpath as manifest in jar named <i>path</i> </td></tr>
|
|
<tr class="b">
|
|
<td align="left"> <code>-h</code>, <code>--help</code> </td>
|
|
<td align="left"> print help </td></tr>
|
|
</tbody>
|
|
</table>
|
|
<p>Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries. Additional options print the classpath after wildcard expansion or write the classpath into the manifest of a jar file. The latter is useful in environments where wildcards cannot be used and the expanded classpath exceeds the maximum supported command line length.</p></section><section>
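<p>For example, the following invocations show the two common uses; the jar path is just an illustration:</p>
<div class="source">
<div class="source">
<pre># Print the classpath with wildcards expanded
hdfs classpath --glob

# Write the expanded classpath into the manifest of a jar (example path)
hdfs classpath --jar /tmp/hdfs-classpath.jar
</pre></div></div>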
</section><section>
<h3><a name="dfs"></a><code>dfs</code></h3>
<p>Usage: <code>hdfs dfs [COMMAND [COMMAND_OPTIONS]]</code></p>
<p>Run a filesystem command on the file system supported in Hadoop. The various COMMAND_OPTIONS can be found at the <a href="../hadoop-common/FileSystemShell.html">File System Shell Guide</a>.</p>
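<p>For example, any File System Shell command can be run through <code>hdfs dfs</code>; the paths below are illustrative:</p>
<div class="source">
<div class="source">
<pre># List the filesystem root
hdfs dfs -ls /

# Copy a local file into a (hypothetical) user directory
hdfs dfs -put localfile.txt /user/alice/
</pre></div></div>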
</section><section>
<h3><a name="envvars"></a><code>envvars</code></h3>
<p>Usage: <code>hdfs envvars</code></p>
<p>Display computed Hadoop environment variables.</p></section><section>
<h3><a name="fetchdt"></a><code>fetchdt</code></h3>
<p>Usage: <code>hdfs fetchdt <opts> <token_file_path></code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>--webservice</code> <i>NN_Url</i> </td>
<td align="left"> URL to contact the NN on (starts with http or https) </td></tr>
<tr class="a">
<td align="left"> <code>--renewer</code> <i>name</i> </td>
<td align="left"> Name of the delegation token renewer </td></tr>
<tr class="b">
<td align="left"> <code>--cancel</code> </td>
<td align="left"> Cancel the delegation token </td></tr>
<tr class="a">
<td align="left"> <code>--renew</code> </td>
<td align="left"> Renew the delegation token. The delegation token must have been fetched using the <code>--renewer</code> <i>name</i> option. </td></tr>
<tr class="b">
<td align="left"> <code>--print</code> </td>
<td align="left"> Print the delegation token </td></tr>
<tr class="a">
<td align="left"> <i>token_file_path</i> </td>
<td align="left"> File path to store the token into. </td></tr>
</tbody>
</table>
<p>Gets a delegation token from a NameNode. See <a href="./HdfsUserGuide.html#fetchdt">fetchdt</a> for more info.</p>
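<p>A typical sequence, with an example NameNode URL and token file path, might look like:</p>
<div class="source">
<div class="source">
<pre># Fetch a delegation token, naming a renewer, and store it in a file
hdfs fetchdt --webservice http://nn.example.com:9870 --renewer alice /tmp/alice.token

# Print the stored token
hdfs fetchdt --print /tmp/alice.token
</pre></div></div>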
</section><section>
<h3><a name="fsck"></a><code>fsck</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>   hdfs fsck <path>
          [-list-corruptfileblocks |
          [-move | -delete | -openforwrite]
          [-files [-blocks [-locations | -racks | -replicaDetails | -upgradedomains]]]
          [-includeSnapshots] [-showprogress]
          [-storagepolicies] [-maintenance]
          [-blockId <blk_Id>] [-replicate]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <i>path</i> </td>
<td align="left"> Start checking from this path. </td></tr>
<tr class="a">
<td align="left"> <code>-delete</code> </td>
<td align="left"> Delete corrupted files. </td></tr>
<tr class="b">
<td align="left"> <code>-files</code> </td>
<td align="left"> Print out files being checked. </td></tr>
<tr class="a">
<td align="left"> <code>-files</code> <code>-blocks</code> </td>
<td align="left"> Print out the block report. </td></tr>
<tr class="b">
<td align="left"> <code>-files</code> <code>-blocks</code> <code>-locations</code> </td>
<td align="left"> Print out locations for every block. </td></tr>
<tr class="a">
<td align="left"> <code>-files</code> <code>-blocks</code> <code>-racks</code> </td>
<td align="left"> Print out network topology for data-node locations. </td></tr>
<tr class="b">
<td align="left"> <code>-files</code> <code>-blocks</code> <code>-replicaDetails</code> </td>
<td align="left"> Print out details of each replica. </td></tr>
<tr class="a">
<td align="left"> <code>-files</code> <code>-blocks</code> <code>-upgradedomains</code> </td>
<td align="left"> Print out upgrade domains for every block. </td></tr>
<tr class="b">
<td align="left"> <code>-includeSnapshots</code> </td>
<td align="left"> Include snapshot data if the given path indicates a snapshottable directory or there are snapshottable directories under it. </td></tr>
<tr class="a">
<td align="left"> <code>-list-corruptfileblocks</code> </td>
<td align="left"> Print out a list of missing blocks and the files they belong to. </td></tr>
<tr class="b">
<td align="left"> <code>-move</code> </td>
<td align="left"> Move corrupted files to /lost+found. </td></tr>
<tr class="a">
<td align="left"> <code>-openforwrite</code> </td>
<td align="left"> Print out files opened for write. </td></tr>
<tr class="b">
<td align="left"> <code>-showprogress</code> </td>
<td align="left"> Deprecated. A dot is printed every 100 files processed, with or without this switch. </td></tr>
<tr class="a">
<td align="left"> <code>-storagepolicies</code> </td>
<td align="left"> Print out storage policy summary for the blocks. </td></tr>
<tr class="b">
<td align="left"> <code>-maintenance</code> </td>
<td align="left"> Print out maintenance state node details. </td></tr>
<tr class="a">
<td align="left"> <code>-blockId</code> </td>
<td align="left"> Print out information about the block. </td></tr>
<tr class="b">
<td align="left"> <code>-replicate</code> </td>
<td align="left"> Initiate replication work to make mis-replicated blocks satisfy the block placement policy. </td></tr>
</tbody>
</table>
<p>Runs the HDFS filesystem checking utility. See <a href="./HdfsUserGuide.html#fsck">fsck</a> for more info.</p>
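<p>For example, to check a directory tree and report where every block is stored (the path is illustrative):</p>
<div class="source">
<div class="source">
<pre>hdfs fsck /user/alice -files -blocks -locations
</pre></div></div>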
</section><section>
<h3><a name="getconf"></a><code>getconf</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>   hdfs getconf -namenodes
   hdfs getconf -secondaryNameNodes
   hdfs getconf -backupNodes
   hdfs getconf -journalNodes
   hdfs getconf -includeFile
   hdfs getconf -excludeFile
   hdfs getconf -nnRpcAddresses
   hdfs getconf -confKey [key]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-namenodes</code> </td>
<td align="left"> gets list of namenodes in the cluster. </td></tr>
<tr class="a">
<td align="left"> <code>-secondaryNameNodes</code> </td>
<td align="left"> gets list of secondary namenodes in the cluster. </td></tr>
<tr class="b">
<td align="left"> <code>-backupNodes</code> </td>
<td align="left"> gets list of backup nodes in the cluster. </td></tr>
<tr class="a">
<td align="left"> <code>-journalNodes</code> </td>
<td align="left"> gets list of journal nodes in the cluster. </td></tr>
<tr class="b">
<td align="left"> <code>-includeFile</code> </td>
<td align="left"> gets the include file path that defines the datanodes that can join the cluster. </td></tr>
<tr class="a">
<td align="left"> <code>-excludeFile</code> </td>
<td align="left"> gets the exclude file path that defines the datanodes that need to be decommissioned. </td></tr>
<tr class="b">
<td align="left"> <code>-nnRpcAddresses</code> </td>
<td align="left"> gets the namenode rpc addresses. </td></tr>
<tr class="a">
<td align="left"> <code>-confKey</code> [key] </td>
<td align="left"> gets a specific key from the configuration. </td></tr>
</tbody>
</table>
<p>Gets configuration information from the configuration directory, after post-processing.</p>
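<p>For example:</p>
<div class="source">
<div class="source">
<pre># List the namenodes in the cluster
hdfs getconf -namenodes

# Look up a single configuration key
hdfs getconf -confKey dfs.replication
</pre></div></div>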
</section><section>
<h3><a name="groups"></a><code>groups</code></h3>
<p>Usage: <code>hdfs groups [username ...]</code></p>
<p>Returns the group information given one or more usernames.</p></section><section>
<h3><a name="httpfs"></a><code>httpfs</code></h3>
<p>Usage: <code>hdfs httpfs</code></p>
<p>Run the HttpFS server, the HDFS HTTP Gateway.</p></section><section>
<h3><a name="lsSnapshottableDir"></a><code>lsSnapshottableDir</code></h3>
<p>Usage: <code>hdfs lsSnapshottableDir [-help]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-help</code> </td>
<td align="left"> print help </td></tr>
</tbody>
</table>
<p>Get the list of snapshottable directories. When this is run as a super user, it returns all snapshottable directories. Otherwise it returns those directories that are owned by the current user.</p></section><section>
<h3><a name="lsSnapshot"></a><code>lsSnapshot</code></h3>
<p>Usage: <code>hdfs lsSnapshot [-help]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-help</code> </td>
<td align="left"> print help </td></tr>
</tbody>
</table>
<p>Get the list of snapshots for a snapshottable directory.</p>
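<p>For example, assuming <code>/data</code> is a snapshottable directory (an illustrative path):</p>
<div class="source">
<div class="source">
<pre># Show the snapshottable directories visible to the current user
hdfs lsSnapshottableDir

# List the snapshots of one snapshottable directory
hdfs lsSnapshot /data
</pre></div></div>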
</section><section>
<h3><a name="jmxget"></a><code>jmxget</code></h3>
<p>Usage: <code>hdfs jmxget [-localVM ConnectorURL | -port port | -server mbeanserver | -service service]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-help</code> </td>
<td align="left"> print help </td></tr>
<tr class="a">
<td align="left"> <code>-localVM</code> ConnectorURL </td>
<td align="left"> connect to the VM on the same machine </td></tr>
<tr class="b">
<td align="left"> <code>-port</code> <i>mbean server port</i> </td>
<td align="left"> specify mbean server port; if missing it will try to connect to the MBean Server in the same VM </td></tr>
<tr class="a">
<td align="left"> <code>-server</code> </td>
<td align="left"> specify mbean server (localhost by default) </td></tr>
<tr class="b">
<td align="left"> <code>-service</code> NameNode|DataNode </td>
<td align="left"> specify jmx service. NameNode by default. </td></tr>
</tbody>
</table>
<p>Dump JMX information from a service.</p>
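<p>For example, to dump the NameNode MBeans over a JMX port (the port is an example and assumes remote JMX is enabled on the NameNode JVM):</p>
<div class="source">
<div class="source">
<pre>hdfs jmxget -server localhost -port 8004 -service NameNode
</pre></div></div>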
</section><section>
<h3><a name="oev"></a><code>oev</code></h3>
<p>Usage: <code>hdfs oev [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE</code></p><section>
<h4><a name="Required_command_line_arguments:"></a>Required command line arguments:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-i</code>,<code>--inputFile</code> <i>arg</i> </td>
<td align="left"> edits file to process; an xml (case insensitive) extension means XML format, any other filename means binary format </td></tr>
<tr class="a">
<td align="left"> <code>-o</code>,<code>--outputFile</code> <i>arg</i> </td>
<td align="left"> Name of the output file. If the specified file exists, it will be overwritten; the format of the file is determined by the -p option </td></tr>
</tbody>
</table></section><section>
<h4><a name="Optional_command_line_arguments:"></a>Optional command line arguments:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-f</code>,<code>--fix-txids</code> </td>
<td align="left"> Renumber the transaction IDs in the input, so that there are no gaps or invalid transaction IDs. </td></tr>
<tr class="a">
<td align="left"> <code>-h</code>,<code>--help</code> </td>
<td align="left"> Display usage information and exit </td></tr>
<tr class="b">
<td align="left"> <code>-r</code>,<code>--recover</code> </td>
<td align="left"> When reading binary edit logs, use recovery mode. This will give you the chance to skip corrupt parts of the edit log. </td></tr>
<tr class="a">
<td align="left"> <code>-p</code>,<code>--processor</code> <i>arg</i> </td>
<td align="left"> Select which type of processor to apply against the edits file. Currently supported processors are: binary (native binary format that Hadoop uses), xml (default, XML format), stats (prints statistics about the edits file) </td></tr>
<tr class="b">
<td align="left"> <code>-v</code>,<code>--verbose</code> </td>
<td align="left"> More verbose output; prints the input and output filenames, and for processors that write to a file, also outputs to screen. On large image files this will dramatically increase processing time (default is false). </td></tr>
</tbody>
</table>
<p>Hadoop offline edits viewer. See the <a href="./HdfsEditsViewer.html">Offline Edits Viewer Guide</a> for more info.</p>
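<p>For example, to round-trip an edits segment through XML (the file names are illustrative):</p>
<div class="source">
<div class="source">
<pre># Convert a binary edits file to XML (xml is the default processor)
hdfs oev -p xml -i edits_0000000000000000001-0000000000000000100 -o edits.xml

# Convert the XML back to the binary format
hdfs oev -p binary -i edits.xml -o edits
</pre></div></div>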
</section></section><section>
<h3><a name="oiv"></a><code>oiv</code></h3>
<p>Usage: <code>hdfs oiv [OPTIONS] -i INPUT_FILE</code></p><section>
<h4><a name="Required_command_line_arguments:"></a>Required command line arguments:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-i</code>|<code>--inputFile</code> <i>input file</i> </td>
<td align="left"> Specify the input fsimage file (or XML file, if the ReverseXML processor is used) to process. </td></tr>
</tbody>
</table></section><section>
<h4><a name="Optional_command_line_arguments:"></a>Optional command line arguments:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-o</code>,<code>--outputFile</code> <i>output file</i> </td>
<td align="left"> Specify the output filename, if the specified output processor generates one. If the specified file already exists, it is silently overwritten. (Output goes to stdout by default.) If the input file is an XML file, it also creates an <outputFile>.md5. </td></tr>
<tr class="a">
<td align="left"> <code>-p</code>,<code>--processor</code> <i>processor</i> </td>
<td align="left"> Specify the image processor to apply against the image file. Currently valid options are <code>Web</code> (default), <code>XML</code>, <code>Delimited</code>, <code>FileDistribution</code> and <code>ReverseXML</code>. </td></tr>
<tr class="b">
<td align="left"> <code>-addr</code> <i>address</i> </td>
<td align="left"> Specify the address (host:port) to listen on (localhost:5978 by default). This option is used with the Web processor. </td></tr>
<tr class="a">
<td align="left"> <code>-maxSize</code> <i>size</i> </td>
<td align="left"> Specify the range [0, maxSize] of file sizes to be analyzed in bytes (128GB by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="b">
<td align="left"> <code>-step</code> <i>size</i> </td>
<td align="left"> Specify the granularity of the distribution in bytes (2MB by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="a">
<td align="left"> <code>-format</code> </td>
<td align="left"> Format the output result in a human-readable fashion rather than a number of bytes (false by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="b">
<td align="left"> <code>-delimiter</code> <i>arg</i> </td>
<td align="left"> Delimiting string to use with the Delimited processor. </td></tr>
<tr class="a">
<td align="left"> <code>-sp</code> </td>
<td align="left"> Whether to print the storage policy (default is false). This option is used with the Delimited processor only. </td></tr>
<tr class="b">
<td align="left"> <code>-ec</code> </td>
<td align="left"> Whether to print the erasure coding policy (default is false). This option is used with the Delimited processor only. </td></tr>
<tr class="a">
<td align="left"> <code>-m</code>,<code>--multiThread</code> <i>arg</i> </td>
<td align="left"> Specify the number of threads used to process sub-sections. This option is used with the Delimited processor only. </td></tr>
<tr class="b">
<td align="left"> <code>-t</code>,<code>--temp</code> <i>temporary dir</i> </td>
<td align="left"> Use a temporary dir to cache intermediate results when generating Delimited outputs. If not set, the Delimited processor constructs the namespace in memory before outputting text. </td></tr>
<tr class="a">
<td align="left"> <code>-h</code>,<code>--help</code> </td>
<td align="left"> Display the tool usage and help information and exit. </td></tr>
</tbody>
</table>
<p>Hadoop Offline Image Viewer for image files in Hadoop 2.4 or up. See the <a href="./HdfsImageViewer.html">Offline Image Viewer Guide</a> for more info.</p>
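<p>For example (the fsimage file name is illustrative):</p>
<div class="source">
<div class="source">
<pre># Browse the image with the default Web processor, listening on localhost:5978
hdfs oiv -i fsimage_0000000000000000050

# Dump the image to XML instead
hdfs oiv -p XML -i fsimage_0000000000000000050 -o fsimage.xml
</pre></div></div>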
</section></section><section>
<h3><a name="oiv_legacy"></a><code>oiv_legacy</code></h3>
<p>Usage: <code>hdfs oiv_legacy [OPTIONS] -i INPUT_FILE -o OUTPUT_FILE</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-i</code>,<code>--inputFile</code> <i>input file</i> </td>
<td align="left"> Specify the input fsimage file to process. </td></tr>
<tr class="a">
<td align="left"> <code>-o</code>,<code>--outputFile</code> <i>output file</i> </td>
<td align="left"> Specify the output filename, if the specified output processor generates one. If the specified file already exists, it is silently overwritten. </td></tr>
</tbody>
</table><section>
<h4><a name="Optional_command_line_arguments:"></a>Optional command line arguments:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-p</code>|<code>--processor</code> <i>processor</i> </td>
<td align="left"> Specify the image processor to apply against the image file. Valid options are Ls (default), XML, Delimited, Indented, FileDistribution and NameDistribution. </td></tr>
<tr class="a">
<td align="left"> <code>-maxSize</code> <i>size</i> </td>
<td align="left"> Specify the range [0, maxSize] of file sizes to be analyzed in bytes (128GB by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="b">
<td align="left"> <code>-step</code> <i>size</i> </td>
<td align="left"> Specify the granularity of the distribution in bytes (2MB by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="a">
<td align="left"> <code>-format</code> </td>
<td align="left"> Format the output result in a human-readable fashion rather than a number of bytes (false by default). This option is used with the FileDistribution processor. </td></tr>
<tr class="b">
<td align="left"> <code>-skipBlocks</code> </td>
<td align="left"> Do not enumerate individual blocks within files. This may save processing time and output file space on namespaces with very large files. The Ls processor reads the blocks to correctly determine file sizes and ignores this option. </td></tr>
<tr class="a">
<td align="left"> <code>-printToScreen</code> </td>
<td align="left"> Pipe output of the processor to the console as well as the specified file. On extremely large namespaces, this may increase processing time by an order of magnitude. </td></tr>
<tr class="b">
<td align="left"> <code>-delimiter</code> <i>arg</i> </td>
<td align="left"> When used in conjunction with the Delimited processor, replaces the default tab delimiter with the string specified by <i>arg</i>. </td></tr>
<tr class="a">
<td align="left"> <code>-h</code>|<code>--help</code> </td>
<td align="left"> Display the tool usage and help information and exit. </td></tr>
</tbody>
</table>
<p>Hadoop offline image viewer for older versions of Hadoop. See <a href="./HdfsImageViewer.html#oiv_legacy_Command">oiv_legacy Command</a> for more info.</p></section></section><section>
<h3><a name="snapshotDiff"></a><code>snapshotDiff</code></h3>
<p>Usage: <code>hdfs snapshotDiff <path> <fromSnapshot> <toSnapshot></code></p>
<p>Determine the difference between HDFS snapshots. See the <a href="./HdfsSnapshots.html#Get_Snapshots_Difference_Report">HDFS Snapshot Documentation</a> for more information.</p>
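<p>For example, to report what changed between snapshots <code>s1</code> and <code>s2</code> of an illustrative directory:</p>
<div class="source">
<div class="source">
<pre>hdfs snapshotDiff /data s1 s2
</pre></div></div>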
</section><section>
<h3><a name="version"></a><code>version</code></h3>
<p>Usage: <code>hdfs version</code></p>
<p>Prints the version.</p></section></section><section>
<h2><a name="Administration_Commands"></a>Administration Commands</h2>
<p>Commands useful for administrators of a Hadoop cluster.</p><section>
<h3><a name="balancer"></a><code>balancer</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>    hdfs balancer
          [-policy <policy>]
          [-threshold <threshold>]
          [-exclude [-f <hosts-file> | <comma-separated list of hosts>]]
          [-include [-f <hosts-file> | <comma-separated list of hosts>]]
          [-source [-f <hosts-file> | <comma-separated list of hosts>]]
          [-blockpools <comma-separated list of blockpool ids>]
          [-idleiterations <idleiterations>]
          [-runDuringUpgrade]
          [-asService]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-policy</code> <policy> </td>
<td align="left"> <code>datanode</code> (default): Cluster is balanced if each datanode is balanced.<br /> <code>blockpool</code>: Cluster is balanced if each block pool in each datanode is balanced. </td></tr>
<tr class="a">
<td align="left"> <code>-threshold</code> <threshold> </td>
<td align="left"> Percentage of disk capacity. This overwrites the default threshold. </td></tr>
<tr class="b">
<td align="left"> <code>-exclude -f</code> <hosts-file> | <comma-separated list of hosts> </td>
<td align="left"> Excludes the specified datanodes from being balanced by the balancer. </td></tr>
<tr class="a">
<td align="left"> <code>-include -f</code> <hosts-file> | <comma-separated list of hosts> </td>
<td align="left"> Includes only the specified datanodes to be balanced by the balancer. </td></tr>
<tr class="b">
<td align="left"> <code>-source -f</code> <hosts-file> | <comma-separated list of hosts> </td>
<td align="left"> Pick only the specified datanodes as source nodes. </td></tr>
<tr class="a">
<td align="left"> <code>-blockpools</code> <comma-separated list of blockpool ids> </td>
<td align="left"> The balancer will only run on blockpools included in this list. </td></tr>
<tr class="b">
<td align="left"> <code>-idleiterations</code> <iterations> </td>
<td align="left"> Maximum number of idle iterations before exit. This overwrites the default idleiterations (5). </td></tr>
<tr class="a">
<td align="left"> <code>-runDuringUpgrade</code> </td>
<td align="left"> Whether to run the balancer during an ongoing HDFS upgrade. This is usually not desired since it will not affect used space on over-utilized machines. </td></tr>
<tr class="b">
<td align="left"> <code>-asService</code> </td>
<td align="left"> Run the Balancer as a long-running service. </td></tr>
<tr class="a">
<td align="left"> <code>-hotBlockTimeInterval</code> </td>
<td align="left"> Prefer moving cold blocks, i.e. blocks associated with files accessed or modified before the specified time interval. </td></tr>
<tr class="b">
<td align="left"> <code>-h</code>|<code>--help</code> </td>
<td align="left"> Display the tool usage and help information and exit. </td></tr>
</tbody>
</table>
<p>Runs a cluster balancing utility. An administrator can simply press Ctrl-C to stop the rebalancing process. See <a href="./HdfsUserGuide.html#Balancer">Balancer</a> for more details.</p>
<p>Note that the <code>blockpool</code> policy is stricter than the <code>datanode</code> policy.</p>
<p>Besides the above command options, a pinning feature was introduced in 2.7.0 to prevent certain replicas from being moved by the balancer/mover. This pinning feature is disabled by default, and can be enabled by the configuration property &#x201c;dfs.datanode.block-pinning.enabled&#x201d;. When enabled, this feature only affects blocks that are written to favored nodes specified in the create() call. It is useful for maintaining data locality for applications such as the HBase RegionServer.</p>
<p>If you want to run the Balancer as a long-running service, start it with the <code>-asService</code> parameter in daemon mode, e.g. <code>hdfs --daemon start balancer -asService</code>, or use the sbin/start-balancer.sh script with the parameter <code>-asService</code>.</p>
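<p>For example, to bring every datanode within 5 percentage points of the average cluster utilization while skipping two hosts (the host names are illustrative):</p>
<div class="source">
<div class="source">
<pre>hdfs balancer -threshold 5 -exclude dn1.example.com,dn2.example.com
</pre></div></div>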
</section><section>
<h3><a name="cacheadmin"></a><code>cacheadmin</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>hdfs cacheadmin [-addDirective -path <path> -pool <pool-name> [-force] [-replication <replication>] [-ttl <time-to-live>]]
hdfs cacheadmin [-modifyDirective -id <id> [-path <path>] [-force] [-replication <replication>] [-pool <pool-name>] [-ttl <time-to-live>]]
hdfs cacheadmin [-listDirectives [-stats] [-path <path>] [-pool <pool>] [-id <id>]]
hdfs cacheadmin [-removeDirective <id>]
hdfs cacheadmin [-removeDirectives -path <path>]
hdfs cacheadmin [-addPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
hdfs cacheadmin [-modifyPool <name> [-owner <owner>] [-group <group>] [-mode <mode>] [-limit <limit>] [-maxTtl <maxTtl>]]
hdfs cacheadmin [-removePool <name>]
hdfs cacheadmin [-listPools [-stats] [<name>]]
hdfs cacheadmin [-help <command-name>]
</pre></div></div>
<p>See the <a href="./CentralizedCacheManagement.html#cacheadmin_command-line_interface">HDFS Cache Administration Documentation</a> for more information.</p>
</section><section>
<h3><a name="crypto"></a><code>crypto</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>  hdfs crypto -createZone -keyName <keyName> -path <path>
  hdfs crypto -listZones
  hdfs crypto -provisionTrash -path <path>
  hdfs crypto -help <command-name>
</pre></div></div>
<p>See the <a href="./TransparentEncryption.html#crypto_command-line_interface">HDFS Transparent Encryption Documentation</a> for more information.</p>
</section><section>
<h3><a name="datanode"></a><code>datanode</code></h3>
<p>Usage: <code>hdfs datanode [-regular | -rollback | -rollingupgrade rollback]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-regular</code> </td>
<td align="left"> Normal datanode startup (default). </td></tr>
<tr class="a">
<td align="left"> <code>-rollback</code> </td>
<td align="left"> Roll back the datanode to the previous version. This should be used after stopping the datanode and distributing the old hadoop version. </td></tr>
<tr class="b">
<td align="left"> <code>-rollingupgrade</code> rollback </td>
<td align="left"> Roll back a rolling upgrade operation. </td></tr>
</tbody>
</table>
<p>Runs an HDFS datanode.</p></section><section>
<h3><a name="dfsadmin"></a><code>dfsadmin</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>    hdfs dfsadmin [-report [-live] [-dead] [-decommissioning] [-enteringmaintenance] [-inmaintenance] [-slownodes]]
    hdfs dfsadmin [-safemode enter | leave | get | wait | forceExit]
    hdfs dfsadmin [-saveNamespace [-beforeShutdown]]
    hdfs dfsadmin [-rollEdits]
    hdfs dfsadmin [-restoreFailedStorage true |false |check]
    hdfs dfsadmin [-refreshNodes]
    hdfs dfsadmin [-setQuota <quota> <dirname>...<dirname>]
    hdfs dfsadmin [-clrQuota <dirname>...<dirname>]
    hdfs dfsadmin [-setSpaceQuota <quota> [-storageType <storagetype>] <dirname>...<dirname>]
    hdfs dfsadmin [-clrSpaceQuota [-storageType <storagetype>] <dirname>...<dirname>]
    hdfs dfsadmin [-finalizeUpgrade]
    hdfs dfsadmin [-rollingUpgrade [<query> |<prepare> |<finalize>]]
    hdfs dfsadmin [-upgrade [query | finalize]]
    hdfs dfsadmin [-refreshServiceAcl]
    hdfs dfsadmin [-refreshUserToGroupsMappings]
    hdfs dfsadmin [-refreshSuperUserGroupsConfiguration]
    hdfs dfsadmin [-refreshCallQueue]
    hdfs dfsadmin [-refresh <host:ipc_port> <key> [arg1..argn]]
    hdfs dfsadmin [-reconfig <namenode|datanode> <host:ipc_port|livenodes> <start |status |properties>]
    hdfs dfsadmin [-printTopology]
    hdfs dfsadmin [-refreshNamenodes datanodehost:port]
    hdfs dfsadmin [-getVolumeReport datanodehost:port]
    hdfs dfsadmin [-deleteBlockPool datanode-host:port blockpoolId [force]]
    hdfs dfsadmin [-setBalancerBandwidth <bandwidth in bytes per second>]
    hdfs dfsadmin [-getBalancerBandwidth <datanode_host:ipc_port>]
    hdfs dfsadmin [-fetchImage <local directory>]
    hdfs dfsadmin [-allowSnapshot <snapshotDir>]
    hdfs dfsadmin [-disallowSnapshot <snapshotDir>]
    hdfs dfsadmin [-shutdownDatanode <datanode_host:ipc_port> [upgrade]]
    hdfs dfsadmin [-evictWriters <datanode_host:ipc_port>]
    hdfs dfsadmin [-getDatanodeInfo <datanode_host:ipc_port>]
    hdfs dfsadmin [-metasave filename]
    hdfs dfsadmin [-triggerBlockReport [-incremental] <datanode_host:ipc_port> [-namenode <namenode_host:ipc_port>]]
    hdfs dfsadmin [-listOpenFiles [-blockingDecommission] [-path <path>]]
    hdfs dfsadmin [-help [cmd]]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-report</code> <code>[-live]</code> <code>[-dead]</code> <code>[-decommissioning]</code> <code>[-enteringmaintenance]</code> <code>[-inmaintenance]</code> <code>[-slownodes]</code> </td>
<td align="left"> Reports basic filesystem information and statistics. The DFS usage can differ from “du” usage because it measures raw space used by replication, checksums, snapshots, etc. on all the DNs. Optional flags may be used to filter the list of displayed DataNodes. Filters are based either on the DN state (e.g. live, dead, decommissioning) or on the nature of the DN (e.g. slow nodes - nodes with higher latency than their peers). </td></tr>
<tr class="a">
<td align="left"> <code>-safemode</code> enter|leave|get|wait|forceExit </td>
<td align="left"> Safe mode maintenance command. Safe mode is a Namenode state in which it <br />1. does not accept changes to the name space (read-only) <br />2. does not replicate or delete blocks. <br />Safe mode is entered automatically at Namenode startup, and is left automatically when the configured minimum percentage of blocks satisfies the minimum replication condition. If the Namenode detects an anomaly, it will linger in safe mode until that issue is resolved. If the anomaly is the consequence of a deliberate action, the administrator can use -safemode forceExit to exit safe mode. The cases where forceExit may be required are:<br /> 1. Namenode metadata is not consistent. If the Namenode detects that metadata has been modified out of band and could cause data loss, it will enter the forceExit state. At that point the user can either restart the Namenode with correct metadata files or forceExit (if data loss is acceptable).<br />2. A rollback causes metadata to be replaced, which can rarely trigger the safe mode forceExit state in the Namenode. In that case you may proceed by issuing -safemode forceExit.<br /> Safe mode can also be entered manually, but then it can only be turned off manually as well. </td></tr>
<tr class="b">
<td align="left"> <code>-saveNamespace</code> <code>[-beforeShutdown]</code> </td>
<td align="left"> Save current namespace into storage directories and reset edits log. Requires safe mode. If the “beforeShutdown” option is given, the NameNode does a checkpoint if and only if no checkpoint has been done during a time window (a configurable number of checkpoint periods). This is usually used before shutting down the NameNode to prevent potential fsimage/editlog corruption. </td></tr>
<tr class="a">
<td align="left"> <code>-rollEdits</code> </td>
<td align="left"> Rolls the edit log on the active NameNode. </td></tr>
<tr class="b">
<td align="left"> <code>-restoreFailedStorage</code> true|false|check </td>
<td align="left"> This option turns on/off the automatic attempt to restore failed storage replicas. If a failed storage becomes available again, the system will attempt to restore edits and/or the fsimage during checkpoint. The ‘check’ option returns the current setting. </td></tr>
<tr class="a">
<td align="left"> <code>-refreshNodes</code> </td>
<td align="left"> Re-read the hosts and exclude files to update the set of Datanodes that are allowed to connect to the Namenode and those that should be decommissioned or recommissioned. </td></tr>
<tr class="b">
<td align="left"> <code>-setQuota</code> <quota> <dirname>…<dirname> </td>
<td align="left"> See <a href="../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands">HDFS Quotas Guide</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-clrQuota</code> <dirname>…<dirname> </td>
<td align="left"> See <a href="../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands">HDFS Quotas Guide</a> for the detail. </td></tr>
<tr class="b">
<td align="left"> <code>-setSpaceQuota</code> <quota> <code>[-storageType <storagetype>]</code> <dirname>…<dirname> </td>
<td align="left"> See <a href="../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands">HDFS Quotas Guide</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-clrSpaceQuota</code> <code>[-storageType <storagetype>]</code> <dirname>…<dirname> </td>
<td align="left"> See <a href="../hadoop-hdfs/HdfsQuotaAdminGuide.html#Administrative_Commands">HDFS Quotas Guide</a> for the detail. </td></tr>
<tr class="b">
<td align="left"> <code>-finalizeUpgrade</code> </td>
<td align="left"> Finalize upgrade of HDFS. Datanodes delete their previous version working directories, followed by the Namenode doing the same. This completes the upgrade process. </td></tr>
<tr class="a">
<td align="left"> <code>-rollingUpgrade</code> [<query>|<prepare>|<finalize>] </td>
<td align="left"> See <a href="../hadoop-hdfs/HdfsRollingUpgrade.html#dfsadmin_-rollingUpgrade">Rolling Upgrade document</a> for the detail. </td></tr>
<tr class="b">
<td align="left"> <code>-upgrade</code> query|finalize </td>
<td align="left"> Query the current upgrade status.<br />Finalize upgrade of HDFS (equivalent to -finalizeUpgrade). </td></tr>
<tr class="a">
<td align="left"> <code>-refreshServiceAcl</code> </td>
<td align="left"> Reload the service-level authorization policy file. </td></tr>
<tr class="b">
<td align="left"> <code>-refreshUserToGroupsMappings</code> </td>
<td align="left"> Refresh user-to-groups mappings. </td></tr>
<tr class="a">
<td align="left"> <code>-refreshSuperUserGroupsConfiguration</code> </td>
<td align="left"> Refresh superuser proxy groups mappings. </td></tr>
<tr class="b">
<td align="left"> <code>-refreshCallQueue</code> </td>
<td align="left"> Reload the call queue from config. </td></tr>
<tr class="a">
<td align="left"> <code>-refresh</code> <host:ipc_port> <key> [arg1..argn] </td>
<td align="left"> Triggers a runtime-refresh of the resource specified by <key> on <host:ipc_port>. All remaining args are sent to the host. </td></tr>
<tr class="b">
<td align="left"> <code>-reconfig</code> <datanode |namenode> <host:ipc_port|livenodes> <start|status|properties> </td>
<td align="left"> Starts reconfiguration, gets the status of an ongoing reconfiguration, or gets a list of reconfigurable properties. The second parameter specifies the node type. The third parameter specifies the host address. For start or status, datanode supports livenodes as the third parameter, which will start or retrieve reconfiguration on all live datanodes. </td></tr>
<tr class="a">
<td align="left"> <code>-printTopology</code> </td>
<td align="left"> Print a tree of the racks and their nodes as reported by the Namenode. </td></tr>
<tr class="b">
<td align="left"> <code>-refreshNamenodes</code> datanodehost:port </td>
<td align="left"> For the given datanode, reloads the configuration files, stops serving the removed block-pools and starts serving new block-pools. </td></tr>
<tr class="a">
<td align="left"> <code>-getVolumeReport</code> datanodehost:port </td>
<td align="left"> For the given datanode, get the volume report. </td></tr>
<tr class="b">
<td align="left"> <code>-deleteBlockPool</code> datanode-host:port blockpoolId [force] </td>
<td align="left"> If force is passed, the block pool directory for the given blockpool id on the given datanode is deleted along with its contents; otherwise the directory is deleted only if it is empty. The command will fail if the datanode is still serving the block pool. Refer to refreshNamenodes to shut down a block pool service on a datanode. </td></tr>
<tr class="a">
<td align="left"> <code>-setBalancerBandwidth</code> <bandwidth in bytes per second> </td>
<td align="left"> Changes the network bandwidth used by each datanode during HDFS block balancing. <bandwidth> is the maximum number of bytes per second that will be used by each datanode. This value overrides the dfs.datanode.balance.bandwidthPerSec parameter. NOTE: The new value is not persistent on the DataNode. </td></tr>
<tr class="b">
<td align="left"> <code>-getBalancerBandwidth</code> <datanode_host:ipc_port> </td>
<td align="left"> Get the network bandwidth (in bytes per second) for the given datanode. This is the maximum network bandwidth used by the datanode during HDFS block balancing. </td></tr>
<tr class="a">
<td align="left"> <code>-fetchImage</code> <local directory> </td>
<td align="left"> Downloads the most recent fsimage from the NameNode and saves it in the specified local directory. </td></tr>
<tr class="b">
<td align="left"> <code>-allowSnapshot</code> <snapshotDir> </td>
<td align="left"> Allow snapshots of a directory to be created. If the operation completes successfully, the directory becomes snapshottable. See the <a href="./HdfsSnapshots.html">HDFS Snapshot Documentation</a> for more information. </td></tr>
<tr class="a">
<td align="left"> <code>-disallowSnapshot</code> <snapshotDir> </td>
<td align="left"> Disallow snapshots of a directory from being created. All snapshots of the directory must be deleted before disallowing snapshots. See the <a href="./HdfsSnapshots.html">HDFS Snapshot Documentation</a> for more information. </td></tr>
<tr class="b">
<td align="left"> <code>-shutdownDatanode</code> <datanode_host:ipc_port> [upgrade] </td>
<td align="left"> Submit a shutdown request for the given datanode. See <a href="./HdfsRollingUpgrade.html#dfsadmin_-shutdownDatanode">Rolling Upgrade document</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-evictWriters</code> <datanode_host:ipc_port> </td>
<td align="left"> Make the datanode evict all clients that are writing a block. This is useful if decommissioning is hung due to slow writers. </td></tr>
<tr class="b">
<td align="left"> <code>-getDatanodeInfo</code> <datanode_host:ipc_port> </td>
<td align="left"> Get the information about the given datanode. See <a href="./HdfsRollingUpgrade.html#dfsadmin_-getDatanodeInfo">Rolling Upgrade document</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-metasave</code> filename </td>
<td align="left"> Save the Namenode’s primary data structures to <i>filename</i> in the directory specified by the hadoop.log.dir property. <i>filename</i> is overwritten if it exists. <i>filename</i> will contain one line for each of the following:<br />1. Datanodes heart beating with the Namenode<br />2. Blocks waiting to be replicated<br />3. Blocks currently being replicated<br />4. Blocks waiting to be deleted </td></tr>
<tr class="b">
<td align="left"> <code>-triggerBlockReport</code> <code>[-incremental]</code> <datanode_host:ipc_port> <code>[-namenode <namenode_host:ipc_port>]</code> </td>
<td align="left"> Trigger a block report for the given datanode. If ‘-incremental’ is specified, it will be an incremental block report; otherwise, it will be a full block report. If ‘-namenode <namenode_host:ipc_port>’ is given, the block report is sent only to the specified namenode. </td></tr>
<tr class="a">
<td align="left"> <code>-listOpenFiles</code> <code>[-blockingDecommission]</code> <code>[-path <path>]</code> </td>
<td align="left"> List all open files currently managed by the NameNode, along with the client name and client machine accessing them. The open files list is filtered by the given type and path. Add the -blockingDecommission option if you only want to list open files that are blocking DataNode decommissioning. </td></tr>
<tr class="b">
<td align="left"> <code>-help</code> [cmd] </td>
<td align="left"> Displays help for the given command or all commands if none is specified. </td></tr>
</tbody>
</table>
<p>Runs an HDFS dfsadmin client.</p>
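<p>As an illustrative sketch of common dfsadmin usage (the DataNode host name <code>dn1.example.com</code>, its IPC port, and the bandwidth value below are hypothetical, not part of the command reference):</p>
<div class="source">
<div class="source">
<pre> # Report only live DataNodes
 hdfs dfsadmin -report -live

 # Check whether the NameNode is in safe mode, then leave it
 hdfs dfsadmin -safemode get
 hdfs dfsadmin -safemode leave

 # Cap balancer traffic at 10485760 bytes/s (10 MB/s) per DataNode; not persisted across DataNode restarts
 hdfs dfsadmin -setBalancerBandwidth 10485760
 hdfs dfsadmin -getBalancerBandwidth dn1.example.com:9867
</pre></div></div>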
</section><section>
<h3><a name="dfsrouter"></a><code>dfsrouter</code></h3>
<p>Usage: <code>hdfs dfsrouter</code></p>
<p>Runs the DFS router. See <a href="../hadoop-hdfs-rbf/HDFSRouterFederation.html#Router">Router</a> for more info.</p></section><section>
<h3><a name="dfsrouteradmin"></a><code>dfsrouteradmin</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs dfsrouteradmin
 [-add <source> <nameservice1, nameservice2, ...> <destination> [-readonly] [-faulttolerant] [-order HASH|LOCAL|RANDOM|HASH_ALL] -owner <owner> -group <group> -mode <mode>]
 [-update <source> [<nameservice1, nameservice2, ...> <destination>] [-readonly true|false] [-faulttolerant true|false] [-order HASH|LOCAL|RANDOM|HASH_ALL] -owner <owner> -group <group> -mode <mode>]
 [-rm <source>]
 [-ls [-d] <path>]
 [-getDestination <path>]
 [-setQuota <path> -nsQuota <nsQuota> -ssQuota <quota in bytes or quota size string>]
 [-setStorageTypeQuota <path> -storageType <storage type> <quota in bytes or quota size string>]
 [-clrQuota <path>]
 [-clrStorageTypeQuota <path>]
 [-safemode enter | leave | get]
 [-nameservice disable | enable <nameservice>]
 [-getDisabledNameservices]
 [-refresh]
 [-refreshRouterArgs <host:ipc_port> <key> [arg1..argn]]
 [-refreshSuperUserGroupsConfiguration]
 [-refreshCallQueue]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-add</code> <i>source</i> <i>nameservices</i> <i>destination</i> </td>
<td align="left"> Add a mount table entry, or update it if it already exists. </td></tr>
<tr class="a">
<td align="left"> <code>-update</code> <i>source</i> <i>nameservices</i> <i>destination</i> </td>
<td align="left"> Update the attributes of a mount table entry. </td></tr>
<tr class="b">
<td align="left"> <code>-rm</code> <i>source</i> </td>
<td align="left"> Remove the mount point at the specified path. </td></tr>
<tr class="a">
<td align="left"> <code>-ls</code> <code>[-d]</code> <i>path</i> </td>
<td align="left"> List mount points under the specified path. Specify the -d parameter to get a detailed listing. </td></tr>
<tr class="b">
<td align="left"> <code>-getDestination</code> <i>path</i> </td>
<td align="left"> Get the subcluster where a file is or should be created. </td></tr>
<tr class="a">
<td align="left"> <code>-setQuota</code> <i>path</i> <code>-nsQuota</code> <i>nsQuota</i> <code>-ssQuota</code> <i>ssQuota</i> </td>
<td align="left"> Set quota for the specified path. See <a href="./HdfsQuotaAdminGuide.html">HDFS Quotas Guide</a> for the quota detail. </td></tr>
<tr class="b">
<td align="left"> <code>-setStorageTypeQuota</code> <i>path</i> <code>-storageType</code> <i>storageType</i> <i>stQuota</i> </td>
<td align="left"> Set storage type quota for the specified path. See <a href="./HdfsQuotaAdminGuide.html">HDFS Quotas Guide</a> for the quota detail. </td></tr>
<tr class="a">
<td align="left"> <code>-clrQuota</code> <i>path</i> </td>
<td align="left"> Clear the quota of the given mount point. See <a href="./HdfsQuotaAdminGuide.html">HDFS Quotas Guide</a> for the quota detail. </td></tr>
<tr class="b">
<td align="left"> <code>-clrStorageTypeQuota</code> <i>path</i> </td>
<td align="left"> Clear the storage type quota of the given mount point. See <a href="./HdfsQuotaAdminGuide.html">HDFS Quotas Guide</a> for the quota detail. </td></tr>
<tr class="a">
<td align="left"> <code>-safemode</code> <code>enter</code> <code>leave</code> <code>get</code> </td>
<td align="left"> Manually set the Router to enter or leave safe mode. The <i>get</i> option verifies whether the Router is in safe mode. </td></tr>
<tr class="b">
<td align="left"> <code>-nameservice</code> <code>disable</code> <code>enable</code> <i>nameservice</i> </td>
<td align="left"> Disable/enable a name service from the federation. If disabled, requests will not go to that name service. </td></tr>
<tr class="a">
<td align="left"> <code>-getDisabledNameservices</code> </td>
<td align="left"> Get the name services that are disabled in the federation. </td></tr>
<tr class="b">
<td align="left"> <code>-refresh</code> </td>
<td align="left"> Update the mount table cache of the connected router. </td></tr>
<tr class="a">
<td align="left"> <code>-refreshRouterArgs</code> <host:ipc_port> <key> [arg1..argn] </td>
<td align="left"> Trigger a runtime refresh of the resource specified by <key> on <host:ipc_port>. For example, to enable white list checking, we just need to send a refresh command rather than restarting the router server. </td></tr>
<tr class="b">
<td align="left"> <code>-refreshSuperUserGroupsConfiguration</code> </td>
<td align="left"> Refresh superuser proxy groups mappings on the Router. </td></tr>
<tr class="a">
<td align="left"> <code>-refreshCallQueue</code> </td>
<td align="left"> Reload the call queue from config for the Router. </td></tr>
</tbody>
</table>
<p>The commands for managing Router-based federation. See <a href="../hadoop-hdfs-rbf/HDFSRouterFederation.html#Mount_table_management">Mount table management</a> for more info.</p>
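<p>For example, a mount table entry mapping <code>/data</code> to the nameservice <code>ns1</code> could be managed as follows (the path, nameservice name, owner/group, and quota values are hypothetical):</p>
<div class="source">
<div class="source">
<pre> hdfs dfsrouteradmin -add /data ns1 /data -order RANDOM -owner hdfs -group hadoop -mode 755
 hdfs dfsrouteradmin -ls /data
 hdfs dfsrouteradmin -setQuota /data -nsQuota 100000 -ssQuota 10t
 hdfs dfsrouteradmin -rm /data
</pre></div></div>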
</section><section>
<h3><a name="diskbalancer"></a><code>diskbalancer</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs diskbalancer
 [-plan <datanode> -fs <namenodeURI>]
 [-execute <planfile>]
 [-query <datanode>]
 [-cancel <planfile>]
 [-cancel <planID> -node <datanode>]
 [-report -node <file://> | [<DataNodeID|IP|Hostname>,...]]
 [-report -node -top <topnum>]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left">-plan</td>
<td align="left"> Creates a diskbalancer plan</td></tr>
<tr class="a">
<td align="left">-execute</td>
<td align="left"> Executes a given plan on a datanode</td></tr>
<tr class="b">
<td align="left">-query</td>
<td align="left"> Gets the current diskbalancer status from a datanode</td></tr>
<tr class="a">
<td align="left">-cancel</td>
<td align="left"> Cancels a running plan</td></tr>
<tr class="b">
<td align="left">-report</td>
<td align="left"> Reports the volume information from datanode(s)</td></tr>
</tbody>
</table>
<p>Runs the diskbalancer CLI. See <a href="./HDFSDiskbalancer.html">HDFS Diskbalancer</a> for more information on this command.</p>
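<p>A typical balancing round is plan, execute, then query, for example (the DataNode host below is hypothetical; <code>-plan</code> prints the path of the plan file it generates, and that printed path is what gets passed to <code>-execute</code>):</p>
<div class="source">
<div class="source">
<pre> hdfs diskbalancer -plan dn1.example.com
 # pass the plan file path printed by the previous step
 hdfs diskbalancer -execute /system/diskbalancer/2023-Feb-18/dn1.example.com.plan.json
 hdfs diskbalancer -query dn1.example.com
</pre></div></div>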
</section><section>
<h3><a name="ec"></a><code>ec</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs ec [generic options]
 [-setPolicy -policy <policyName> -path <path>]
 [-getPolicy -path <path>]
 [-unsetPolicy -path <path>]
 [-listPolicies]
 [-addPolicies -policyFile <file>]
 [-listCodecs]
 [-enablePolicy -policy <policyName>]
 [-disablePolicy -policy <policyName>]
 [-removePolicy -policy <policyName>]
 [-verifyClusterSetup -policy <policyName>...<policyName>]
 [-help [cmd ...]]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left">-setPolicy</td>
<td align="left"> Set a specified ErasureCoding policy on a directory</td></tr>
<tr class="a">
<td align="left">-getPolicy</td>
<td align="left"> Get ErasureCoding policy information about a specified path</td></tr>
<tr class="b">
<td align="left">-unsetPolicy</td>
<td align="left"> Unset an ErasureCoding policy set by a previous call to “setPolicy” on a directory</td></tr>
<tr class="a">
<td align="left">-listPolicies</td>
<td align="left"> Lists all supported ErasureCoding policies</td></tr>
<tr class="b">
<td align="left">-addPolicies</td>
<td align="left"> Add a list of erasure coding policies</td></tr>
<tr class="a">
<td align="left">-listCodecs</td>
<td align="left"> Get the list of supported erasure coding codecs and coders in the system</td></tr>
<tr class="b">
<td align="left">-enablePolicy</td>
<td align="left"> Enable an ErasureCoding policy in the system</td></tr>
<tr class="a">
<td align="left">-disablePolicy</td>
<td align="left"> Disable an ErasureCoding policy in the system</td></tr>
<tr class="b">
<td align="left">-removePolicy</td>
<td align="left"> Remove an ErasureCoding policy from the system</td></tr>
<tr class="a">
<td align="left">-verifyClusterSetup</td>
<td align="left"> Verify whether the cluster setup can support a list of erasure coding policies</td></tr>
</tbody>
</table>
<p>Runs the ErasureCoding CLI. See <a href="./HDFSErasureCoding.html#Administrative_commands">HDFS ErasureCoding</a> for more information on this command.</p>
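<p>For example, to enable the built-in <code>RS-6-3-1024k</code> policy and apply it to a directory (the directory path below is hypothetical):</p>
<div class="source">
<div class="source">
<pre> hdfs ec -enablePolicy -policy RS-6-3-1024k
 hdfs ec -setPolicy -policy RS-6-3-1024k -path /cold-data
 hdfs ec -getPolicy -path /cold-data
 hdfs ec -unsetPolicy -path /cold-data
</pre></div></div>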
</section><section>
<h3><a name="haadmin"></a><code>haadmin</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs haadmin -transitionToActive <serviceId> [--forceactive]
 hdfs haadmin -transitionToStandby <serviceId>
 hdfs haadmin -transitionToObserver <serviceId>
 hdfs haadmin -failover [--forcefence] [--forceactive] <serviceId> <serviceId>
 hdfs haadmin -getServiceState <serviceId>
 hdfs haadmin -getAllServiceState
 hdfs haadmin -checkHealth <serviceId>
 hdfs haadmin -help <command>
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-checkHealth</code> </td>
<td align="left"> check the health of the given NameNode </td></tr>
<tr class="a">
<td align="left"> <code>-failover</code> </td>
<td align="left"> initiate a failover between two NameNodes </td></tr>
<tr class="b">
<td align="left"> <code>-getServiceState</code> </td>
<td align="left"> determine whether the given NameNode is Active or Standby </td></tr>
<tr class="a">
<td align="left"> <code>-getAllServiceState</code> </td>
<td align="left"> returns the state of all the NameNodes </td></tr>
<tr class="b">
<td align="left"> <code>-transitionToActive</code> </td>
<td align="left"> transition the state of the given NameNode to Active (Warning: No fencing is done) </td></tr>
<tr class="a">
<td align="left"> <code>-transitionToStandby</code> </td>
<td align="left"> transition the state of the given NameNode to Standby (Warning: No fencing is done) </td></tr>
<tr class="b">
<td align="left"> <code>-transitionToObserver</code> </td>
<td align="left"> transition the state of the given NameNode to Observer (Warning: No fencing is done) </td></tr>
<tr class="a">
<td align="left"> <code>-help</code> [cmd] </td>
<td align="left"> Displays help for the given command or all commands if none is specified. </td></tr>
</tbody>
</table>
<p>See <a href="./HDFSHighAvailabilityWithNFS.html#Administrative_commands">HDFS HA with NFS</a> or <a href="./HDFSHighAvailabilityWithQJM.html#Administrative_commands">HDFS HA with QJM</a> for more information on this command.</p>
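<p>For example, with two NameNodes whose service IDs are <code>nn1</code> and <code>nn2</code> (service IDs are deployment-specific), a manual failover might look like:</p>
<div class="source">
<div class="source">
<pre> hdfs haadmin -getAllServiceState
 hdfs haadmin -getServiceState nn1
 hdfs haadmin -failover nn1 nn2
</pre></div></div>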
</section><section>
<h3><a name="journalnode"></a><code>journalnode</code></h3>
<p>Usage: <code>hdfs journalnode</code></p>
<p>This command starts a journalnode for use with <a href="./HDFSHighAvailabilityWithQJM.html#Administrative_commands">HDFS HA with QJM</a>.</p></section><section>
<h3><a name="mover"></a><code>mover</code></h3>
<p>Usage: <code>hdfs mover [-p <files/dirs> | -f <local file name>]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-f</code> <local file> </td>
<td align="left"> Specify a local file containing a list of HDFS files/dirs to migrate. </td></tr>
<tr class="a">
<td align="left"> <code>-p</code> <files/dirs> </td>
<td align="left"> Specify a space separated list of HDFS files/dirs to migrate. </td></tr>
</tbody>
</table>
<p>Runs the data migration utility. See <a href="./ArchivalStorage.html#Mover_-_A_New_Data_Migration_Tool">Mover</a> for more details.</p>
<p>Note that, when both the -p and -f options are omitted, the default path is the root directory.</p>
<p>In addition, a pinning feature was introduced in 2.7.0 to prevent certain replicas from being moved by the balancer/mover. This pinning feature is disabled by default, and can be enabled by the configuration property “dfs.datanode.block-pinning.enabled”. When enabled, this feature only affects blocks that are written to favored nodes specified in the create() call. This feature is useful for maintaining data locality for applications such as the HBase regionserver.</p>
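<p>For example, to migrate the hypothetical directory <code>/archive</code> according to its storage policy, or to migrate a list of paths read from a local file:</p>
<div class="source">
<div class="source">
<pre> hdfs mover -p /archive
 hdfs mover -f /tmp/paths-to-migrate.txt
</pre></div></div>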
</section><section>
<h3><a name="namenode"></a><code>namenode</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs namenode [-backup] |
 [-checkpoint] |
 [-format [-clusterid cid ] [-force] [-nonInteractive] ] |
 [-upgrade [-clusterid cid] [-renameReserved<k-v pairs>] ] |
 [-upgradeOnly [-clusterid cid] [-renameReserved<k-v pairs>] ] |
 [-rollback] |
 [-rollingUpgrade <rollback |started> ] |
 [-importCheckpoint] |
 [-initializeSharedEdits] |
 [-bootstrapStandby [-force] [-nonInteractive] [-skipSharedEditsCheck] ] |
 [-recover [-force] ] |
 [-metadataVersion ]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-backup</code> </td>
<td align="left"> Start backup node. </td></tr>
<tr class="a">
<td align="left"> <code>-checkpoint</code> </td>
<td align="left"> Start checkpoint node. </td></tr>
<tr class="b">
<td align="left"> <code>-format</code> <code>[-clusterid cid]</code> </td>
<td align="left"> Formats the specified NameNode. It starts the NameNode, formats it and then shuts it down. Throws a NameNodeFormatException if the name dir already exists and reformat is disabled for the cluster. </td></tr>
<tr class="a">
<td align="left"> <code>-upgrade</code> <code>[-clusterid cid]</code> [<code>-renameReserved</code> <k-v pairs>] </td>
<td align="left"> The Namenode should be started with the upgrade option after the distribution of a new Hadoop version. </td></tr>
<tr class="b">
<td align="left"> <code>-upgradeOnly</code> <code>[-clusterid cid]</code> [<code>-renameReserved</code> <k-v pairs>] </td>
<td align="left"> Upgrade the specified NameNode and then shut it down. </td></tr>
<tr class="a">
<td align="left"> <code>-rollback</code> </td>
<td align="left"> Rollback the NameNode to the previous version. This should be used after stopping the cluster and distributing the old Hadoop version. </td></tr>
<tr class="b">
<td align="left"> <code>-rollingUpgrade</code> <rollback|started> </td>
<td align="left"> See <a href="./HdfsRollingUpgrade.html#NameNode_Startup_Options">Rolling Upgrade document</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-importCheckpoint</code> </td>
<td align="left"> Loads an image from a checkpoint directory and saves it into the current one. The checkpoint dir is read from the property dfs.namenode.checkpoint.dir </td></tr>
<tr class="b">
<td align="left"> <code>-initializeSharedEdits</code> </td>
<td align="left"> Format a new shared edits dir and copy in enough edit log segments so that the standby NameNode can start up. </td></tr>
<tr class="a">
<td align="left"> <code>-bootstrapStandby</code> <code>[-force]</code> <code>[-nonInteractive]</code> <code>[-skipSharedEditsCheck]</code> </td>
<td align="left"> Allows the standby NameNode’s storage directories to be bootstrapped by copying the latest namespace snapshot from the active NameNode. This is used when first configuring an HA cluster. The options -force and -nonInteractive have the same meaning as described for the namenode -format command. The -skipSharedEditsCheck option skips the edits check, which ensures that we already have enough edits in the shared directory to start up from the last checkpoint on the active. </td></tr>
<tr class="b">
<td align="left"> <code>-recover</code> <code>[-force]</code> </td>
<td align="left"> Recover lost metadata on a corrupt filesystem. See <a href="./HdfsUserGuide.html#Recovery_Mode">HDFS User Guide</a> for the detail. </td></tr>
<tr class="a">
<td align="left"> <code>-metadataVersion</code> </td>
<td align="left"> Verify that configured directories exist, then print the metadata versions of the software and the image. </td></tr>
</tbody>
</table>
<p>Runs the namenode. More info about the upgrade and rollback is at <a href="./HdfsUserGuide.html#Upgrade_and_Rollback">Upgrade Rollback</a>.</p>
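<p>For example, formatting a brand-new NameNode non-interactively and, separately, starting a NameNode in rolling-upgrade mode (the cluster ID below is illustrative):</p>
<div class="source">
<div class="source">
<pre> hdfs namenode -format -clusterid CID-example -nonInteractive
 hdfs namenode -rollingUpgrade started
</pre></div></div>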
</section><section>
<h3><a name="nfs3"></a><code>nfs3</code></h3>
<p>Usage: <code>hdfs nfs3</code></p>
<p>This command starts the NFS3 gateway for use with the <a href="./HdfsNfsGateway.html#Start_and_stop_NFS_gateway_service">HDFS NFS3 Service</a>.</p></section><section>
<h3><a name="portmap"></a><code>portmap</code></h3>
<p>Usage: <code>hdfs portmap</code></p>
<p>This command starts the RPC portmap for use with the <a href="./HdfsNfsGateway.html#Start_and_stop_NFS_gateway_service">HDFS NFS3 Service</a>.</p></section><section>
<h3><a name="secondarynamenode"></a><code>secondarynamenode</code></h3>
<p>Usage: <code>hdfs secondarynamenode [-checkpoint [force]] | [-format] | [-geteditsize]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-checkpoint</code> [force] </td>
<td align="left"> Checkpoints the SecondaryNameNode if EditLog size >= fs.checkpoint.size. If <code>force</code> is used, a checkpoint is performed irrespective of the EditLog size. </td></tr>
<tr class="a">
<td align="left"> <code>-format</code> </td>
<td align="left"> Format the local storage during startup. </td></tr>
<tr class="b">
<td align="left"> <code>-geteditsize</code> </td>
<td align="left"> Prints the number of uncheckpointed transactions on the NameNode. </td></tr>
</tbody>
</table>
<p>Runs the HDFS secondary namenode. See <a href="./HdfsUserGuide.html#Secondary_NameNode">Secondary Namenode</a> for more info.</p></section><section>
<h3><a name="storagepolicies"></a><code>storagepolicies</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre> hdfs storagepolicies
 [-listPolicies]
 [-setStoragePolicy -path <path> -policy <policy>]
 [-getStoragePolicy -path <path>]
 [-unsetStoragePolicy -path <path>]
 [-satisfyStoragePolicy -path <path>]
 [-isSatisfierRunning]
 [-help <command-name>]
</pre></div></div>
<p>Lists all storage policies, and gets/sets/unsets the storage policy for a given path. See the <a href="./ArchivalStorage.html">HDFS Storage Policy Documentation</a> for more information.</p>
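<p>For example, assigning the built-in <code>COLD</code> policy to a hypothetical directory and then asking the storage policy satisfier to move existing blocks accordingly:</p>
<div class="source">
<div class="source">
<pre> hdfs storagepolicies -listPolicies
 hdfs storagepolicies -setStoragePolicy -path /archive -policy COLD
 hdfs storagepolicies -getStoragePolicy -path /archive
 hdfs storagepolicies -satisfyStoragePolicy -path /archive
</pre></div></div>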
</section><section>
<h3><a name="zkfc"></a><code>zkfc</code></h3>
<p>Usage: <code>hdfs zkfc [-formatZK [-force] [-nonInteractive]]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-formatZK</code> </td>
<td align="left"> Format the Zookeeper instance. -force: formats the znode even if it already exists. -nonInteractive: aborts if the znode exists, unless the -force option is specified. </td></tr>
<tr class="a">
<td align="left"> <code>-h</code> </td>
<td align="left"> Display help </td></tr>
</tbody>
</table>
<p>This command starts a Zookeeper Failover Controller process for use with <a href="./HDFSHighAvailabilityWithQJM.html#Administrative_commands">HDFS HA with QJM</a>.</p></section></section><section>
<h2><a name="Debug_Commands"></a>Debug Commands</h2>
<p>Useful commands to help administrators debug HDFS issues. These commands are for advanced users only.</p><section>
<h3><a name="verifyMeta"></a><code>verifyMeta</code></h3>
<p>Usage: <code>hdfs debug verifyMeta -meta <metadata-file> [-block <block-file>]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-block</code> <i>block-file</i> </td>
<td align="left"> Optional parameter to specify the absolute path for the block file on the local file system of the data node. </td></tr>
<tr class="a">
<td align="left"> <code>-meta</code> <i>metadata-file</i> </td>
<td align="left"> Absolute path for the metadata file on the local file system of the data node. </td></tr>
</tbody>
</table>
<p>Verify HDFS metadata and block files. If a block file is specified, we will verify that the checksums in the metadata file match the block file.</p>
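<p>For example, to check a block against its metadata file on a DataNode's local disk (the paths below are hypothetical; actual block files live under the directories configured in <code>dfs.datanode.data.dir</code>):</p>
<div class="source">
<div class="source">
<pre> hdfs debug verifyMeta -meta /data/1/dn/current/BP-1/current/finalized/subdir0/subdir0/blk_1073741825_1001.meta \
     -block /data/1/dn/current/BP-1/current/finalized/subdir0/subdir0/blk_1073741825
</pre></div></div>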
</section><section>
<h3><a name="computeMeta"></a><code>computeMeta</code></h3>
<p>Usage: <code>hdfs debug computeMeta -block <block-file> -out <output-metadata-file></code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-block</code> <i>block-file</i> </td>
<td align="left"> Absolute path for the block file on the local file system of the data node. </td></tr>
<tr class="a">
<td align="left"> <code>-out</code> <i>output-metadata-file</i> </td>
<td align="left"> Absolute path for the output metadata file to store the checksum computation result from the block file. </td></tr>
</tbody>
</table>
<p>Compute HDFS metadata from block files. If a block file is specified, we will compute the checksums from the block file, and save them to the specified output metadata file.</p>
<p><b>NOTE</b>: Use at your own risk! If the block file is corrupt and you overwrite its meta file, it will show up as ‘good’ in HDFS, but you can’t read the data. Only use as a last measure, and when you are 100% certain the block file is good.</p>
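<p>A minimal sketch, assuming the (hypothetical) block file below is known to be good:</p>
<div class="source">
<div class="source">
<pre> hdfs debug computeMeta -block /data/1/dn/current/BP-1/current/finalized/subdir0/subdir0/blk_1073741825 -out /tmp/blk_1073741825.meta
</pre></div></div>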
</section><section>
<h3><a name="recoverLease"></a><code>recoverLease</code></h3>
<p>Usage: <code>hdfs debug recoverLease -path <path> [-retries <num-retries>]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> [<code>-path</code> <i>path</i>] </td>
<td align="left"> HDFS path for which to recover the lease. </td></tr>
<tr class="a">
<td align="left"> [<code>-retries</code> <i>num-retries</i>] </td>
<td align="left"> Number of times the client will retry calling recoverLease. The default number of retries is 1. </td></tr>
</tbody>
</table>
<p>Recover the lease on the specified path. The path must reside on an HDFS file system. The default number of retries is 1.</p>
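<p>For example, to force lease recovery on a hypothetical file left open by a crashed writer, retrying up to 3 times:</p>
<div class="source">
<div class="source">
<pre> hdfs debug recoverLease -path /logs/app/events.log -retries 3
</pre></div></div>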
</section><section>
<h3><a name="verifyEC"></a><code>verifyEC</code></h3>
<p>Usage: <code>hdfs debug verifyEC -file <file></code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> [<code>-file</code> <i>EC-file</i>] </td>
<td align="left"> HDFS EC file to be verified. </td></tr>
</tbody>
</table>
<p>Verify the correctness of erasure coding on an erasure coded file.</p>
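<p>For example, to verify a hypothetical erasure coded file:</p>
<div class="source">
<div class="source">
<pre> hdfs debug verifyEC -file /cold-data/part-00000
</pre></div></div>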
</section></section><section>
<h2><a name="dfsadmin_with_ViewFsOverloadScheme"></a>dfsadmin with ViewFsOverloadScheme</h2>
<p>Usage: <code>hdfs dfsadmin -fs <child fs mount link URI> <dfsadmin command options></code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-fs</code> <i>child fs mount link URI</i> </td>
<td align="left"> It’s a logical mount link path to a child file system in the ViewFS world. This URI is typically formed as the src mount link prefixed with fs.defaultFS. Please note, this is not an actual child file system URI; instead it’s a logical mount link URI pointing to the actual child file system. </td></tr>
</tbody>
</table>
<p>Example command usage: <code>hdfs dfsadmin -fs hdfs://nn1 -safemode enter</code></p>
<p>In ViewFsOverloadScheme, we may have multiple child file systems as mount point mappings, as shown in the <a href="./ViewFsOverloadScheme.html">ViewFsOverloadScheme Guide</a>. Here the -fs option is an optional generic parameter supported by dfsadmin. When users want to execute commands on one of the child file systems, they need to pass that file system’s mount mapping link URI to the -fs option. Let’s take the example mount link configuration and dfsadmin command below.</p>
<p>Mount link:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>fs.defaultFS</name>
  <value>hdfs://MyCluster1</value>
</property>

<property>
  <name>fs.viewfs.mounttable.MyCluster1./user</name>
  <value>hdfs://MyCluster2/user</value>
  <!-- mount table name : MyCluster1
       mount link mapping: hdfs://MyCluster1/user --> hdfs://MyCluster2/user
       mount link path: /user
       mount link uri: hdfs://MyCluster1/user
       mount target uri for /user: hdfs://MyCluster2/user -->
</property>
</pre></div></div>
<p>If a user wants to talk to <code>hdfs://MyCluster2/</code>, they can pass the -fs option (<code>-fs hdfs://MyCluster1/user</code>). Since /user was mapped to the cluster <code>hdfs://MyCluster2/user</code>, dfsadmin resolves the passed (<code>-fs hdfs://MyCluster1/user</code>) to the target fs (<code>hdfs://MyCluster2/user</code>). This way users can get access to all HDFS child file systems in ViewFsOverloadScheme. If no <code>-fs</code> option is provided, it will try to connect to the configured fs.defaultFS cluster, if there is a cluster running with the fs.defaultFS URI.</p></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
© 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>