<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-03-25
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT &#x2013; Hadoop Commands Guide</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230325" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2023-03-25
&nbsp;| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
<li class="none">
<a href="../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop Commands Guide</h1>
<ul>
<li><a href="#Overview">Overview</a>
<ul>
<li><a href="#Shell_Options">Shell Options</a></li>
<li><a href="#Generic_Options">Generic Options</a></li></ul></li>
<li><a href="#User_Commands">User Commands</a>
<ul>
<li><a href="#archive">archive</a></li>
<li><a href="#checknative">checknative</a></li>
<li><a href="#classpath">classpath</a></li>
<li><a href="#conftest">conftest</a></li>
<li><a href="#credential">credential</a></li>
<li><a href="#distch">distch</a></li>
<li><a href="#distcp">distcp</a></li>
<li><a href="#dtutil">dtutil</a></li>
<li><a href="#fs">fs</a></li>
<li><a href="#gridmix">gridmix</a></li>
<li><a href="#jar">jar</a></li>
<li><a href="#jnipath">jnipath</a></li>
<li><a href="#kerbname">kerbname</a></li>
<li><a href="#kdiag">kdiag</a></li>
<li><a href="#key">key</a></li>
<li><a href="#kms">kms</a></li>
<li><a href="#version">version</a></li>
<li><a href="#CLASSNAME">CLASSNAME</a></li>
<li><a href="#envvars">envvars</a></li></ul></li>
<li><a href="#Administration_Commands">Administration Commands</a>
<ul>
<li><a href="#daemonlog">daemonlog</a></li></ul></li>
<li><a href="#Files">Files</a>
<ul>
<li><a href="#etc.2Fhadoop.2Fhadoop-env.sh">etc/hadoop/hadoop-env.sh</a></li>
<li><a href="#etc.2Fhadoop.2Fhadoop-user-functions.sh">etc/hadoop/hadoop-user-functions.sh</a></li>
<li><a href="#a.7E.2F.hadooprc">~/.hadooprc</a></li></ul></li></ul>
<section>
<h2><a name="Overview"></a>Overview</h2>
<p>All of the Hadoop commands and subprojects follow the same basic structure:</p>
<p>Usage: <code>shellcommand [SHELL_OPTIONS] [COMMAND] [GENERIC_OPTIONS] [COMMAND_OPTIONS]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> FIELD </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> shellcommand </td>
<td align="left"> The command of the project being invoked. For example, Hadoop common uses <code>hadoop</code>, HDFS uses <code>hdfs</code>, and YARN uses <code>yarn</code>. </td></tr>
<tr class="a">
<td align="left"> SHELL_OPTIONS </td>
<td align="left"> Options that the shell processes prior to executing Java. </td></tr>
<tr class="b">
<td align="left"> COMMAND </td>
<td align="left"> Action to perform. </td></tr>
<tr class="a">
<td align="left"> GENERIC_OPTIONS </td>
<td align="left"> The common set of options supported by multiple commands. </td></tr>
<tr class="b">
<td align="left"> COMMAND_OPTIONS </td>
<td align="left"> Various commands with their options are described in this documentation for the Hadoop common sub-project. HDFS and YARN are covered in other documents. </td></tr>
</tbody>
</table><section>
<h3><a name="Shell_Options"></a>Shell Options</h3>
<p>All of the shell commands accept a common set of options. For some commands, these options are ignored; for example, <code>--hostnames</code> is ignored by a command that only executes on a single host.</p>
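<p>For instance, daemon control with these options might look like the following (a sketch, assuming a configured HDFS installation; the subcommand names are only illustrative of daemon-capable commands):</p>

```shell
# Start the NameNode as a background daemon, then query its status.
hdfs --daemon start namenode
hdfs --daemon status namenode
status=$?            # LSB result code: 0 = running, 3 = not running

# Run a command in the foreground with a raised log level.
hadoop --loglevel DEBUG fs -ls /
```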
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> SHELL_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>--buildpaths</code> </td>
<td align="left"> Enables developer versions of jars. </td></tr>
<tr class="a">
<td align="left"> <code>--config confdir</code> </td>
<td align="left"> Overrides the default configuration directory. Default is <code>$HADOOP_HOME/etc/hadoop</code>. </td></tr>
<tr class="b">
<td align="left"> <code>--daemon mode</code> </td>
<td align="left"> If the command supports daemonization (e.g., <code>hdfs namenode</code>), execute in the appropriate mode. Supported modes are <code>start</code> to start the process in daemon mode, <code>stop</code> to stop the process, and <code>status</code> to determine the active status of the process. <code>status</code> will return an <a class="externalLink" href="http://refspecs.linuxbase.org/LSB_3.0.0/LSB-generic/LSB-generic/iniscrptact.html">LSB-compliant</a> result code. If no option is provided, commands that support daemonization will run in the foreground. For commands that do not support daemonization, this option is ignored. </td></tr>
<tr class="a">
<td align="left"> <code>--debug</code> </td>
<td align="left"> Enables shell-level configuration debugging information. </td></tr>
<tr class="b">
<td align="left"> <code>--help</code> </td>
<td align="left"> Shell script usage information. </td></tr>
<tr class="a">
<td align="left"> <code>--hostnames</code> </td>
<td align="left"> When <code>--workers</code> is used, override the workers file with a space-delimited list of hostnames on which to execute a multi-host subcommand. If <code>--workers</code> is not used, this option is ignored. </td></tr>
<tr class="b">
<td align="left"> <code>--hosts</code> </td>
<td align="left"> When <code>--workers</code> is used, override the workers file with another file containing a list of hostnames on which to execute a multi-host subcommand. If <code>--workers</code> is not used, this option is ignored. </td></tr>
<tr class="a">
<td align="left"> <code>--loglevel loglevel</code> </td>
<td align="left"> Overrides the log level. Valid log levels are FATAL, ERROR, WARN, INFO, DEBUG, and TRACE. Default is INFO. </td></tr>
<tr class="b">
<td align="left"> <code>--workers</code> </td>
<td align="left"> If possible, execute this command on all hosts in the <code>workers</code> file. </td></tr>
</tbody>
</table></section><section>
<h3><a name="Generic_Options"></a>Generic Options</h3>
<p>Many subcommands honor a common set of configuration options to alter their behavior:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> GENERIC_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-archives &lt;comma separated list of archives&gt;</code> </td>
<td align="left"> Specify comma-separated archives to be unarchived on the compute machines. Applies only to jobs. </td></tr>
<tr class="a">
<td align="left"> <code>-conf &lt;configuration file&gt;</code> </td>
<td align="left"> Specify an application configuration file. </td></tr>
<tr class="b">
<td align="left"> <code>-D &lt;property&gt;=&lt;value&gt;</code> </td>
<td align="left"> Use value for given property. </td></tr>
<tr class="a">
<td align="left"> <code>-files &lt;comma separated list of files&gt;</code> </td>
<td align="left"> Specify comma-separated files to be copied to the MapReduce cluster. Applies only to jobs. </td></tr>
<tr class="b">
<td align="left"> <code>-fs &lt;file:///&gt; or &lt;hdfs://namenode:port&gt;</code> </td>
<td align="left"> Specify default filesystem URL to use. Overrides &#x2018;fs.defaultFS&#x2019; property from configurations. </td></tr>
<tr class="a">
<td align="left"> <code>-jt &lt;local&gt; or &lt;resourcemanager:port&gt;</code> </td>
<td align="left"> Specify a ResourceManager. Applies only to job. </td></tr>
<tr class="b">
<td align="left"> <code>-libjars &lt;comma separated list of jars&gt;</code> </td>
<td align="left"> Specify comma-separated jar files to include in the classpath. Applies only to jobs. </td></tr>
</tbody>
</table>
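<p>As a sketch, a hypothetical job submission combining several generic options (the jar, class, and file names are illustrative only):</p>

```shell
# -D overrides a single configuration property for this invocation,
# -files ships local side files to the cluster, and -libjars adds
# extra jars to the job classpath. Generic options come before the
# application's own arguments.
hadoop jar wordcount.jar WordCount \
  -D mapreduce.job.reduces=4 \
  -files lookup.txt \
  -libjars deps/guava.jar,deps/commons-io.jar \
  /input /output

# The -libjars value is a comma-separated list; one way to build it:
LIBJARS=$(ls deps/*.jar | paste -sd, -)
```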
<h1>Hadoop Common Commands</h1>
<p>All of these commands are executed from the <code>hadoop</code> shell command. They have been broken up into <a href="#User_Commands">User Commands</a> and <a href="#Administration_Commands">Administration Commands</a>.</p></section></section><section>
<h2><a name="User_Commands"></a>User Commands</h2>
<p>Commands useful for users of a Hadoop cluster.</p><section>
<h3><a name="archive"></a><code>archive</code></h3>
<p>Creates a hadoop archive. More information can be found at <a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives Guide</a>.</p></section><section>
<h3><a name="checknative"></a><code>checknative</code></h3>
<p>Usage: <code>hadoop checknative [-a] [-h]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-a</code> </td>
<td align="left"> Check that all libraries are available. </td></tr>
<tr class="a">
<td align="left"> <code>-h</code> </td>
<td align="left"> print help </td></tr>
</tbody>
</table>
<p>This command checks the availability of the Hadoop native code. See <a href="./NativeLibraries.html">Native Libraries</a> for more information. By default, this command only checks the availability of libhadoop.</p></section><section>
<h3><a name="classpath"></a><code>classpath</code></h3>
<p>Usage: <code>hadoop classpath [--glob |--jar &lt;path&gt; |-h |--help]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>--glob</code> </td>
<td align="left"> expand wildcards </td></tr>
<tr class="a">
<td align="left"> <code>--jar</code> <i>path</i> </td>
<td align="left"> write classpath as manifest in jar named <i>path</i> </td></tr>
<tr class="b">
<td align="left"> <code>-h</code>, <code>--help</code> </td>
<td align="left"> print help </td></tr>
</tbody>
</table>
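<p>A few illustrative invocations (the jar path and class name are hypothetical):</p>

```shell
# Print the classpath as configured, typically containing wildcards.
hadoop classpath

# Print the classpath with every wildcard expanded.
hadoop classpath --glob

# Write the expanded classpath into the manifest of a "pathing" jar,
# for environments where the command line would otherwise be too long.
hadoop classpath --jar /tmp/hadoop-cp.jar
java -cp /tmp/hadoop-cp.jar MyTool
```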
<p>Prints the class path needed to get the Hadoop jar and the required libraries. If called without arguments, then prints the classpath set up by the command scripts, which is likely to contain wildcards in the classpath entries. Additional options print the classpath after wildcard expansion or write the classpath into the manifest of a jar file. The latter is useful in environments where wildcards cannot be used and the expanded classpath exceeds the maximum supported command line length.</p></section><section>
<h3><a name="conftest"></a><code>conftest</code></h3>
<p>Usage: <code>hadoop conftest [-conffile &lt;path&gt;]...</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-conffile</code> </td>
<td align="left"> Path of a configuration file or directory to validate </td></tr>
<tr class="a">
<td align="left"> <code>-h</code>, <code>--help</code> </td>
<td align="left"> print help </td></tr>
</tbody>
</table>
<p>Validates configuration XML files. If the <code>-conffile</code> option is not specified, the files in <code>${HADOOP_CONF_DIR}</code> whose names end with <code>.xml</code> will be verified. If specified, that path will be verified. You can specify either a file or a directory; if a directory is specified, the files in that directory whose names end with <code>.xml</code> will be verified. The <code>-conffile</code> option may be specified multiple times.</p>
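<p>For example (the paths are illustrative):</p>

```shell
# Validate every .xml file in ${HADOOP_CONF_DIR}.
hadoop conftest

# Validate one file and one directory explicitly; the option may repeat.
hadoop conftest -conffile etc/hadoop/core-site.xml -conffile /tmp/conf.d
echo $?     # 0 when all files are valid, non-zero otherwise
```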
<p>The validation is fairly minimal: the XML is parsed, and duplicate and empty property names are checked for. The command does not support XInclude; if you are using that to pull in configuration items, it will declare the XML file invalid.</p></section><section>
<h3><a name="credential"></a><code>credential</code></h3>
<p>Usage: <code>hadoop credential &lt;subcommand&gt; [options]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> create <i>alias</i> [-provider <i>provider-path</i>] [-strict] [-value <i>credential-value</i>] </td>
<td align="left"> Prompts the user for a credential to be stored as the given alias. The <i>hadoop.security.credential.provider.path</i> within the core-site.xml file will be used unless a <code>-provider</code> is indicated. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. Use <code>-value</code> flag to supply the credential value (a.k.a. the alias password) instead of being prompted. </td></tr>
<tr class="a">
<td align="left"> delete <i>alias</i> [-provider <i>provider-path</i>] [-strict] [-f] </td>
<td align="left"> Deletes the credential with the provided alias. The <i>hadoop.security.credential.provider.path</i> within the core-site.xml file will be used unless a <code>-provider</code> is indicated. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. The command asks for confirmation unless <code>-f</code> is specified </td></tr>
<tr class="b">
<td align="left"> list [-provider <i>provider-path</i>] [-strict] </td>
<td align="left"> Lists all of the credential aliases. The <i>hadoop.security.credential.provider.path</i> within the core-site.xml file will be used unless a <code>-provider</code> is indicated. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. </td></tr>
<tr class="a">
<td align="left"> check <i>alias</i> [-provider <i>provider-path</i>] [-strict] </td>
<td align="left"> Check the password for the given alias. The <i>hadoop.security.credential.provider.path</i> within the core-site.xml file will be used unless a <code>-provider</code> is indicated. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. </td></tr>
</tbody>
</table>
<p>Command to manage credentials, passwords and secrets within credential providers.</p>
<p>The CredentialProvider API in Hadoop separates applications from how they store their required passwords/secrets. To indicate a particular provider type and location, the user must provide the <i>hadoop.security.credential.provider.path</i> configuration element in core-site.xml or use the command line option <code>-provider</code> on each of the following commands. This provider path is a comma-separated list of URLs that indicates the type and location of a list of providers that should be consulted. For example, the following path: <code>user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks</code></p>
<p>indicates that the current user&#x2019;s credentials file should be consulted through the User Provider, that the local file located at <code>/tmp/test.jceks</code> is a Java Keystore Provider and that the file located within HDFS at <code>nn1.example.com/my/path/test.jceks</code> is also a store for a Java Keystore Provider.</p>
<p>The credential command is most often used to provision a password or secret into a particular credential store provider. To explicitly indicate which provider store to use, supply the <code>-provider</code> option. Otherwise, given a path of multiple providers, the first non-transient provider will be used. This may or may not be the one that you intended.</p>
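<p>As an illustration of that resolution order, the following stand-alone shell sketch walks a provider path and picks the first non-transient entry, assuming (for this sketch only) that the <code>user:///</code> scheme is the only transient one:</p>

```shell
# Walk a comma-separated provider path and pick the first non-transient
# provider, mimicking the default selection described above.
# Assumption for this sketch: only the user:/// scheme counts as transient.
path="user:///,jceks://file/tmp/test.jceks,jceks://hdfs@nn1.example.com/my/path/test.jceks"

chosen=""
old_ifs="$IFS"; IFS=','
for p in $path; do
  case "$p" in
    user:///*) continue ;;       # transient: lives only in the user's credentials
    *) chosen="$p"; break ;;     # first persistent provider wins
  esac
done
IFS="$old_ifs"
echo "$chosen"                   # prints jceks://file/tmp/test.jceks
```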
<p>Providers frequently require that a password or other secret is supplied. If the provider requires a password and is unable to find one, it will use a default password and emit a warning message that the default password is being used. If the <code>-strict</code> flag is supplied, the warning message becomes an error message and the command returns immediately with an error status.</p>
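<p>A typical provisioning session might look like the following sketch (the alias and keystore path are examples only; this assumes the <code>hadoop</code> CLI is on the PATH and will prompt for the secret interactively):</p>

```shell
# Store a secret under an alias in a local Java keystore provider,
# confirm it is there, then remove it without a confirmation prompt.
PROVIDER="jceks://file/tmp/test.jceks"

hadoop credential create ssl.server.keystore.password -provider "$PROVIDER"
hadoop credential list -provider "$PROVIDER"
hadoop credential delete ssl.server.keystore.password -provider "$PROVIDER" -f
```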
<p>Example: <code>hadoop credential list -provider jceks://file/tmp/test.jceks</code></p></section><section>
<h3><a name="distch"></a><code>distch</code></h3>
<p>Usage: <code>hadoop distch [-f urilist_url] [-i] [-log logdir] path:owner:group:permissions</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-f</code> </td>
<td align="left"> List of objects to change </td></tr>
<tr class="a">
<td align="left"> <code>-i</code> </td>
<td align="left"> Ignore failures </td></tr>
<tr class="b">
<td align="left"> <code>-log</code> </td>
<td align="left"> Directory to log output </td></tr>
</tbody>
</table>
<p>Change the ownership and permissions on many files at once.</p></section><section>
<h3><a name="distcp"></a><code>distcp</code></h3>
<p>Copy file or directories recursively. More information can be found at <a href="../../hadoop-distcp/DistCp.html">Hadoop DistCp Guide</a>.</p></section><section>
<h3><a name="dtutil"></a><code>dtutil</code></h3>
<p>Usage: <code>hadoop dtutil [-keytab</code> <i>keytab_file</i> <code>-principal</code> <i>principal_name</i> <code>]</code> <i>subcommand</i> <code>[-format (java|protobuf)] [-alias</code> <i>alias</i> <code>] [-renewer</code> <i>renewer</i> <code>]</code> <i>filename&#x2026;</i></p>
<p>Utility to fetch and manage hadoop delegation tokens inside credentials files. It is intended to replace the simpler command <code>fetchdt</code>. There are multiple subcommands, each with their own flags and options.</p>
<p>For every subcommand that writes out a file, the <code>-format</code> option will specify the internal format to use. <code>java</code> is the legacy format that matches <code>fetchdt</code>. The default is <code>protobuf</code>.</p>
<p>For every subcommand that connects to a service, convenience flags are provided to specify the kerberos principal name and keytab file to use for auth.</p>
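<p>A hypothetical session (the host name, renewer and file name below are examples only, and a running kerberized cluster is assumed) that fetches a token, inspects it, and later renews it might look like:</p>

```shell
# Fetch an HDFS delegation token into a credentials file, print its
# fields, then renew it by alias.
hadoop dtutil get hdfs://nn1.example.com:9000 -renewer yarn tokens.bin
hadoop dtutil print tokens.bin
hadoop dtutil renew -alias hdfs://nn1.example.com:9000 tokens.bin
```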
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> SUBCOMMAND </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>print</code> <br />&#xa0;&#xa0; <code>[-alias</code> <i>alias</i> <code>]</code> <br />&#xa0;&#xa0; <i>filename</i> <code>[</code> <i>filename2</i> <code>...]</code> </td>
<td align="left"> Print out the fields in the tokens contained in <i>filename</i> (and <i>filename2</i> &#x2026;). <br /> If <i>alias</i> is specified, print only tokens matching <i>alias</i>. Otherwise, print all tokens. </td></tr>
<tr class="a">
<td align="left"> <code>get</code> <i>URL</i> <br />&#xa0;&#xa0; <code>[-service</code> <i>scheme</i> <code>]</code> <br />&#xa0;&#xa0; <code>[-format (java|protobuf)]</code> <br />&#xa0;&#xa0; <code>[-alias</code> <i>alias</i> <code>]</code> <br />&#xa0;&#xa0; <code>[-renewer</code> <i>renewer</i> <code>]</code> <br />&#xa0;&#xa0; <i>filename</i> </td>
<td align="left"> Fetch a token from service at <i>URL</i> and place it in <i>filename</i>. <br /> <i>URL</i> is required and must immediately follow <code>get</code>.<br /> <i>URL</i> is the service URL, e.g. <i>hdfs://localhost:9000</i>. <br /> <i>alias</i> will overwrite the service field in the token. <br /> It is intended for hosts that have external and internal names, e.g. <i>firewall.com:14000</i>. <br /> <i>filename</i> should come last and is the name of the token file. <br /> It will be created if it does not exist. Otherwise, token(s) are added to existing file. <br /> The <code>-service</code> flag should only be used with a URL which starts with <code>http</code> or <code>https</code>. <br /> The following are equivalent: <i>hdfs://localhost:9000/</i> vs. <i>http://localhost:9000</i> <code>-service</code> <i>hdfs</i> </td></tr>
<tr class="b">
<td align="left"> <code>append</code> <br />&#xa0;&#xa0; <code>[-format (java|protobuf)]</code> <br />&#xa0;&#xa0; <i>filename</i> <i>filename2</i> <code>[</code> <i>filename3</i> <code>...]</code> </td>
<td align="left"> Append the contents of the first N filenames onto the last filename. <br /> When tokens with common service fields are present in multiple files, earlier files&#x2019; tokens are overwritten. <br /> That is, tokens present in the last file are always preserved. </td></tr>
<tr class="a">
<td align="left"> <code>remove -alias</code> <i>alias</i> <br />&#xa0;&#xa0; <code>[-format (java|protobuf)]</code> <br />&#xa0;&#xa0; <i>filename</i> <code>[</code> <i>filename2</i> <code>...]</code> </td>
<td align="left"> From each file specified, remove the tokens matching <i>alias</i> and write out each file using specified format. <br /> <i>alias</i> must be specified. </td></tr>
<tr class="b">
<td align="left"> <code>cancel -alias</code> <i>alias</i> <br />&#xa0;&#xa0; <code>[-format (java|protobuf)]</code> <br />&#xa0;&#xa0; <i>filename</i> <code>[</code> <i>filename2</i> <code>...]</code> </td>
<td align="left"> Just like <code>remove</code>, except the tokens are also cancelled using the service specified in the token object. <br /> <i>alias</i> must be specified. </td></tr>
<tr class="a">
<td align="left"> <code>renew -alias</code> <i>alias</i> <br />&#xa0;&#xa0; <code>[-format (java|protobuf)]</code> <br />&#xa0;&#xa0; <i>filename</i> <code>[</code> <i>filename2</i> <code>...]</code> </td>
<td align="left"> For each file specified, renew the tokens matching <i>alias</i> and write out each file using specified format. <br /> <i>alias</i> must be specified. </td></tr>
<tr class="b">
<td align="left"> <code>import</code> <i>base64</i> <br />&#xa0;&#xa0; <code>[-alias</code> <i>alias</i> <code>]</code> <br />&#xa0;&#xa0; <i>filename</i> </td>
<td align="left"> Import a token from a base64 token. <br /> <i>alias</i> will overwrite the service field in the token. </td></tr>
</tbody>
</table></section><section>
<h3><a name="fs"></a><code>fs</code></h3>
<p>This command is documented in the <a href="./FileSystemShell.html">File System Shell Guide</a>. It is a synonym for <code>hdfs dfs</code> when HDFS is in use.</p></section><section>
<h3><a name="gridmix"></a><code>gridmix</code></h3>
<p>Gridmix is a benchmark tool for Hadoop clusters. More information can be found in the <a href="../../hadoop-gridmix/GridMix.html">Gridmix Guide</a>.</p></section><section>
<h3><a name="jar"></a><code>jar</code></h3>
<p>Usage: <code>hadoop jar &lt;jar&gt; [mainClass] args...</code></p>
<p>Runs a jar file.</p>
<p>Use <a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html#jar"><code>yarn jar</code></a> to launch YARN applications instead.</p></section><section>
<h3><a name="jnipath"></a><code>jnipath</code></h3>
<p>Usage: <code>hadoop jnipath</code></p>
<p>Print the computed java.library.path.</p></section><section>
<h3><a name="kerbname"></a><code>kerbname</code></h3>
<p>Usage: <code>hadoop kerbname principal</code></p>
<p>Convert the named principal via the auth_to_local rules to the Hadoop user name.</p>
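<p>With no custom <code>auth_to_local</code> rules configured, the DEFAULT rule maps a principal in the local realm to its short name; that effect can be sketched in plain shell (this sketch assumes the default rule and a single-component principal):</p>

```shell
# The default auth_to_local rule maps user@LOCAL.REALM to the short
# name "user"; stripping everything from the first '@' mimics that.
principal="user@EXAMPLE.COM"
shortname="${principal%%@*}"
echo "$shortname"    # prints: user
```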
<p>Example: <code>hadoop kerbname user@EXAMPLE.COM</code></p></section><section>
<h3><a name="kdiag"></a><code>kdiag</code></h3>
<p>Usage: <code>hadoop kdiag</code></p>
<p>Diagnoses Kerberos problems.</p></section><section>
<h3><a name="key"></a><code>key</code></h3>
<p>Usage: <code>hadoop key &lt;subcommand&gt; [options]</code></p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> create <i>keyname</i> [-cipher <i>cipher</i>] [-size <i>size</i>] [-description <i>description</i>] [-attr <i>attribute=value</i>] [-provider <i>provider</i>] [-strict] [-help] </td>
<td align="left"> Creates a new key for the name specified by the <i>keyname</i> argument within the provider specified by the <code>-provider</code> argument. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. You may specify a cipher with the <code>-cipher</code> argument. The default cipher is currently &#x201c;AES/CTR/NoPadding&#x201d;. The default keysize is 128. You may specify the requested key length using the <code>-size</code> argument. Arbitrary attribute=value style attributes may be specified using the <code>-attr</code> argument. <code>-attr</code> may be specified multiple times, once per attribute. </td></tr>
<tr class="a">
<td align="left"> roll <i>keyname</i> [-provider <i>provider</i>] [-strict] [-help] </td>
<td align="left"> Creates a new version for the specified key within the provider indicated using the <code>-provider</code> argument. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. </td></tr>
<tr class="b">
<td align="left"> delete <i>keyname</i> [-provider <i>provider</i>] [-strict] [-f] [-help] </td>
<td align="left"> Deletes all versions of the key specified by the <i>keyname</i> argument from within the provider specified by <code>-provider</code>. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. The command asks for user confirmation unless <code>-f</code> is specified. </td></tr>
<tr class="a">
<td align="left"> list [-provider <i>provider</i>] [-strict] [-metadata] [-help] </td>
<td align="left"> Displays the keynames contained within a particular provider as configured in core-site.xml or specified with the <code>-provider</code> argument. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. <code>-metadata</code> displays the metadata. </td></tr>
<tr class="b">
<td align="left"> check <i>keyname</i> [-provider <i>provider</i>] [-strict] [-help] </td>
<td align="left"> Checks the password of the <i>keyname</i> contained within a particular provider as configured in core-site.xml or specified with the <code>-provider</code> argument. The <code>-strict</code> flag will cause the command to fail if the provider uses a default password. </td></tr>
</tbody>
</table>
<p>The <code>-help</code> option prints the usage of this command.</p>
<p>Manage keys via the KeyProvider. For details on KeyProviders, see the <a href="../hadoop-hdfs/TransparentEncryption.html">Transparent Encryption Guide</a>.</p>
<p>Providers frequently require that a password or other secret is supplied. If the provider requires a password and is unable to find one, it will use a default password and emit a warning message that the default password is being used. If the <code>-strict</code> flag is supplied, the warning message becomes an error message and the command returns immediately with an error status.</p>
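<p>A typical key life-cycle session might look like the following sketch (the key name and provider path are examples only; this assumes the <code>hadoop</code> CLI is on the PATH):</p>

```shell
# Create a 256-bit key in a local keystore provider, roll it to a new
# version, then list all keys together with their metadata.
PROVIDER="jceks://file/tmp/keystore.jceks"

hadoop key create mykey -size 256 -provider "$PROVIDER"
hadoop key roll mykey -provider "$PROVIDER"
hadoop key list -provider "$PROVIDER" -metadata
```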
<p>NOTE: Some KeyProviders (e.g. org.apache.hadoop.crypto.key.JavaKeyStoreProvider) do not support uppercase key names.</p>
<p>NOTE: Some KeyProviders do not execute a key deletion immediately (e.g. they perform a soft-delete instead, or delay the actual deletion to prevent mistakes). In these cases, one may encounter errors when creating or deleting a key with the same name after deleting it. Please check the underlying KeyProvider for details.</p></section><section>
<h3><a name="kms"></a><code>kms</code></h3>
<p>Usage: <code>hadoop kms</code></p>
<p>Run KMS, the Key Management Server.</p></section><section>
<h3><a name="version"></a><code>version</code></h3>
<p>Usage: <code>hadoop version</code></p>
<p>Prints the version.</p></section><section>
<h3><a name="CLASSNAME"></a><code>CLASSNAME</code></h3>
<p>Usage: <code>hadoop CLASSNAME</code></p>
<p>Runs the class named <code>CLASSNAME</code>. The class must be part of a package.</p></section><section>
<h3><a name="envvars"></a><code>envvars</code></h3>
<p>Usage: <code>hadoop envvars</code></p>
<p>Display computed Hadoop environment variables.</p></section></section><section>
<h2><a name="Administration_Commands"></a>Administration Commands</h2>
<p>Commands useful for administrators of a hadoop cluster.</p><section>
<h3><a name="daemonlog"></a><code>daemonlog</code></h3>
<p>Usage:</p>
<div class="source">
<div class="source">
<pre>hadoop daemonlog -getlevel &lt;host:port&gt; &lt;classname&gt; [-protocol (http|https)]
hadoop daemonlog -setlevel &lt;host:port&gt; &lt;classname&gt; &lt;level&gt; [-protocol (http|https)]
</pre></div></div>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> COMMAND_OPTION </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>-getlevel</code> <i>host:port</i> <i>classname</i> [-protocol (http|https)] </td>
<td align="left"> Prints the log level of the log identified by a qualified <i>classname</i>, in the daemon running at <i>host:port</i>. The <code>-protocol</code> flag specifies the protocol for connection. </td></tr>
<tr class="a">
<td align="left"> <code>-setlevel</code> <i>host:port</i> <i>classname</i> <i>level</i> [-protocol (http|https)] </td>
<td align="left"> Sets the log level of the log identified by a qualified <i>classname</i>, in the daemon running at <i>host:port</i>. The <code>-protocol</code> flag specifies the protocol for connection. </td></tr>
</tbody>
</table>
<p>Dynamically gets or sets the log level of a log identified by a qualified class name in the daemon. By default, the command sends an HTTP request, but this can be overridden by using the argument <code>-protocol https</code> to send an HTTPS request.</p>
<p>Example:</p>
<div class="source">
<div class="source">
<pre>$ bin/hadoop daemonlog -setlevel 127.0.0.1:9870 org.apache.hadoop.hdfs.server.namenode.NameNode DEBUG
$ bin/hadoop daemonlog -getlevel 127.0.0.1:9871 org.apache.hadoop.hdfs.server.namenode.NameNode -protocol https
</pre></div></div>
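<p>Under the hood, requests like the ones above go to the daemon&#x2019;s <code>/logLevel</code> servlet; the GET URL that is effectively issued can be sketched as follows (host, port and class name are examples only):</p>

```shell
# Build the GET URL that "hadoop daemonlog -setlevel" effectively issues
# against the daemon's /logLevel servlet.
host="127.0.0.1:9870"
class="org.apache.hadoop.hdfs.server.namenode.NameNode"
url="http://${host}/logLevel?log=${class}&level=DEBUG"
echo "$url"
# Against a live daemon one could issue:  curl "$url"
```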
<p>Note that the setting is not permanent and will be reset when the daemon is restarted. This command works by sending an HTTP/HTTPS request to the daemon&#x2019;s internal Jetty servlet, so it supports the following daemons:</p>
<ul>
<li>Common
<ul>
<li>key management server</li>
</ul>
</li>
<li>HDFS
<ul>
<li>name node</li>
<li>secondary name node</li>
<li>data node</li>
<li>journal node</li>
<li>HttpFS server</li>
</ul>
</li>
<li>YARN
<ul>
<li>resource manager</li>
<li>node manager</li>
<li>Timeline server</li>
</ul>
</li>
</ul></section></section><section>
<h2><a name="Files"></a>Files</h2><section>
<h3><a name="etc.2Fhadoop.2Fhadoop-env.sh"></a><b>etc/hadoop/hadoop-env.sh</b></h3>
<p>This file stores the global settings used by all Hadoop shell commands.</p></section><section>
<h3><a name="etc.2Fhadoop.2Fhadoop-user-functions.sh"></a><b>etc/hadoop/hadoop-user-functions.sh</b></h3>
<p>This file allows for advanced users to override some shell functionality.</p></section><section>
<h3><a name="a.7E.2F.hadooprc"></a><b>~/.hadooprc</b></h3>
<p>This stores the personal environment for an individual user. It is processed after the hadoop-env.sh and hadoop-user-functions.sh files and can contain the same settings.</p></section></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>