<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-02-23
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT &#x2013; Hadoop in Secure Mode</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230223" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2023-02-23
&nbsp;| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
<li class="none">
<a href="../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop in Secure Mode</h1>
<ul>
<li><a href="#Introduction">Introduction</a></li>
<li><a href="#Authentication">Authentication</a>
<ul>
<li><a href="#End_User_Accounts">End User Accounts</a></li>
<li><a href="#User_Accounts_for_Hadoop_Daemons">User Accounts for Hadoop Daemons</a></li>
<li><a href="#Kerberos_principals_for_Hadoop_Daemons">Kerberos principals for Hadoop Daemons</a>
<ul>
<li><a href="#HDFS">HDFS</a></li>
<li><a href="#YARN">YARN</a></li>
<li><a href="#MapReduce_JobHistory_Server">MapReduce JobHistory Server</a></li></ul></li>
<li><a href="#Mapping_from_Kerberos_principals_to_OS_user_accounts">Mapping from Kerberos principals to OS user accounts</a></li>
<li><a href="#Example_rules">Example rules</a></li>
<li><a href="#Mapping_from_user_to_group">Mapping from user to group</a></li>
<li><a href="#Proxy_user">Proxy user</a></li>
<li><a href="#Secure_DataNode">Secure DataNode</a></li></ul></li>
<li><a href="#Data_confidentiality">Data confidentiality</a>
<ul>
<li><a href="#Data_Encryption_on_RPC">Data Encryption on RPC</a></li>
<li><a href="#Data_Encryption_on_Block_data_transfer.">Data Encryption on Block data transfer.</a></li>
<li><a href="#Data_Encryption_on_HTTP">Data Encryption on HTTP</a></li></ul></li>
<li><a href="#Configuration">Configuration</a>
<ul>
<li><a href="#Permissions_for_both_HDFS_and_local_fileSystem_paths">Permissions for both HDFS and local fileSystem paths</a></li>
<li><a href="#Common_Configurations">Common Configurations</a></li>
<li><a href="#NameNode">NameNode</a></li>
<li><a href="#Secondary_NameNode">Secondary NameNode</a></li>
<li><a href="#JournalNode">JournalNode</a></li>
<li><a href="#DataNode">DataNode</a></li>
<li><a href="#WebHDFS">WebHDFS</a></li>
<li><a href="#ResourceManager">ResourceManager</a></li>
<li><a href="#NodeManager">NodeManager</a></li>
<li><a href="#Configuration_for_WebAppProxy">Configuration for WebAppProxy</a></li>
<li><a href="#LinuxContainerExecutor">LinuxContainerExecutor</a></li>
<li><a href="#MapReduce_JobHistory_Server">MapReduce JobHistory Server</a></li></ul></li>
<li><a href="#Multihoming">Multihoming</a></li>
<li><a href="#Troubleshooting">Troubleshooting</a></li>
<li><a href="#Troubleshooting_with_KDiag">Troubleshooting with KDiag</a>
<ul>
<li><a href="#Usage">Usage</a>
<ul>
<li><a href="#a--jaas:_Require_a_JAAS_file_to_be_defined_in_java.security.auth.login.config.">--jaas: Require a JAAS file to be defined in java.security.auth.login.config.</a></li>
<li><a href="#a--keylen_.3Clength.3E:_Require_a_minimum_size_for_encryption_keys_supported_by_the_JVM.22.">--keylen &lt;length&gt;: Require a minimum size for encryption keys supported by the JVM&quot;.</a></li>
<li><a href="#a--keytab_.3Ckeytab.3E_--principal_.3Cprincipal.3E:_Log_in_from_a_keytab.">--keytab &lt;keytab&gt; --principal &lt;principal&gt;: Log in from a keytab.</a></li>
<li><a href="#a--nofail_:_Do_not_fail_on_the_first_problem">--nofail : Do not fail on the first problem</a></li>
<li><a href="#a--nologin:_Do_not_attempt_to_log_in.">--nologin: Do not attempt to log in.</a></li>
<li><a href="#a--out_outfile:_Write_output_to_file.">--out outfile: Write output to file.</a></li>
<li><a href="#a--resource_.3Cresource.3E_:_XML_configuration_resource_to_load.">--resource &lt;resource&gt; : XML configuration resource to load.</a></li>
<li><a href="#a--secure:_Fail_if_the_command_is_not_executed_on_a_secure_cluster.">--secure: Fail if the command is not executed on a secure cluster.</a></li>
<li><a href="#a--verifyshortname_.3Cprincipal.3E:_validate_the_short_name_of_a_principal">--verifyshortname &lt;principal&gt;: validate the short name of a principal</a></li></ul></li>
<li><a href="#Example">Example</a></li></ul></li>
<li><a href="#References">References</a></li></ul>
<section>
<h2><a name="Introduction"></a>Introduction</h2>
<p>In its default configuration, Hadoop relies on you to keep attackers away from the cluster by restricting all network access to it. If you want any restrictions on who can remotely access data or submit work, you MUST configure authentication and access control for your Hadoop cluster as described in this document.</p>
<p>When Hadoop is configured to run in secure mode, each Hadoop service and each user must be authenticated by Kerberos.</p>
<p>Forward and reverse host lookup for all service hosts must be configured correctly to allow services to authenticate with each other. Host lookups may be configured using either DNS or <code>/etc/hosts</code> files. Working knowledge of Kerberos and DNS is recommended before attempting to configure Hadoop services in Secure Mode.</p>
<p>Security features of Hadoop consist of <a href="#Authentication">Authentication</a>, <a href="./ServiceLevelAuth.html">Service Level Authorization</a>, <a href="./HttpAuthentication.html">Authentication for Web Consoles</a> and <a href="#Data_confidentiality">Data Confidentiality</a>.</p></section><section>
<h2><a name="Authentication"></a>Authentication</h2><section>
<h3><a name="End_User_Accounts"></a>End User Accounts</h3>
<p>When service level authentication is turned on, end users must authenticate themselves before interacting with Hadoop services. The simplest way is for a user to authenticate interactively using the <a class="externalLink" href="http://web.mit.edu/kerberos/krb5-1.12/doc/user/user_commands/kinit.html" title="MIT Kerberos Documentation of kinit">Kerberos <code>kinit</code> command</a>. Programmatic authentication using Kerberos keytab files may be used when interactive login with <code>kinit</code> is infeasible.</p></section><section>
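<p>For example, an interactive login and a keytab-based (programmatic) login might look like the following; the principal name and keytab path here are illustrative:</p>
<div class="source">
<div class="source">
<pre># interactive login, prompts for the password of the principal
$ kinit alice@REALM.TLD

# non-interactive login using a keytab file
$ kinit -kt /path/to/alice.keytab alice@REALM.TLD

# list the tickets currently held in the credential cache
$ klist
</pre></div></div>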
<h3><a name="User_Accounts_for_Hadoop_Daemons"></a>User Accounts for Hadoop Daemons</h3>
<p>Ensure that HDFS and YARN daemons run as different Unix users, e.g. <code>hdfs</code> and <code>yarn</code>. Also, ensure that the MapReduce JobHistory server runs as a different user, such as <code>mapred</code>.</p>
<p>It&#x2019;s recommended to have them share a Unix group, e.g. <code>hadoop</code>. See also &#x201c;<a href="#Mapping_from_user_to_group">Mapping from user to group</a>&#x201d; for group management.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> User:Group </th>
<th align="left"> Daemons </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> hdfs:hadoop </td>
<td align="left"> NameNode, Secondary NameNode, JournalNode, DataNode </td></tr>
<tr class="a">
<td align="left"> yarn:hadoop </td>
<td align="left"> ResourceManager, NodeManager </td></tr>
<tr class="b">
<td align="left"> mapred:hadoop </td>
<td align="left"> MapReduce JobHistory Server </td></tr>
</tbody>
</table></section><section>
<h3><a name="Kerberos_principals_for_Hadoop_Daemons"></a>Kerberos principals for Hadoop Daemons</h3>
<p>Each Hadoop Service instance must be configured with its Kerberos principal and keytab file location.</p>
<p>The general format of a Service principal is <code>ServiceName/_HOST@REALM.TLD</code>. e.g. <code>dn/_HOST@EXAMPLE.COM</code>.</p>
<p>Hadoop simplifies the deployment of configuration files by allowing the hostname component of the service principal to be specified as the <code>_HOST</code> wildcard. Each service instance will substitute <code>_HOST</code> with its own fully qualified hostname at runtime. This allows administrators to deploy the same set of configuration files on all nodes. However, the keytab files will be different.</p><section>
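<p>For example, the same <code>hdfs-site.xml</code> entries can be deployed unchanged to every NameNode host, and each host resolves <code>_HOST</code> to its own fully qualified hostname at runtime (the keytab path matches the examples used elsewhere in this document):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;dfs.namenode.kerberos.principal&lt;/name&gt;
  &lt;value&gt;nn/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.namenode.keytab.file&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/nn.service.keytab&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>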
<h4><a name="HDFS"></a>HDFS</h4>
<p>The NameNode keytab file, on each NameNode host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/nn.service.keytab
Keytab name: FILE:/etc/security/keytab/nn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
<p>The Secondary NameNode keytab file, on that host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/sn.service.keytab
Keytab name: FILE:/etc/security/keytab/sn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
<p>The DataNode keytab file, on each host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/dn.service.keytab
Keytab name: FILE:/etc/security/keytab/dn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
</section><section>
<h4><a name="YARN"></a>YARN</h4>
<p>The ResourceManager keytab file, on the ResourceManager host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/rm.service.keytab
Keytab name: FILE:/etc/security/keytab/rm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
<p>The NodeManager keytab file, on each host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/nm.service.keytab
Keytab name: FILE:/etc/security/keytab/nm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
</section><section>
<h4><a name="MapReduce_JobHistory_Server"></a>MapReduce JobHistory Server</h4>
<p>The MapReduce JobHistory Server keytab file, on that host, should look like the following:</p>
<div class="source">
<div class="source">
<pre>$ klist -e -k -t /etc/security/keytab/jhs.service.keytab
Keytab name: FILE:/etc/security/keytab/jhs.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
</pre></div></div>
</section></section><section>
<h3><a name="Mapping_from_Kerberos_principals_to_OS_user_accounts"></a>Mapping from Kerberos principals to OS user accounts</h3>
<p>Hadoop maps Kerberos principals to OS user (system) accounts using rules specified by <code>hadoop.security.auth_to_local</code>. How Hadoop evaluates these rules is determined by the setting of <code>hadoop.security.auth_to_local.mechanism</code>.</p>
<p>In the default <code>hadoop</code> mode a Kerberos principal <i>must</i> be matched against a rule that transforms the principal to a simple form, i.e. a user account name without &#x2018;@&#x2019; or &#x2018;/&#x2019;; otherwise the principal will not be authorized and an error will be logged. In <code>MIT</code> mode the rules work in the same way as <code>auth_to_local</code> in the <a class="externalLink" href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>, and the restrictions of <code>hadoop</code> mode do <i>not</i> apply. If you use <code>MIT</code> mode, it is suggested to use the same <code>auth_to_local</code> rules that are specified for your default realm in /etc/krb5.conf and to keep them in sync. In both <code>hadoop</code> and <code>MIT</code> mode the rules are applied (with the exception of <code>DEFAULT</code>) to <i>all</i> principals regardless of their specified realm. Also, note that you should <i>not</i> rely on the <code>auth_to_local</code> rules as an ACL; use proper (OS) mechanisms instead.</p>
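<p>For example, to switch rule evaluation from the default <code>hadoop</code> mode to <code>MIT</code> mode, set the following in <code>core-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.auth_to_local.mechanism&lt;/name&gt;
  &lt;value&gt;MIT&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>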
<p>Possible values for <code>auth_to_local</code> are:</p>
<ul>
<li>
<p><code>RULE:exp</code> The local name will be formulated from exp. The format for exp is <code>[n:string](regexp)s/pattern/replacement/g</code>. The integer n indicates how many components the target principal should have. If this matches, then a string will be formed from string, substituting the realm of the principal for <code>$0</code> and the n&#x2019;th component of the principal for <code>$n</code> (e.g., if the principal was johndoe/admin then <code>[2:$2$1foo]</code> would result in the string <code>adminjohndoefoo</code>). If this string matches regexp, then the <code>s//[g]</code> substitution command will be run over the string. The optional g will cause the substitution to be global over the string, instead of replacing only the first match in the string. As an extension to MIT, Hadoop <code>auth_to_local</code> mapping supports the <b>/L</b> flag that lowercases the returned name.</p>
</li>
<li>
<p><code>DEFAULT</code> Picks the first component of the principal name as the system user name if and only if the realm matches the <code>default_realm</code> (usually defined in /etc/krb5.conf). e.g. The default rule maps the principal <code>host/full.qualified.domain.name@MYREALM.TLD</code> to system user <code>host</code> if the default realm is <code>MYREALM.TLD</code>.</p>
</li>
</ul>
<p>If no rules are specified, Hadoop defaults to using <code>DEFAULT</code>, which is probably <i>not suitable</i> for most clusters.</p>
<p>Please note that Hadoop does not support multiple default realms (as, for example, Heimdal does). Also, Hadoop does not verify, when mapping, whether the local system account actually exists.</p></section><section>
<h3><a name="Example_rules"></a>Example rules</h3>
<p>In a typical cluster HDFS and YARN services will be launched as the system <code>hdfs</code> and <code>yarn</code> users respectively. <code>hadoop.security.auth_to_local</code> can be configured as follows:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
&lt;name&gt;hadoop.security.auth_to_local&lt;/name&gt;
&lt;value&gt;
RULE:[2:$1/$2@$0]([ndj]n/.*@REALM\.TLD)s/.*/hdfs/
RULE:[2:$1/$2@$0]([rn]m/.*@REALM\.TLD)s/.*/yarn/
RULE:[2:$1/$2@$0](jhs/.*@REALM\.TLD)s/.*/mapred/
DEFAULT
&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>This would map any principal <code>nn, dn, jn</code> on any <code>host</code> from realm <code>REALM.TLD</code> to the local system account <code>hdfs</code>. Secondly it would map any principal <code>rm, nm</code> on any <code>host</code> from <code>REALM.TLD</code> to the local system account <code>yarn</code>. Thirdly, it would map the principal <code>jhs</code> on any <code>host</code> from realm <code>REALM.TLD</code> to the local system account <code>mapred</code>. Finally, any principal on any host from the default realm will be mapped to the user component of that principal.</p>
<p>Custom rules can be tested using the <code>hadoop kerbname</code> command. This command allows one to specify a principal and apply Hadoop&#x2019;s current <code>auth_to_local</code> ruleset.</p></section><section>
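<p>For example, to check how the ruleset above maps a NameNode principal (the hostname is illustrative):</p>
<div class="source">
<div class="source">
<pre>$ hadoop kerbname nn/full.qualified.domain.name@REALM.TLD
</pre></div></div>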
<h3><a name="Mapping_from_user_to_group"></a>Mapping from user to group</h3>
<p>The system user to system group mapping mechanism can be configured via <code>hadoop.security.group.mapping</code>. See <a href="GroupsMapping.html">Hadoop Groups Mapping</a> for details.</p>
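<p>For example, to resolve groups from an LDAP server instead of from the operating system, <code>core-site.xml</code> could contain the following; see <a href="GroupsMapping.html">Hadoop Groups Mapping</a> for the LDAP connection settings that must accompany it:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.security.LdapGroupsMapping&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>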
<p>In practice, you need to manage an SSO environment using Kerberos with LDAP for Hadoop in secure mode.</p></section><section>
<h3><a name="Proxy_user"></a>Proxy user</h3>
<p>Some products, such as Apache Oozie, access Hadoop services on behalf of end users and need to be able to impersonate them. See <a href="./Superusers.html">the proxy user documentation</a> for details.</p></section><section>
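<p>For example, to allow an <code>oozie</code> superuser to impersonate members of any group from a single host (the hostname here is illustrative), <code>core-site.xml</code> might contain:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.proxyuser.oozie.hosts&lt;/name&gt;
  &lt;value&gt;oozie-server.example.com&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.proxyuser.oozie.groups&lt;/name&gt;
  &lt;value&gt;*&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>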
<h3><a name="Secure_DataNode"></a>Secure DataNode</h3>
<p>Because the DataNode data transfer protocol does not use the Hadoop RPC framework, DataNodes must authenticate themselves using privileged ports which are specified by <code>dfs.datanode.address</code> and <code>dfs.datanode.http.address</code>. This authentication is based on the assumption that the attacker won&#x2019;t be able to get root privileges on DataNode hosts.</p>
<p>When you execute the <code>hdfs datanode</code> command as root, the server process first binds the privileged ports, then drops privileges and runs as the user account specified by <code>HDFS_DATANODE_SECURE_USER</code>. This startup process uses <a class="externalLink" href="https://commons.apache.org/proper/commons-daemon/jsvc.html" title="Link to Apache Commons Jsvc">the jsvc program</a> installed to <code>JSVC_HOME</code>. You must specify <code>HDFS_DATANODE_SECURE_USER</code> and <code>JSVC_HOME</code> as environment variables on startup (in <code>hadoop-env.sh</code>).</p>
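<p>For example, <code>hadoop-env.sh</code> on each DataNode host might contain the following; the <code>JSVC_HOME</code> path is illustrative and depends on where jsvc is installed:</p>
<div class="source">
<div class="source">
<pre># run the DataNode as this user after the privileged ports have been bound as root
export HDFS_DATANODE_SECURE_USER=hdfs
# location of the jsvc binary used to launch the secure DataNode
export JSVC_HOME=/usr/bin
</pre></div></div>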
<p>As of version 2.6.0, SASL can be used to authenticate the data transfer protocol. In this configuration, it is no longer required for secured clusters to start the DataNode as root using <code>jsvc</code> and bind to privileged ports. To enable SASL on the data transfer protocol, set <code>dfs.data.transfer.protection</code> in hdfs-site.xml. A SASL-enabled DataNode can be started in secure mode in the following two ways:</p>
<ol style="list-style-type: decimal">
<li>Set a non-privileged port for <code>dfs.datanode.address</code>.</li>
<li>Set <code>dfs.http.policy</code> to <code>HTTPS_ONLY</code>, or set <code>dfs.datanode.http.address</code> to a privileged port and make sure the <code>HDFS_DATANODE_SECURE_USER</code> and <code>JSVC_HOME</code> environment variables are specified properly on startup (in <code>hadoop-env.sh</code>).</li>
</ol>
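<p>For example, a SASL-enabled DataNode could use the following <code>hdfs-site.xml</code> settings; the non-privileged port number is illustrative:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;dfs.data.transfer.protection&lt;/name&gt;
  &lt;value&gt;authentication&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.datanode.address&lt;/name&gt;
  &lt;value&gt;0.0.0.0:10019&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.http.policy&lt;/name&gt;
  &lt;value&gt;HTTPS_ONLY&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>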
<p>In order to migrate an existing cluster that used root authentication to start using SASL instead, first ensure that version 2.6.0 or later has been deployed to all cluster nodes as well as any external applications that need to connect to the cluster. Only versions 2.6.0 and later of the HDFS client can connect to a DataNode that uses SASL for authentication of data transfer protocol, so it is vital that all callers have the correct version before migrating. After version 2.6.0 or later has been deployed everywhere, update configuration of any external applications to enable SASL. If an HDFS client is enabled for SASL, then it can connect successfully to a DataNode running with either root authentication or SASL authentication. Changing configuration for all clients guarantees that subsequent configuration changes on DataNodes will not disrupt the applications. Finally, each individual DataNode can be migrated by changing its configuration and restarting. It is acceptable to have a mix of some DataNodes running with root authentication and some DataNodes running with SASL authentication temporarily during this migration period, because an HDFS client enabled for SASL can connect to both.</p></section></section><section>
<h2><a name="Data_confidentiality"></a>Data confidentiality</h2><section>
<h3><a name="Data_Encryption_on_RPC"></a>Data Encryption on RPC</h3>
<p>The data transferred between Hadoop services and clients can be encrypted on the wire. Setting <code>hadoop.rpc.protection</code> to <code>privacy</code> in <code>core-site.xml</code> activates data encryption.</p></section><section>
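<p>For example, in <code>core-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.rpc.protection&lt;/name&gt;
  &lt;value&gt;privacy&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>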
<h3><a name="Data_Encryption_on_Block_data_transfer."></a>Data Encryption on Block data transfer.</h3>
<p>You need to set <code>dfs.encrypt.data.transfer</code> to <code>true</code> in hdfs-site.xml in order to activate data encryption for the data transfer protocol of the DataNode.</p>
<p>Optionally, you may set <code>dfs.encrypt.data.transfer.algorithm</code> to either <code>3des</code> or <code>rc4</code> to choose the specific encryption algorithm. If unspecified, then the configured JCE default on the system is used, which is usually 3DES.</p>
<p>Setting <code>dfs.encrypt.data.transfer.cipher.suites</code> to <code>AES/CTR/NoPadding</code> activates AES encryption. By default, this is unspecified, so AES is not used. When AES is used, the algorithm specified in <code>dfs.encrypt.data.transfer.algorithm</code> is still used during an initial key exchange. The AES key bit length can be configured by setting <code>dfs.encrypt.data.transfer.cipher.key.bitlength</code> to 128, 192 or 256. The default is 128.</p>
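<p>For example, the following <code>hdfs-site.xml</code> settings encrypt the data transfer protocol with AES and a 256-bit key:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;dfs.encrypt.data.transfer&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.encrypt.data.transfer.cipher.suites&lt;/name&gt;
  &lt;value&gt;AES/CTR/NoPadding&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.encrypt.data.transfer.cipher.key.bitlength&lt;/name&gt;
  &lt;value&gt;256&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>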
<p>AES offers the greatest cryptographic strength and the best performance. At this time, 3DES and RC4 have been used more often in Hadoop clusters.</p>
<p>You can also set <code>dfs.encrypt.data.transfer.cipher.suites</code> to <code>SM4/CTR/NoPadding</code> to activate SM4 encryption. By default, this is unspecified. The SM4 key bit length can be configured by setting <code>dfs.encrypt.data.transfer.cipher.key.bitlength</code> to 128, 192 or 256. The default is 128.</p></section><section>
<h3><a name="Data_Encryption_on_HTTP"></a>Data Encryption on HTTP</h3>
<p>Data transfer between web consoles and clients is protected by using SSL (HTTPS). SSL configuration is recommended but not required when configuring Hadoop security with Kerberos.</p>
<p>To enable SSL for the web consoles of the HDFS daemons, set <code>dfs.http.policy</code> to either <code>HTTPS_ONLY</code> or <code>HTTP_AND_HTTPS</code> in hdfs-site.xml. Note that KMS and HttpFS do not respect this parameter. See <a href="../../hadoop-kms/index.html">Hadoop KMS</a> and <a href="../../hadoop-hdfs-httpfs/ServerSetup.html">Hadoop HDFS over HTTP - Server Setup</a> for instructions on enabling KMS over HTTPS and HttpFS over HTTPS, respectively.</p>
<p>To enable SSL for the web consoles of the YARN daemons, set <code>yarn.http.policy</code> to <code>HTTPS_ONLY</code> in yarn-site.xml.</p>
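<p>For example, in <code>yarn-site.xml</code>:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;yarn.http.policy&lt;/name&gt;
  &lt;value&gt;HTTPS_ONLY&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>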
<p>To enable SSL for the web console of the MapReduce JobHistory server, set <code>mapreduce.jobhistory.http.policy</code> to <code>HTTPS_ONLY</code> in mapred-site.xml.</p></section></section><section>
<h2><a name="Configuration"></a>Configuration</h2><section>
<h3><a name="Permissions_for_both_HDFS_and_local_fileSystem_paths"></a>Permissions for both HDFS and local fileSystem paths</h3>
<p>The following table lists various paths on HDFS and local filesystems (on all nodes) and recommended permissions:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Filesystem </th>
<th align="left"> Path </th>
<th align="left"> User:Group </th>
<th align="left"> Permissions </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> local </td>
<td align="left"> <code>dfs.namenode.name.dir</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwx------</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>dfs.datanode.data.dir</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwx------</code> </td></tr>
<tr class="b">
<td align="left"> local </td>
<td align="left"> <code>$HADOOP_LOG_DIR</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwxrwxr-x</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>$YARN_LOG_DIR</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxrwxr-x</code> </td></tr>
<tr class="b">
<td align="left"> local </td>
<td align="left"> <code>yarn.nodemanager.local-dirs</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>yarn.nodemanager.log-dirs</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
<tr class="b">
<td align="left"> local </td>
<td align="left"> container-executor </td>
<td align="left"> root:hadoop </td>
<td align="left"> <code>--Sr-s--*</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>conf/container-executor.cfg</code> </td>
<td align="left"> root:hadoop </td>
<td align="left"> <code>r-------*</code> </td></tr>
<tr class="b">
<td align="left"> hdfs </td>
<td align="left"> <code>/</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
<tr class="a">
<td align="left"> hdfs </td>
<td align="left"> <code>/tmp</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwxrwxrwxt</code> </td></tr>
<tr class="b">
<td align="left"> hdfs </td>
<td align="left"> <code>/user</code> </td>
<td align="left"> hdfs:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
<tr class="a">
<td align="left"> hdfs </td>
<td align="left"> <code>yarn.nodemanager.remote-app-log-dir</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxrwxrwxt</code> </td></tr>
<tr class="b">
<td align="left"> hdfs </td>
<td align="left"> <code>mapreduce.jobhistory.intermediate-done-dir</code> </td>
<td align="left"> mapred:hadoop </td>
<td align="left"> <code>drwxrwxrwxt</code> </td></tr>
<tr class="a">
<td align="left"> hdfs </td>
<td align="left"> <code>mapreduce.jobhistory.done-dir</code> </td>
<td align="left"> mapred:hadoop </td>
<td align="left"> <code>drwxr-x---</code> </td></tr>
</tbody>
</table></section><section>
<h3><a name="Common_Configurations"></a>Common Configurations</h3>
<p>In order to turn on RPC authentication in Hadoop, set the value of the <code>hadoop.security.authentication</code> property to <code>&quot;kerberos&quot;</code>, and set the security-related settings listed below appropriately.</p>
<p>The following properties should be in the <code>core-site.xml</code> of all the nodes in the cluster.</p>
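<p>For example, the first three parameters from the table below could be set in <code>core-site.xml</code> as follows:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.authentication&lt;/name&gt;
  &lt;value&gt;kerberos&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.authorization&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.rpc.protection&lt;/name&gt;
  &lt;value&gt;authentication&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>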
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>hadoop.security.authentication</code> </td>
<td align="left"> <code>kerberos</code> </td>
<td align="left"> <code>simple</code> : No authentication. (default) &#xa0;<code>kerberos</code> : Enable authentication by Kerberos. </td></tr>
<tr class="a">
<td align="left"> <code>hadoop.security.authorization</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Enable <a href="./ServiceLevelAuth.html">RPC service-level authorization</a>. </td></tr>
<tr class="b">
<td align="left"> <code>hadoop.rpc.protection</code> </td>
<td align="left"> <code>authentication</code> </td>
<td align="left"> <code>authentication</code> : authentication only (default); <code>integrity</code> : integrity check in addition to authentication;&#xa0;<code>privacy</code> : data encryption in addition to integrity </td></tr>
<tr class="a">
<td align="left"> <code>hadoop.security.auth_to_local</code> </td>
<td align="left"> <code>RULE:</code><i><code>exp1</code></i>&#xa0;<code>RULE:</code><i><code>exp2</code></i>&#xa0;<i>&#x2026;</i>&#xa0;<code>DEFAULT</code> </td>
<td align="left"> The value is string containing new line characters. See <a class="externalLink" href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos documentation</a> for the format of <i>exp</i>. </td></tr>
<tr class="b">
<td align="left"> <code>hadoop.proxyuser.</code><i>superuser</i><code>.hosts</code> </td>
<td align="left"> </td>
<td align="left"> comma separated hosts from which <i>superuser</i> access are allowed to impersonation. <code>*</code> means wildcard. </td></tr>
<tr class="a">
<td align="left"> <code>hadoop.proxyuser.</code><i>superuser</i><code>.groups</code> </td>
<td align="left"> </td>
<td align="left"> comma separated groups to which users impersonated by <i>superuser</i> belong. <code>*</code> means wildcard. </td></tr>
</tbody>
</table></section><section>
<h3><a name="NameNode"></a>NameNode</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.block.access.token.enable</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Enable HDFS block access tokens for secure operations. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.namenode.kerberos.principal</code> </td>
<td align="left"> <code>nn/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the NameNode. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.namenode.keytab.file</code> </td>
<td align="left"> <code>/etc/security/keytab/nn.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the NameNode. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.namenode.kerberos.internal.spnego.principal</code> </td>
<td align="left"> <code>HTTP/_HOST@REALM.TLD</code> </td>
<td align="left"> The server principal used by the NameNode for web UI SPNEGO authentication. The SPNEGO server principal begins with the prefix <code>HTTP/</code> by convention. If the value is <code>'*'</code>, the web server will attempt to login with every principal specified in the keytab file <code>dfs.web.authentication.kerberos.keytab</code>. For most deployments this can be set to <code>${dfs.web.authentication.kerberos.principal}</code> i.e use the value of <code>dfs.web.authentication.kerberos.principal</code>. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.web.authentication.kerberos.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/spnego.service.keytab</code> </td>
<td align="left"> SPNEGO keytab file for the NameNode. In HA clusters this setting is shared with the Journal Nodes. </td></tr>
</tbody>
</table>
<p>The following settings allow configuring SSL access to the NameNode web UI (optional).</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.http.policy</code> </td>
<td align="left"> <code>HTTP_ONLY</code> or <code>HTTPS_ONLY</code> or <code>HTTP_AND_HTTPS</code> </td>
<td align="left"> <code>HTTPS_ONLY</code> turns off http access. If using SASL to authenticate data transfer protocol instead of running DataNode as root and using privileged ports, then this property must be set to <code>HTTPS_ONLY</code> to guarantee authentication of HTTP servers. (See <code>dfs.data.transfer.protection</code>.) </td></tr>
<tr class="a">
<td align="left"> <code>dfs.namenode.https-address</code> </td>
<td align="left"> <code>0.0.0.0:9871</code> </td>
<td align="left"> This parameter is used in non-HA mode and without federation. See <a href="../hadoop-hdfs/HDFSHighAvailabilityWithNFS.html#Deployment">HDFS High Availability</a> and <a href="../hadoop-hdfs/Federation.html#Federation_Configuration">HDFS Federation</a> for details. </td></tr>
</tbody>
</table></section><section>
<h3><a name="Secondary_NameNode"></a>Secondary NameNode</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.namenode.secondary.http-address</code> </td>
<td align="left"> <code>0.0.0.0:9868</code> </td>
<td align="left"> HTTP web UI address for the Secondary NameNode. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.namenode.secondary.https-address</code> </td>
<td align="left"> <code>0.0.0.0:9869</code> </td>
<td align="left"> HTTPS web UI address for the Secondary NameNode. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.secondary.namenode.keytab.file</code> </td>
<td align="left"> <code>/etc/security/keytab/sn.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the Secondary NameNode. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.secondary.namenode.kerberos.principal</code> </td>
<td align="left"> <code>sn/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the Secondary NameNode. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.secondary.namenode.kerberos.internal.spnego.principal</code> </td>
<td align="left"> <code>HTTP/_HOST@REALM.TLD</code> </td>
<td align="left"> The server principal used by the Secondary NameNode for web UI SPNEGO authentication. The SPNEGO server principal begins with the prefix <code>HTTP/</code> by convention. If the value is <code>'*'</code>, the web server will attempt to login with every principal specified in the keytab file <code>dfs.web.authentication.kerberos.keytab</code>. For most deployments this can be set to <code>${dfs.web.authentication.kerberos.principal}</code> i.e use the value of <code>dfs.web.authentication.kerberos.principal</code>. </td></tr>
</tbody>
</table></section><section>
<h3><a name="JournalNode"></a>JournalNode</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.journalnode.kerberos.principal</code> </td>
<td align="left"> <code>jn/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the JournalNode. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.journalnode.keytab.file</code> </td>
<td align="left"> <code>/etc/security/keytab/jn.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the JournalNode. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.journalnode.kerberos.internal.spnego.principal</code> </td>
<td align="left"> <code>HTTP/_HOST@REALM.TLD</code> </td>
<td align="left"> The server principal used by the JournalNode for web UI SPNEGO authentication when Kerberos security is enabled. The SPNEGO server principal begins with the prefix <code>HTTP/</code> by convention. If the value is <code>'*'</code>, the web server will attempt to login with every principal specified in the keytab file <code>dfs.web.authentication.kerberos.keytab</code>. For most deployments this can be set to <code>${dfs.web.authentication.kerberos.principal}</code> i.e use the value of <code>dfs.web.authentication.kerberos.principal</code>. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.web.authentication.kerberos.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/spnego.service.keytab</code> </td>
<td align="left"> SPNEGO keytab file for the JournalNode. In HA clusters this setting is shared with the Name Nodes. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.journalnode.https-address</code> </td>
<td align="left"> <code>0.0.0.0:8481</code> </td>
<td align="left"> HTTPS web UI address for the JournalNode. </td></tr>
</tbody>
</table></section><section>
<h3><a name="DataNode"></a>DataNode</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.datanode.data.dir.perm</code> </td>
<td align="left"> <code>700</code> </td>
<td align="left"> </td></tr>
<tr class="a">
<td align="left"> <code>dfs.datanode.address</code> </td>
<td align="left"> <code>0.0.0.0:1004</code> </td>
<td align="left"> Secure DataNode must use privileged port in order to assure that the server was started securely. This means that the server must be started via jsvc. Alternatively, this must be set to a non-privileged port if using SASL to authenticate data transfer protocol. (See <code>dfs.data.transfer.protection</code>.) </td></tr>
<tr class="b">
<td align="left"> <code>dfs.datanode.http.address</code> </td>
<td align="left"> <code>0.0.0.0:1006</code> </td>
<td align="left"> Secure DataNode must use privileged port in order to assure that the server was started securely. This means that the server must be started via jsvc. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.datanode.https.address</code> </td>
<td align="left"> <code>0.0.0.0:9865</code> </td>
<td align="left"> HTTPS web UI address for the Data Node. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.datanode.kerberos.principal</code> </td>
<td align="left"> <code>dn/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the DataNode. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.datanode.keytab.file</code> </td>
<td align="left"> <code>/etc/security/keytab/dn.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the DataNode. </td></tr>
<tr class="b">
<td align="left"> <code>dfs.encrypt.data.transfer</code> </td>
<td align="left"> <code>false</code> </td>
<td align="left"> set to <code>true</code> when using data encryption </td></tr>
<tr class="a">
<td align="left"> <code>dfs.encrypt.data.transfer.algorithm</code> </td>
<td align="left"> </td>
<td align="left"> optionally set to <code>3des</code> or <code>rc4</code> when using data encryption to control encryption algorithm </td></tr>
<tr class="b">
<td align="left"> <code>dfs.encrypt.data.transfer.cipher.suites</code> </td>
<td align="left"> </td>
<td align="left"> optionally set to <code>AES/CTR/NoPadding</code> to activate AES encryption when using data encryption </td></tr>
<tr class="a">
<td align="left"> <code>dfs.encrypt.data.transfer.cipher.key.bitlength</code> </td>
<td align="left"> </td>
<td align="left"> optionally set to <code>128</code>, <code>192</code> or <code>256</code> to control key bit length when using AES with data encryption </td></tr>
<tr class="b">
<td align="left"> <code>dfs.data.transfer.protection</code> </td>
<td align="left"> </td>
<td align="left"> <code>authentication</code> : authentication only; <code>integrity</code> : integrity check in addition to authentication; <code>privacy</code> : data encryption in addition to integrity This property is unspecified by default. Setting this property enables SASL for authentication of data transfer protocol. If this is enabled, then <code>dfs.datanode.address</code> must use a non-privileged port, <code>dfs.http.policy</code> must be set to <code>HTTPS_ONLY</code> and the <code>HDFS_DATANODE_SECURE_USER</code> environment variable must be undefined when starting the DataNode process. </td></tr>
</tbody>
</table></section><section>
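<p>As a sketch only, a SASL-based DataNode configuration as described in the notes above might look like the following in <code>hdfs-site.xml</code>; the non-privileged port 9866 is an illustrative choice, not a required value:</p>
<div class="source">
<div class="source">
<pre>&lt;!-- Sketch: SASL data transfer instead of jsvc/privileged ports.
     Requires dfs.http.policy=HTTPS_ONLY and HDFS_DATANODE_SECURE_USER unset. --&gt;
&lt;property&gt;
  &lt;name&gt;dfs.data.transfer.protection&lt;/name&gt;
  &lt;value&gt;privacy&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.http.policy&lt;/name&gt;
  &lt;value&gt;HTTPS_ONLY&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.datanode.address&lt;/name&gt;
  &lt;value&gt;0.0.0.0:9866&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>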
<h3><a name="WebHDFS"></a>WebHDFS</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>dfs.web.authentication.kerberos.principal</code> </td>
<td align="left"> <code>http/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the WebHDFS. In HA clusters this setting is commonly used by the JournalNodes for securing access to the JournalNode HTTP server with SPNEGO. </td></tr>
<tr class="a">
<td align="left"> <code>dfs.web.authentication.kerberos.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/http.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for WebHDFS. In HA clusters this setting is commonly used the JournalNodes for securing access to the JournalNode HTTP server with SPNEGO. </td></tr>
</tbody>
</table></section><section>
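<p>A minimal sketch of the corresponding <code>hdfs-site.xml</code> entries (the realm and keytab path are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;dfs.web.authentication.kerberos.principal&lt;/name&gt;
  &lt;value&gt;http/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;dfs.web.authentication.kerberos.keytab&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/http.service.keytab&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>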
<h3><a name="ResourceManager"></a>ResourceManager</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.principal</code> </td>
<td align="left"> <code>rm/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the ResourceManager. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.resourcemanager.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/rm.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the ResourceManager. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.webapp.https.address</code> </td>
<td align="left"> <code>${yarn.resourcemanager.hostname}:8090</code> </td>
<td align="left"> The https adddress of the RM web application for non-HA. In HA clusters, use <code>yarn.resourcemanager.webapp.https.address.</code><i>rm-id</i> for each ResourceManager. See <a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html#Configurations">ResourceManager High Availability</a> for details. </td></tr>
</tbody>
</table></section><section>
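<p>For example, a sketch of the ResourceManager settings above in <code>yarn-site.xml</code> (the realm and keytab path are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;yarn.resourcemanager.principal&lt;/name&gt;
  &lt;value&gt;rm/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;yarn.resourcemanager.keytab&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/rm.service.keytab&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>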
<h3><a name="NodeManager"></a>NodeManager</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.principal</code> </td>
<td align="left"> <code>nm/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the NodeManager. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.nodemanager.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/nm.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the NodeManager. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.container-executor.class</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</code> </td>
<td align="left"> Use LinuxContainerExecutor. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.nodemanager.linux-container-executor.group</code> </td>
<td align="left"> <code>hadoop</code> </td>
<td align="left"> Unix group of the NodeManager. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.linux-container-executor.path</code> </td>
<td align="left"> <code>/path/to/bin/container-executor</code> </td>
<td align="left"> The path to the executable of Linux container executor. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.nodemanager.webapp.https.address</code> </td>
<td align="left"> <code>0.0.0.0:8044</code> </td>
<td align="left"> The https adddress of the NM web application. </td></tr>
</tbody>
</table></section><section>
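<p>Likewise, a sketch of the NodeManager settings above in <code>yarn-site.xml</code> (the realm and keytab path are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;yarn.nodemanager.principal&lt;/name&gt;
  &lt;value&gt;nm/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;yarn.nodemanager.keytab&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/nm.service.keytab&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;yarn.nodemanager.container-executor.class&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;yarn.nodemanager.linux-container-executor.group&lt;/name&gt;
  &lt;value&gt;hadoop&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>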
<h3><a name="Configuration_for_WebAppProxy"></a>Configuration for WebAppProxy</h3>
<p>The <code>WebAppProxy</code> provides a proxy between the web applications exported by an application and an end user. If security is enabled it will warn users before accessing a potentially unsafe web application. Authentication and authorization using the proxy is handled just like any other privileged web application.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.web-proxy.address</code> </td>
<td align="left"> <code>WebAppProxy</code> host:port for proxy to AM web apps. </td>
<td align="left"> <code>host:port</code> if this is the same as <code>yarn.resourcemanager.webapp.address</code> or it is not defined then the <code>ResourceManager</code> will run the proxy otherwise a standalone proxy server will need to be launched. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.web-proxy.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/web-app.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the WebAppProxy. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.web-proxy.principal</code> </td>
<td align="left"> <code>wap/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the WebAppProxy. </td></tr>
</tbody>
</table></section><section>
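<p>A sketch of the WebAppProxy settings above in <code>yarn-site.xml</code> (the realm and keytab path are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;yarn.web-proxy.principal&lt;/name&gt;
  &lt;value&gt;wap/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;yarn.web-proxy.keytab&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/web-app.service.keytab&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>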
<h3><a name="LinuxContainerExecutor"></a>LinuxContainerExecutor</h3>
<p>A <code>ContainerExecutor</code> is used by the YARN framework to define how a <i>container</i> is launched and controlled.</p>
<p>The following container executors are available in Hadoop YARN:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> ContainerExecutor </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>DefaultContainerExecutor</code> </td>
<td align="left"> The default executor which YARN uses to manage container execution. The container process has the same Unix user as the NodeManager. </td></tr>
<tr class="a">
<td align="left"> <code>LinuxContainerExecutor</code> </td>
<td align="left"> Supported only on GNU/Linux, this executor runs the containers as either the YARN user who submitted the application (when full security is enabled) or as a dedicated user (defaults to nobody) when full security is not enabled. When full security is enabled, this executor requires all user accounts to be created on the cluster nodes where the containers are launched. It uses a <code>setuid</code> executable that is included in the Hadoop distribution. The NodeManager uses this executable to launch and kill containers. The setuid executable switches to the user who has submitted the application and launches or kills the containers. For maximum security, this executor sets up restricted permissions and user/group ownership of local files and directories used by the containers such as the shared objects, jars, intermediate files, log files etc. Particularly note that, because of this, except the application owner and NodeManager, no other user can access any of the local files/directories including those localized as part of the distributed cache. </td></tr>
</tbody>
</table>
<p>To build the LinuxContainerExecutor executable run:</p>
<div class="source">
<div class="source">
<pre> $ mvn package -Dcontainer-executor.conf.dir=/etc/hadoop/
</pre></div></div>
<p>The path passed in <code>-Dcontainer-executor.conf.dir</code> should be the path on the cluster nodes where a configuration file for the setuid executable should be located. The executable should be installed in <code>$HADOOP_YARN_HOME/bin</code>.</p>
<p>The executable must have specific permissions: 6050 or <code>--Sr-s---</code>, user-owned by <code>root</code> (super-user) and group-owned by a special group (e.g. <code>hadoop</code>) of which the NodeManager Unix user is a member and no ordinary application user is. If any application user belongs to this special group, security will be compromised. This special group name should be specified for the configuration property <code>yarn.nodemanager.linux-container-executor.group</code> in both <code>conf/yarn-site.xml</code> and <code>conf/container-executor.cfg</code>.</p>
<p>For example, let&#x2019;s say that the NodeManager is run as user <code>yarn</code> who is part of the groups <code>users</code> and <code>hadoop</code>, any of them being the primary group. Let also be that <code>users</code> has both <code>yarn</code> and another user (application submitter) <code>alice</code> as its members, and <code>alice</code> does not belong to <code>hadoop</code>. Going by the above description, the setuid/setgid executable should be set 6050 or <code>--Sr-s---</code> with user-owner as <code>yarn</code> and group-owner as <code>hadoop</code> which has <code>yarn</code> as its member (and not <code>users</code> which has <code>alice</code> also as its member besides <code>yarn</code>).</p>
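<p>As a hedged illustration of the ownership and mode described above, assuming the binary has been installed at <code>$HADOOP_YARN_HOME/bin/container-executor</code>:</p>
<div class="source">
<div class="source">
<pre># run as root on every NodeManager host; the group must match
# yarn.nodemanager.linux-container-executor.group
chown root:hadoop $HADOOP_YARN_HOME/bin/container-executor
chmod 6050 $HADOOP_YARN_HOME/bin/container-executor
</pre></div></div>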
<p>The LinuxTaskController requires that the paths including and leading up to the directories specified in <code>yarn.nodemanager.local-dirs</code> and <code>yarn.nodemanager.log-dirs</code> be set to 755 permissions, as described above in the table on permissions on directories.</p>
<ul>
<li><code>conf/container-executor.cfg</code></li>
</ul>
<p>The executable requires a configuration file called <code>container-executor.cfg</code> to be present in the configuration directory passed to the mvn target mentioned above.</p>
<p>The configuration file must be owned by the user running NodeManager (user <code>yarn</code> in the above example), group-owned by anyone and should have the permissions 0400 or <code>r--------</code> .</p>
<p>The executable requires the following configuration items to be present in the <code>conf/container-executor.cfg</code> file. The items should be specified as simple key=value pairs, one per line:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.linux-container-executor.group</code> </td>
<td align="left"> <code>hadoop</code> </td>
<td align="left"> Unix group of the NodeManager. The group owner of the <code>container-executor</code> binary should be this group. Should be same as the value with which the NodeManager is configured. This configuration is required for validating the secure access of the <code>container-executor</code> binary. </td></tr>
<tr class="a">
<td align="left"> <code>banned.users</code> </td>
<td align="left"> <code>hdfs,yarn,mapred,bin</code> </td>
<td align="left"> Banned users. </td></tr>
<tr class="b">
<td align="left"> <code>allowed.system.users</code> </td>
<td align="left"> <code>foo,bar</code> </td>
<td align="left"> Allowed system users. </td></tr>
<tr class="a">
<td align="left"> <code>min.user.id</code> </td>
<td align="left"> <code>1000</code> </td>
<td align="left"> Prevent other super-users. </td></tr>
</tbody>
</table>
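<p>Putting the items above together, a sketch of <code>conf/container-executor.cfg</code> (the allowed system users are placeholders):</p>
<div class="source">
<div class="source">
<pre>yarn.nodemanager.linux-container-executor.group=hadoop
banned.users=hdfs,yarn,mapred,bin
allowed.system.users=foo,bar
min.user.id=1000
</pre></div></div>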
<p>To re-cap, here are the local file-system permissions required for the various paths related to the <code>LinuxContainerExecutor</code>:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Filesystem </th>
<th align="left"> Path </th>
<th align="left"> User:Group </th>
<th align="left"> Permissions </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> local </td>
<td align="left"> <code>container-executor</code> </td>
<td align="left"> root:hadoop </td>
<td align="left"> <code>--Sr-s--*</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>conf/container-executor.cfg</code> </td>
<td align="left"> root:hadoop </td>
<td align="left"> <code>r-------*</code> </td></tr>
<tr class="b">
<td align="left"> local </td>
<td align="left"> <code>yarn.nodemanager.local-dirs</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
<tr class="a">
<td align="left"> local </td>
<td align="left"> <code>yarn.nodemanager.log-dirs</code> </td>
<td align="left"> yarn:hadoop </td>
<td align="left"> <code>drwxr-xr-x</code> </td></tr>
</tbody>
</table></section><section>
<h3><a name="MapReduce_JobHistory_Server"></a>MapReduce JobHistory Server</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Parameter </th>
<th align="left"> Value </th>
<th align="left"> Notes </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>mapreduce.jobhistory.address</code> </td>
<td align="left"> MapReduce JobHistory Server <code>host:port</code> </td>
<td align="left"> Default port is 10020. </td></tr>
<tr class="a">
<td align="left"> <code>mapreduce.jobhistory.keytab</code> </td>
<td align="left"> <code>/etc/security/keytab/jhs.service.keytab</code> </td>
<td align="left"> Kerberos keytab file for the MapReduce JobHistory Server. </td></tr>
<tr class="b">
<td align="left"> <code>mapreduce.jobhistory.principal</code> </td>
<td align="left"> <code>jhs/_HOST@REALM.TLD</code> </td>
<td align="left"> Kerberos principal name for the MapReduce JobHistory Server. </td></tr>
</tbody>
</table></section></section><section>
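<p>For example, a sketch of these settings in <code>mapred-site.xml</code>; the hostname <code>jhs.example.com</code>, the realm and the keytab path are placeholders:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;mapreduce.jobhistory.address&lt;/name&gt;
  &lt;value&gt;jhs.example.com:10020&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;mapreduce.jobhistory.keytab&lt;/name&gt;
  &lt;value&gt;/etc/security/keytab/jhs.service.keytab&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;mapreduce.jobhistory.principal&lt;/name&gt;
  &lt;value&gt;jhs/_HOST@REALM.TLD&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>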
<h2><a name="Multihoming"></a>Multihoming</h2>
<p>Multihomed setups where each host has multiple hostnames in DNS (e.g. different hostnames corresponding to public and private network interfaces) may require additional configuration to get Kerberos authentication working. See <a href="../hadoop-hdfs/HdfsMultihoming.html">HDFS Support for Multihomed Networks</a>.</p></section><section>
<h2><a name="Troubleshooting"></a>Troubleshooting</h2>
<p>Kerberos is hard to set up &#x2014;and harder to debug. Common problems are:</p>
<ol style="list-style-type: decimal">
<li>Network and DNS configuration.</li>
<li>Kerberos configuration on hosts (<code>/etc/krb5.conf</code>).</li>
<li>Keytab creation and maintenance.</li>
<li>Environment setup: JVM, user login, system clocks, etc.</li>
</ol>
<p>The fact that the error messages from the JVM are essentially meaningless does not aid in diagnosing and fixing such problems.</p>
<p>Extra debugging information can be enabled for the client and for any service.</p>
<p>Set the environment variable <code>HADOOP_JAAS_DEBUG</code> to <code>true</code>.</p>
<div class="source">
<div class="source">
<pre>export HADOOP_JAAS_DEBUG=true
</pre></div></div>
<p>Edit the <code>log4j.properties</code> file to log Hadoop&#x2019;s security package at <code>DEBUG</code> level.</p>
<div class="source">
<div class="source">
<pre>log4j.logger.org.apache.hadoop.security=DEBUG
</pre></div></div>
<p>Enable JVM-level debugging by setting some system properties.</p>
<div class="source">
<div class="source">
<pre>export HADOOP_OPTS=&quot;-Djava.net.preferIPv4Stack=true -Dsun.security.krb5.debug=true -Dsun.security.spnego.debug=true&quot;
</pre></div></div>
</section><section>
<h2><a name="Troubleshooting_with_KDiag"></a>Troubleshooting with <code>KDiag</code></h2>
<p>Hadoop has a tool to aid validating setup: <code>KDiag</code></p>
<p>It contains a series of probes for the JVM&#x2019;s configuration and the environment, dumps out some system files (<code>/etc/krb5.conf</code>, <code>/etc/ntp.conf</code>), prints out some system state and then attempts to log in to Kerberos as the current user, or a specific principal in a named keytab.</p>
<p>The output of the command can be used for local diagnostics, or forwarded to whoever supports the cluster.</p>
<p>The <code>KDiag</code> command has its own entry point; it is invoked by passing <code>kdiag</code> to the <code>bin/hadoop</code> command. Accordingly, it will display the Kerberos client state of the command used to invoke it.</p>
<div class="source">
<div class="source">
<pre>hadoop kdiag
</pre></div></div>
<p>The command returns a status code of 0 for a successful diagnostics run. This does not imply that Kerberos is working &#x2014;merely that the KDiag command did not identify any problem from its limited set of probes. In particular, as it does not attempt to connect to any remote service, it does not verify that the client is trusted by any service.</p>
<p>If unsuccessful, exit codes are</p>
<ul>
<li>-1: the command failed for an unknown reason</li>
<li>41: Unauthorized (== HTTP&#x2019;s 401). KDiag detected a condition which causes Kerberos to not work. Examine the output to identify the issue.</li>
</ul><section>
<h3><a name="Usage"></a>Usage</h3>
<div class="source">
<div class="source">
<pre>KDiag: Diagnose Kerberos Problems
[-D key=value] : Define a configuration option.
[--jaas] : Require a JAAS file to be defined in java.security.auth.login.config.
[--keylen &lt;keylen&gt;] : Require a minimum size for encryption keys supported by the JVM. Default value : 256.
[--keytab &lt;keytab&gt; --principal &lt;principal&gt;] : Login from a keytab as a specific principal.
[--nofail] : Do not fail on the first problem.
[--nologin] : Do not attempt to log in.
[--out &lt;file&gt;] : Write output to a file.
[--resource &lt;resource&gt;] : Load an XML configuration resource.
[--secure] : Require the hadoop configuration to be secure.
[--verifyshortname &lt;principal&gt;]: Verify the short name of the specific principal does not contain '@' or '/'
</pre></div></div>
<section>
<h4><a name="a--jaas:_Require_a_JAAS_file_to_be_defined_in_java.security.auth.login.config."></a><code>--jaas</code>: Require a JAAS file to be defined in <code>java.security.auth.login.config</code>.</h4>
<p>If <code>--jaas</code> is set, the Java system property <code>java.security.auth.login.config</code> must be set to a JAAS file; this file must exist, be a simple file of non-zero bytes, and readable by the current user. More detailed validation is not performed.</p>
<p>JAAS files are not needed by Hadoop itself, but some services (such as Zookeeper) do require them for secure operation.</p></section><section>
<h4><a name="a--keylen_.3Clength.3E:_Require_a_minimum_size_for_encryption_keys_supported_by_the_JVM.22."></a><code>--keylen &lt;length&gt;</code>: Require a minimum size for encryption keys supported by the JVM&quot;.</h4>
<p>If the JVM does not support this length, the command will fail.</p>
<p>The default value is 256, as needed for the <code>AES256</code> encryption scheme. A JVM without the Java Cryptography Extensions installed does not support such a key length. Kerberos will not work unless configured to use an encryption scheme with a shorter key length.</p></section><section>
<h4><a name="a--keytab_.3Ckeytab.3E_--principal_.3Cprincipal.3E:_Log_in_from_a_keytab."></a><code>--keytab &lt;keytab&gt; --principal &lt;principal&gt;</code>: Log in from a keytab.</h4>
<p>Log in from a keytab as the specific principal.</p>
<ol style="list-style-type: decimal">
<li>The file must contain the specific principal, including any named host. That is, there is no mapping from <code>_HOST</code> to the current hostname.</li>
<li>KDiag will log out and attempt to log back in again. This catches JVM compatibility problems which have existed in the past. (Hadoop&#x2019;s Kerberos support requires use of/introspection into JVM-specific classes).</li>
</ol></section><section>
<h4><a name="a--nofail_:_Do_not_fail_on_the_first_problem"></a><code>--nofail</code> : Do not fail on the first problem</h4>
<p>KDiag will make a best-effort attempt to diagnose all Kerberos problems, rather than stop at the first one.</p>
<p>This is somewhat limited; checks are made in the order in which problems surface (e.g. key length is checked first), so an early failure can trigger many more problems. But it does produce a more detailed report.</p></section><section>
<h4><a name="a--nologin:_Do_not_attempt_to_log_in."></a><code>--nologin</code>: Do not attempt to log in.</h4>
<p>Skip trying to log in. This takes precedence over the <code>--keytab</code> option, and also disables trying to log in to Kerberos as the current kinited user.</p>
<p>This is useful when the KDiag command is being invoked within an application, as it does not set up Hadoop&#x2019;s static security state &#x2014;merely check for some basic Kerberos preconditions.</p></section><section>
<h4><a name="a--out_outfile:_Write_output_to_file."></a><code>--out outfile</code>: Write output to file.</h4>
<div class="source">
<div class="source">
<pre>hadoop kdiag --out out.txt
</pre></div></div>
<p>Much of the diagnostics information comes from the JRE (to <code>stderr</code>) and from Log4j (to <code>stdout</code>). To get all the output, it is best to redirect both these output streams to the same file, and omit the <code>--out</code> option.</p>
<div class="source">
<div class="source">
<pre>hadoop kdiag --keytab zk.service.keytab --principal zookeeper/devix.example.org@REALM &gt; out.txt 2&gt;&amp;1
</pre></div></div>
<p>Even there, the output of the two streams, emitted across multiple threads, can be a bit confusing. It will get easier with practice. Looking at the thread name in the Log4j output to distinguish background threads from the main thread helps at the Hadoop level, but doesn&#x2019;t assist in JVM-level logging.</p></section><section>
<h4><a name="a--resource_.3Cresource.3E_:_XML_configuration_resource_to_load."></a><code>--resource &lt;resource&gt;</code> : XML configuration resource to load.</h4>
<p>This option can be used to load additional XML configuration resources. By default, only the <code>core-default</code> and <code>core-site</code> XML resources are loaded. This helps when additional configuration files contain any Kerberos-related settings.</p>
<div class="source">
<div class="source">
<pre>hadoop kdiag --resource hbase-default.xml --resource hbase-site.xml
</pre></div></div>
<p>For extra logging during the operation, enable the log settings and set the <code>HADOOP_JAAS_DEBUG</code> environment variable to the values listed in &#x201c;Troubleshooting&#x201d;. The JVM options are automatically set in KDiag.</p></section><section>
<h4><a name="a--secure:_Fail_if_the_command_is_not_executed_on_a_secure_cluster."></a><code>--secure</code>: Fail if the command is not executed on a secure cluster.</h4>
<p>That is: if the authentication mechanism of the cluster is explicitly or implicitly set to &#x201c;simple&#x201d;:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
&lt;name&gt;hadoop.security.authentication&lt;/name&gt;
&lt;value&gt;simple&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<p>Needless to say, an application so configured cannot talk to a secure Hadoop cluster.</p></section><section>
<h4><a name="a--verifyshortname_.3Cprincipal.3E:_validate_the_short_name_of_a_principal"></a><code>--verifyshortname &lt;principal&gt;</code>: validate the short name of a principal</h4>
<p>This verifies that the short name of a principal contains neither the <code>&quot;@&quot;</code> nor <code>&quot;/&quot;</code> characters.</p></section></section><section>
<h3><a name="Example"></a>Example</h3>
<div class="source">
<div class="source">
<pre>hadoop kdiag \
--nofail \
--resource hdfs-site.xml --resource yarn-site.xml \
--keylen 1024 \
--keytab zk.service.keytab --principal zookeeper/devix.example.org@REALM
</pre></div></div>
<p>This attempts to perform all diagnostics without failing early, load in the HDFS and YARN XML resources, require a minimum key length of 1024 bits, and log in as the principal <code>zookeeper/devix.example.org@REALM</code>, whose key must be in the keytab <code>zk.service.keytab</code>.</p></section></section><section>
<h2><a name="References"></a>References</h2>
<ol style="list-style-type: decimal">
<li>O&#x2019;Malley O et al. <a class="externalLink" href="https://issues.apache.org/jira/secure/attachment/12428537/security-design.pdf">Hadoop Security Design</a></li>
<li>O&#x2019;Malley O, <a class="externalLink" href="http://www.slideshare.net/oom65/hadoop-security-architecture">Hadoop Security Architecture</a></li>
<li><a class="externalLink" href="http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html">Troubleshooting Kerberos on Java 7</a></li>
<li><a class="externalLink" href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html">Troubleshooting Kerberos on Java 8</a></li>
<li><a class="externalLink" href="http://docs.oracle.com/javase/7/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html">Java 7 Kerberos Requirements</a></li>
<li><a class="externalLink" href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/Troubleshooting.html">Java 8 Kerberos Requirements</a></li>
<li>Loughran S., <a class="externalLink" href="https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/">Hadoop and Kerberos: The Madness beyond the Gate</a></li>
</ol></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>