<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-02-24
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT &#x2013; Hadoop Groups Mapping</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230224" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2023-02-24
&nbsp;| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
<li class="none">
<a href="../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop Groups Mapping</h1>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Static_Mapping">Static Mapping</a></li>
<li><a href="#Caching.2FNegative_caching">Caching/Negative caching</a></li>
<li><a href="#LDAP_Groups_Mapping">LDAP Groups Mapping</a>
<ul>
<li><a href="#Bind_user.28s.29">Bind user(s)</a></li></ul></li>
<li><a href="#Multiple_bind_users">Multiple bind users</a>
<ul>
<li><a href="#Active_Directory">Active Directory</a></li>
<li><a href="#POSIX_Groups">POSIX Groups</a></li>
<li><a href="#SSL">SSL</a></li>
<li><a href="#Low_latency_group_mapping_resolution">Low latency group mapping resolution</a></li>
<li><a href="#Configuring_retries_and_multiple_LDAP_servers_with_failover">Configuring retries and multiple LDAP servers with failover</a></li></ul></li>
<li><a href="#Composite_Groups_Mapping">Composite Groups Mapping</a>
<ul>
<li><a href="#Multiple_group_mapping_providers_configuration_sample">Multiple group mapping providers configuration sample</a></li></ul></li></ul>
<section>
<h2><a name="Overview"></a>Overview</h2>
<p>The groups of a user are determined by a group mapping service provider. Hadoop supports various group mapping mechanisms, configured by the <code>hadoop.security.group.mapping</code> property. Some of them, such as <code>JniBasedUnixGroupsMappingWithFallback</code>, use the operating system&#x2019;s group name resolution and require no configuration. Hadoop also supports group mapping through LDAP, and through a composition of LDAP and operating system group name resolution; these require additional configuration. <code>hadoop.security.group.mapping</code> can be one of the following:</p>
<ul>
<li>
<p><b>org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback</b></p>
<p>The default implementation. It will determine if the Java Native Interface (JNI) is available. If JNI is available, the implementation will use the Hadoop native API to resolve the list of groups for a user. If JNI is not available, the shell-based implementation, <code>ShellBasedUnixGroupsMapping</code>, is used.</p>
</li>
<li>
<p><b>org.apache.hadoop.security.JniBasedUnixGroupsNetgroupMappingWithFallback</b></p>
<p>Similar to <code>JniBasedUnixGroupsMappingWithFallback</code>. If JNI is available, it obtains netgroup membership using the Hadoop native API; otherwise uses <code>ShellBasedUnixGroupsNetgroupMapping</code>.</p>
</li>
<li>
<p><b>org.apache.hadoop.security.ShellBasedUnixGroupsMapping</b></p>
<p>This implementation shells out with the <code>bash -c groups</code> command (for a Linux/Unix environment) or the <code>net group</code> command (for a Windows environment) to resolve a list of groups for a user.</p>
</li>
<li>
<p><b>org.apache.hadoop.security.ShellBasedUnixGroupsNetgroupMapping</b></p>
<p>This implementation is similar to <code>ShellBasedUnixGroupsMapping</code>, except that it executes the <code>getent netgroup</code> command to get netgroup membership.</p>
</li>
<li>
<p><b>org.apache.hadoop.security.LdapGroupsMapping</b></p>
<p>An alternate implementation, which connects directly to an LDAP server to resolve the list of groups. However, this provider should only be used if the required groups reside exclusively in LDAP, and are not materialized on the Unix servers. LdapGroupsMapping supports SSL connection and POSIX group semantics. See section <a href="#LDAP_Groups_Mapping">LDAP Groups Mapping</a> for details.</p>
</li>
<li>
<p><b>org.apache.hadoop.security.CompositeGroupsMapping</b></p>
<p>This implementation composes other group mapping providers to determine group membership. It allows existing provider implementations to be combined into a new, composite provider without custom development, in order to handle complex situations. See section <a href="#Composite_Groups_Mapping">Composite Groups Mapping</a> for details.</p>
</li>
</ul>
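<p>The provider is selected via the <code>hadoop.security.group.mapping</code> property in <code>core-site.xml</code>. For example, the following illustrative snippet selects the LDAP-based provider; if the property is left unset, the default <code>JniBasedUnixGroupsMappingWithFallback</code> is used.</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping&lt;/name&gt;
  &lt;value&gt;org.apache.hadoop.security.LdapGroupsMapping&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>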
<p>For HDFS, the mapping of users to groups is performed on the NameNode. Thus, the host system configuration of the NameNode determines the group mappings for the users.</p>
<p>Note that HDFS stores the user and group of a file or directory as strings; there is no conversion from user and group identity numbers as is conventional in Unix.</p></section><section>
<h2><a name="Static_Mapping"></a>Static Mapping</h2>
<p>It is possible to statically map users to groups by defining the mapping in <code>hadoop.user.group.static.mapping.overrides</code> in the format <code>user1=group1,group2;user2=;user3=group2</code>. This property overrides any group mapping service provider. If a user&#x2019;s groups are defined in it, the groups are returned without further lookups; otherwise, the service provider defined in <code>hadoop.security.group.mapping</code> is used to look up the groups. By default, <code>dr.who=;</code> is defined, so the fake user dr.who will not have any groups.</p>
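<p>For example, the following illustrative <code>core-site.xml</code> snippet gives <code>user1</code> two groups, gives <code>user2</code> no groups, and leaves every other user to the configured provider (the user and group names are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.user.group.static.mapping.overrides&lt;/name&gt;
  &lt;value&gt;user1=group1,group2;user2=;&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>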
<h2><a name="Caching.2FNegative_caching"></a>Caching/Negative caching</h2>
<p>Since the group mapping resolution relies on external mechanisms, NameNode performance may be impacted. To reduce the impact of repeated lookups, Hadoop caches the groups returned by the service provider. The cache expiry is configurable via <code>hadoop.security.groups.cache.secs</code>; the default is 300 seconds.</p>
<p>With the default caching implementation, after <code>hadoop.security.groups.cache.secs</code> when the cache entry expires, the next thread to request group membership will query the group mapping service provider to look up the current groups for the user. While this lookup is running, the thread that initiated it will block, while any other threads requesting groups for the same user will retrieve the previously cached values. If the refresh fails, the thread performing the refresh will throw an exception and the process will repeat for the next thread that requests a lookup for that value. If the lookup repeatedly fails, and the cache is not updated, after <code>hadoop.security.groups.cache.secs * 10</code> seconds the cached entry will be evicted and all threads will block until a successful reload is performed.</p>
<p>To avoid any threads blocking when the cached entry expires, set <code>hadoop.security.groups.cache.background.reload</code> to true. This enables a small thread pool of <code>hadoop.security.groups.cache.background.reload.threads</code> threads (3 by default). With this setting, when the cache is queried for an expired entry, the expired result is returned immediately and a task is queued to refresh the cache in the background. If the background refresh fails, a new refresh operation will be queued by the next request to the cache, until <code>hadoop.security.groups.cache.secs * 10</code> seconds have passed, at which point the cached entry will be evicted and all threads will block for that user until a successful reload occurs.</p>
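<p>For example, the following illustrative snippet keeps the default 300 second cache expiry and enables background reloading with the default thread pool size:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.groups.cache.secs&lt;/name&gt;
  &lt;value&gt;300&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.groups.cache.background.reload&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.groups.cache.background.reload.threads&lt;/name&gt;
  &lt;value&gt;3&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>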
<p>To avoid overwhelming the NameNode with lookups for unknown users, Hadoop employs negative caching: if a lookup returns no groups, an empty group list is returned directly instead of performing further group mapping queries. The negative cache expiry is configurable via <code>hadoop.security.groups.negative-cache.secs</code>. The default is 30 seconds, so if the group mapping service provider returns no groups for a user, no lookup will be performed for the same user within 30 seconds.</p>
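<p>The negative cache period can likewise be tuned; the following illustrative snippet sets it to the default value:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.groups.negative-cache.secs&lt;/name&gt;
  &lt;value&gt;30&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>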
<h2><a name="LDAP_Groups_Mapping"></a>LDAP Groups Mapping</h2>
<p>This provider supports LDAP with simple password authentication using the JNDI API. <code>hadoop.security.group.mapping.ldap.url</code> must be set. This is the URL of the LDAP server(s) used for resolving user groups. Multiple LDAP servers may be configured via a comma-separated list.</p>
<p><code>hadoop.security.group.mapping.ldap.base</code> configures the search base for the LDAP connection. This is a distinguished name, and will typically be the root of the LDAP directory. Resolving the groups for a given username first looks up the user, and then looks up the groups for that user. If the directory setup has different user and group search bases, use the <code>hadoop.security.group.mapping.ldap.userbase</code> and <code>hadoop.security.group.mapping.ldap.groupbase</code> configuration properties.</p>
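<p>For example, a minimal illustrative configuration with a single LDAP server and separate user and group search bases (the host name and distinguished names are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.url&lt;/name&gt;
  &lt;value&gt;ldap://ldap.example.com:389&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.base&lt;/name&gt;
  &lt;value&gt;dc=example,dc=com&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.userbase&lt;/name&gt;
  &lt;value&gt;ou=people,dc=example,dc=com&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.groupbase&lt;/name&gt;
  &lt;value&gt;ou=groups,dc=example,dc=com&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>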
<p>It is possible to set a maximum time limit when searching and awaiting a result. Set <code>hadoop.security.group.mapping.ldap.directory.search.timeout</code> to 0 if an infinite wait period is desired. The default is 10,000 milliseconds (10 seconds). This is the limit for each LDAP query. If <code>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</code> is set to a positive value, then the total latency will be bounded by max(recursion depth in LDAP, <code>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</code>) * <code>hadoop.security.group.mapping.ldap.directory.search.timeout</code>.</p>
<p><code>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</code> configures how far to walk up the group hierarchy when resolving groups. By default, with a limit of 0, in order to be considered a member of a group, the user must be an explicit member in LDAP. Otherwise, the lookup will traverse the group hierarchy up to <code>hadoop.security.group.mapping.ldap.search.group.hierarchy.levels</code> levels.</p>
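<p>For instance, the following illustrative snippet keeps the default 10 second per-query timeout and allows group membership to be resolved through one level of nested groups:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.directory.search.timeout&lt;/name&gt;
  &lt;value&gt;10000&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.search.group.hierarchy.levels&lt;/name&gt;
  &lt;value&gt;1&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>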
<p>It is possible to use custom group search filters with different arguments via the <code>hadoop.security.group.mapping.ldap.group.search.filter.pattern</code> configuration. It takes a comma-separated list of attribute names; the values of these attributes are fetched from the user&#x2019;s LDAP entry and substituted into the group search filter in the order they are listed. For example, if the first entry is <code>uid</code>, the value of the <code>uid</code> attribute replaces <code>{0}</code> in the group search filter; the second configured attribute replaces <code>{1}</code>, and so on.</p>
<p>Note: if <code>hadoop.security.group.mapping.ldap.group.search.filter.pattern</code> is configured, the group search is always performed with this filter pattern, irrespective of any other parameters.</p><section>
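<p>As a purely illustrative sketch (the attribute name and filter are hypothetical and must match your directory schema), the following would substitute the value of the user&#x2019;s <code>uid</code> attribute for <code>{0}</code> in the group search filter:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.group.search.filter.pattern&lt;/name&gt;
  &lt;value&gt;uid&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.search.filter.group&lt;/name&gt;
  &lt;value&gt;(&amp;(objectClass=posixGroup)(memberUid={0}))&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>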
<h3><a name="Bind_user.28s.29"></a>Bind user(s)</h3>
<p>If the LDAP server does not support anonymous binds, set the distinguished name of the user to bind in <code>hadoop.security.group.mapping.ldap.bind.user</code>. The path to the file containing the bind user&#x2019;s password is specified in <code>hadoop.security.group.mapping.ldap.bind.password.file</code>. This file should be readable only by the Unix user running the daemons.</p></section></section><section>
<h2><a name="Multiple_bind_users"></a>Multiple bind users</h2>
<p>If multiple bind users are required, they can be specified through <code>hadoop.security.group.mapping.ldap.bind.users</code>. These are aliases of the users to bind as when connecting to the LDAP server. Each alias must then have its distinguished name and password configured. This is useful if the bind user&#x2019;s password has to be reset: if an AuthenticationException is encountered when connecting to LDAP, LdapGroupsMapping will switch to the next bind user&#x2019;s information, cycling back to the first if necessary.</p>
<p>For example, if <code>hadoop.security.group.mapping.ldap.bind.users=alias1,alias2</code>, then the following configuration is valid: <code>hadoop.security.group.mapping.ldap.bind.users.alias1.bind.user=bindUser1</code>, <code>hadoop.security.group.mapping.ldap.bind.users.alias1.bind.password.alias=bindPasswordAlias1</code>, <code>hadoop.security.group.mapping.ldap.bind.users.alias2.bind.user=bindUser2</code>, <code>hadoop.security.group.mapping.ldap.bind.users.alias2.bind.password.alias=bindPasswordAlias2</code>.</p>
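<p>The same example, expressed as an illustrative <code>core-site.xml</code> snippet:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.bind.users&lt;/name&gt;
  &lt;value&gt;alias1,alias2&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.bind.users.alias1.bind.user&lt;/name&gt;
  &lt;value&gt;bindUser1&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.bind.users.alias1.bind.password.alias&lt;/name&gt;
  &lt;value&gt;bindPasswordAlias1&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.bind.users.alias2.bind.user&lt;/name&gt;
  &lt;value&gt;bindUser2&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.bind.users.alias2.bind.password.alias&lt;/name&gt;
  &lt;value&gt;bindPasswordAlias2&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
<section>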
<h3><a name="Active_Directory"></a>Active Directory</h3>
<p>The default configuration supports LDAP group name resolution with an Active Directory server.</p></section><section>
<h3><a name="POSIX_Groups"></a>POSIX Groups</h3>
<p>If the LDAP server supports POSIX group semantics (RFC-2307), Hadoop can perform LDAP group resolution queries to the server by setting both <code>hadoop.security.group.mapping.ldap.search.filter.user</code> to <code>(&amp;(objectClass=posixAccount)(uid={0}))</code> and <code>hadoop.security.group.mapping.ldap.search.filter.group</code> to <code>(objectClass=posixGroup)</code>.</p>
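<p>For example, the following illustrative snippet enables POSIX group lookups:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.search.filter.user&lt;/name&gt;
  &lt;value&gt;(&amp;(objectClass=posixAccount)(uid={0}))&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.search.filter.group&lt;/name&gt;
  &lt;value&gt;(objectClass=posixGroup)&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>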
<h3><a name="SSL"></a>SSL</h3>
<p>To secure the connection, the implementation supports LDAP over SSL (LDAPS). SSL is enabled by setting <code>hadoop.security.group.mapping.ldap.ssl</code> to <code>true</code>. In addition, specify the path to the keystore file for the SSL connection in <code>hadoop.security.group.mapping.ldap.ssl.keystore</code> and the keystore password in <code>hadoop.security.group.mapping.ldap.ssl.keystore.password</code>; in that case, make sure <code>hadoop.security.credential.clear-text-fallback</code> is true. Alternatively, store the keystore password in a file, and point <code>hadoop.security.group.mapping.ldap.ssl.keystore.password.file</code> to that file. For security purposes, this file should be readable only by the Unix user running the daemons, and to prevent a recursive dependency it should be a local file. The first approach, using <code>hadoop.security.group.mapping.ldap.ssl.keystore.password</code>, is highly discouraged because it exposes the password in the configuration file.</p>
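<p>An illustrative sketch of the recommended approach, using a keystore password file rather than an in-configuration password (the paths are placeholders):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.ssl&lt;/name&gt;
  &lt;value&gt;true&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.ssl.keystore&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/conf/ldap-keystore.jks&lt;/value&gt;
&lt;/property&gt;
&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.ssl.keystore.password.file&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/conf/ldap-keystore.password&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section><section>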
<h3><a name="Low_latency_group_mapping_resolution"></a>Low latency group mapping resolution</h3>
<p>Typically, Hadoop resolves a user&#x2019;s group names by making two LDAP queries: the first query gets the user object, and the second query uses the user&#x2019;s Distinguished Name to find the groups. For some LDAP servers, such as Active Directory, the user object returned in the first query also contains the DN of the user&#x2019;s groups in its <code>memberOf</code> attribute, and the name of a group is its Relative Distinguished Name. Therefore, it is possible to infer the user&#x2019;s groups from the first query without sending the second one, and it may reduce group name resolution latency incurred by the second query. If it fails to get group names, it will fall back to the typical two-query scenario and send the second query to get group names. To enable this feature, set <code>hadoop.security.group.mapping.ldap.search.attr.memberof</code> to <code>memberOf</code>, and Hadoop will resolve group names using this attribute in the user object.</p>
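<p>For example, the following illustrative snippet enables this single-query optimization:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.ldap.search.attr.memberof&lt;/name&gt;
  &lt;value&gt;memberOf&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>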
<p>If the LDAP server&#x2019;s certificate is not signed by a well-known certificate authority, specify the path to the truststore in <code>hadoop.security.group.mapping.ldap.ssl.truststore</code>. Similar to the keystore, specify the truststore password file in <code>hadoop.security.group.mapping.ldap.ssl.truststore.password.file</code>.</p></section><section>
<h3><a name="Configuring_retries_and_multiple_LDAP_servers_with_failover"></a>Configuring retries and multiple LDAP servers with failover</h3>
<p>If issues are encountered while retrieving information from the LDAP servers, the request will be retried. To configure the number of retries, use the following configuration:</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
 &lt;name&gt;hadoop.security.group.mapping.ldap.num.attempts&lt;/name&gt;
&lt;value&gt;3&lt;/value&gt;
&lt;description&gt;
This property is the number of attempts to be made for LDAP operations.
If this limit is exceeded, LdapGroupsMapping will return an empty
group list.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
<p>LDAP Groups Mapping also supports configuring multiple LDAP servers and failover if a particular instance is unavailable or misbehaving. The following configuration defines 3 LDAP servers. Additionally, 2 attempts will be made on each server before failing over to the next one, with 6 attempts overall before failing.</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.ldap.url&lt;/name&gt;
&lt;value&gt;ldap://server1,ldap://server2,ldap://server3&lt;/value&gt;
&lt;description&gt;
The URL of the LDAP server(s) to use for resolving user groups when using
the LdapGroupsMapping user to group mapping. Supports configuring multiple
LDAP servers via a comma-separated list.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.ldap.num.attempts&lt;/name&gt;
&lt;value&gt;6&lt;/value&gt;
&lt;description&gt;
This property is the number of attempts to be made for LDAP operations.
If this limit is exceeded, LdapGroupsMapping will return an empty
group list.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.ldap.num.attempts.before.failover&lt;/name&gt;
&lt;value&gt;2&lt;/value&gt;
&lt;description&gt;
This property is the number of attempts to be made for LDAP operations
using a single LDAP instance. If multiple LDAP servers are configured
and this number of failed operations is reached, we will switch to the
next LDAP server. The configuration for the overall number of attempts
will still be respected, failover will thus be performed only if this
property is less than hadoop.security.group.mapping.ldap.num.attempts.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
</section></section><section>
<h2><a name="Composite_Groups_Mapping"></a>Composite Groups Mapping</h2>
<p><code>CompositeGroupsMapping</code> works by enumerating a list of service providers in <code>hadoop.security.group.mapping.providers</code>. It gets groups from each of the providers in the list, one after the other. If <code>hadoop.security.group.mapping.providers.combined</code> is <code>true</code>, the groups returned by all providers are merged; otherwise, the groups from the first successful provider are returned. See the following section for a sample configuration.</p><section>
<h3><a name="Multiple_group_mapping_providers_configuration_sample"></a>Multiple group mapping providers configuration sample</h3>
<p>This sample illustrates a typical use case for <code>CompositeGroupsMapping</code> where Hadoop authentication uses MIT Kerberos which trusts an AD realm. In this case, service principals such as hdfs, mapred, hbase, hive and oozie can be placed in MIT Kerberos, while end users come only from the trusted AD. For the service principals, the <code>ShellBasedUnixGroupsMapping</code> provider can be used to query their groups efficiently, and for end users, the <code>LdapGroupsMapping</code> provider can be used. This avoids having to add group entries in AD for the service principals, which would be required if only the <code>LdapGroupsMapping</code> provider were used. If multiple ADs are involved and trusted by MIT Kerberos, the <code>LdapGroupsMapping</code> provider can be used multiple times with different AD-specific configurations. This sample also shows how to do that. Here are the necessary configurations.</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
 &lt;name&gt;hadoop.security.group.mapping&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.security.CompositeGroupsMapping&lt;/value&gt;
&lt;description&gt;
Class for user to group mapping (get groups for a given user) for ACL, which
makes use of other multiple providers to provide the service.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.providers&lt;/name&gt;
&lt;value&gt;shell4services,ad4usersX,ad4usersY&lt;/value&gt;
&lt;description&gt;
Comma separated names of other providers to provide user to group mapping.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.providers.combined&lt;/name&gt;
&lt;value&gt;true&lt;/value&gt;
&lt;description&gt;
true or false to indicate whether groups from the providers are combined or not. The default value is true.
If true, then all the providers will be tried to get groups and all the groups are combined to return as
the final results. Otherwise, providers are tried one by one in the configured list order, and if any
groups are retrieved from any provider, then the groups will be returned without trying the left ones.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.provider.shell4services&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.security.ShellBasedUnixGroupsMapping&lt;/value&gt;
&lt;description&gt;
Class for group mapping provider named by 'shell4services'. The name can then be referenced
by hadoop.security.group.mapping.providers property.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.provider.ad4usersX&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.security.LdapGroupsMapping&lt;/value&gt;
&lt;description&gt;
Class for group mapping provider named by 'ad4usersX'. The name can then be referenced
by hadoop.security.group.mapping.providers property.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.provider.ad4usersY&lt;/name&gt;
&lt;value&gt;org.apache.hadoop.security.LdapGroupsMapping&lt;/value&gt;
&lt;description&gt;
Class for group mapping provider named by 'ad4usersY'. The name can then be referenced
by hadoop.security.group.mapping.providers property.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.provider.ad4usersX.ldap.url&lt;/name&gt;
&lt;value&gt;ldap://ad-host-for-users-X:389&lt;/value&gt;
&lt;description&gt;
ldap url for the provider named by 'ad4usersX'. Note this property comes from
'hadoop.security.group.mapping.ldap.url'.
&lt;/description&gt;
&lt;/property&gt;
&lt;property&gt;
&lt;name&gt;hadoop.security.group.mapping.provider.ad4usersY.ldap.url&lt;/name&gt;
&lt;value&gt;ldap://ad-host-for-users-Y:389&lt;/value&gt;
&lt;description&gt;
ldap url for the provider named by 'ad4usersY'. Note this property comes from
'hadoop.security.group.mapping.ldap.url'.
&lt;/description&gt;
&lt;/property&gt;
</pre></div></div>
<p>You also need to configure other properties, such as <code>hadoop.security.group.mapping.ldap.bind.password.file</code>, for the LDAP providers in the same way as shown above.</p>
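<p>For instance, following the same provider-scoped naming pattern shown above, a bind password file for the <code>ad4usersX</code> provider would be configured as in the illustrative sketch below (the path is a placeholder):</p>
<div class="source">
<div class="source">
<pre>&lt;property&gt;
  &lt;name&gt;hadoop.security.group.mapping.provider.ad4usersX.ldap.bind.password.file&lt;/name&gt;
  &lt;value&gt;/etc/hadoop/conf/ad4usersX-bind-password.txt&lt;/value&gt;
&lt;/property&gt;
</pre></div></div>
</section></section>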
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>