<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
 | Generated by Apache Maven Doxia at 2023-05-23
 | Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT – Hadoop: YARN Federation</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230523" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="bodyColumn">
<div id="contentBox">
<!---
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

   http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<h1>Hadoop: YARN Federation</h1>
<ul>
<li><a href="#Purpose">Purpose</a></li>
<li><a href="#Architecture">Architecture</a>
<ul>
<li><a href="#YARN_Sub-cluster">YARN Sub-cluster</a></li>
<li><a href="#Router">Router</a></li>
<li><a href="#AMRMProxy">AMRMProxy</a></li>
<li><a href="#Global_Policy_Generator">Global Policy Generator</a></li>
<li><a href="#Federation_State-Store">Federation State-Store</a>
<ul>
<li><a href="#Sub-cluster_Membership">Sub-cluster Membership</a></li>
<li><a href="#Application.E2.80.99s_Home_Sub-cluster">Application’s Home Sub-cluster</a></li></ul></li>
<li><a href="#Federation_Policy_Store">Federation Policy Store</a></li></ul></li>
<li><a href="#Running_Applications_across_Sub-Clusters">Running Applications across Sub-Clusters</a></li>
<li><a href="#Configuration">Configuration</a>
<ul>
<li><a href="#EVERYWHERE:">EVERYWHERE:</a>
<ul>
<li><a href="#State-Store:">State-Store:</a></li>
<li><a href="#Optional:">Optional:</a></li></ul></li>
<li><a href="#ON_RMs:">ON RMs:</a></li>
<li><a href="#ON_ROUTER:">ON ROUTER:</a></li>
<li><a href="#ON_NMs:">ON NMs:</a></li></ul></li>
<li><a href="#Running_a_Sample_Job">Running a Sample Job</a></li></ul>
<section>
<h2><a name="Purpose"></a>Purpose</h2>
<p>YARN is known to scale to thousands of nodes. The scalability of <a href="./YARN.html">YARN</a> is determined by the Resource Manager, and is proportional to the number of nodes, active applications, active containers, and the frequency of heartbeats (of both nodes and applications). Lowering the heartbeat frequency can improve scalability, but is detrimental to utilization (see the old Hadoop 1.x experience). This document describes a federation-based approach to scale a single YARN cluster to tens of thousands of nodes, by federating multiple YARN sub-clusters. The proposed approach is to divide a large (10-100k nodes) cluster into smaller units called sub-clusters, each with its own YARN RM and compute nodes. The federation system will stitch these sub-clusters together and make them appear as one large YARN cluster to the applications. The applications running in this federated environment will see a single massive YARN cluster and will be able to schedule tasks on any node of the federated cluster. Under the hood, the federation system will negotiate with the sub-clusters’ resource managers and provide resources to the application. The goal is to allow an individual job to “span” sub-clusters seamlessly.</p>
<p>This design is structurally scalable, as we bound the number of nodes each RM is responsible for, and appropriate policies will try to ensure that the majority of applications reside within a single sub-cluster, so the number of applications each RM sees is also bounded. This means we can scale almost linearly by simply adding sub-clusters (as very little coordination is needed across them). This architecture can provide very tight enforcement of scheduling invariants within each sub-cluster (it simply inherits from YARN), while continuous rebalancing across sub-clusters will enforce (less strictly) that these properties are also respected at a global level (e.g., if a sub-cluster loses a large number of nodes, we could re-map queues to other sub-clusters to ensure users running on the impaired sub-cluster are not unfairly affected).</p>
<p>Federation is designed as a “layer” atop the existing YARN codebase, with limited changes to the core YARN mechanisms.</p>
<p>Assumptions:</p>
<ul>
<li>We assume reasonably good connectivity across sub-clusters (e.g., we are not looking to federate across datacenters yet, though future investigations of this are not excluded).</li>
<li>We rely on HDFS federation (or equivalently scalable DFS solutions) to take care of the scalability of the storage side.</li>
</ul></section><section>
<h2><a name="Architecture"></a>Architecture</h2>
<p>OSS YARN has been known to scale up to about a few thousand nodes. The proposed architecture leverages the notion of federating a number of such smaller YARN clusters, referred to as sub-clusters, into a larger federated YARN cluster comprising tens of thousands of nodes. The applications running in this federated environment see a unified large YARN cluster and will be able to schedule tasks on any node in the cluster. Under the hood, the federation system will negotiate with the sub-cluster RMs and provide resources to the application. The logical architecture in Figure 1 shows the main components that comprise the federated cluster, which are described below.</p>
<p><img src="./images/federation_architecture.png" alt="YARN Federation Architecture | width=800" /></p><section>
<h3><a name="YARN_Sub-cluster"></a>YARN Sub-cluster</h3>
<p>A sub-cluster is a YARN cluster with up to a few thousand nodes. The exact size of the sub-cluster will be determined considering ease of deployment/maintenance, alignment with network or availability zones, and general best practices.</p>
<p>The sub-cluster YARN RM will run with work-preserving high availability turned on, i.e., we should be able to tolerate YARN RM and NM failures with minimal disruption. If an entire sub-cluster is compromised, external mechanisms will ensure that jobs are resubmitted in a separate sub-cluster (this could eventually be included in the federation design).</p>
<p>The sub-cluster is also the scalability unit in a federated environment. We can scale out the federated environment by adding one or more sub-clusters.</p>
<p><i>Note</i>: by design each sub-cluster is a fully functional YARN RM, and its contribution to the federation can be set to only a fraction of its overall capacity, i.e., a sub-cluster can have a “partial” commitment to the federation, while retaining the ability to give out part of its capacity in a completely local way.</p></section><section>
<h3><a name="Router"></a>Router</h3>
<p>YARN applications are submitted to one of the Routers, which in turn applies a routing policy (obtained from the Policy Store), queries the State Store for the sub-cluster URL, and redirects the application submission request to the appropriate sub-cluster RM. We call the sub-cluster where the job is started the “home sub-cluster”, and we call “secondary sub-clusters” all other sub-clusters a job spans. The Router exposes the ApplicationClientProtocol to the outside world, transparently hiding the presence of multiple RMs. To achieve this, the Router also persists the mapping between the application and its home sub-cluster in the State Store. This allows Routers to be soft-state while supporting user requests cheaply, as any Router can recover the application-to-home-sub-cluster mapping and direct requests to the right RM without broadcasting them. For performance, caching and session stickiness might be advisable. The state of the federation (including applications and nodes) is exposed through the Web UI.</p></section><section>
<h3><a name="AMRMProxy"></a>AMRMProxy</h3>
<p>The AMRMProxy is a key component that allows the application to scale and run across sub-clusters. The AMRMProxy runs on all the NM machines and acts as a proxy to the YARN RM for the AMs by implementing the ApplicationMasterProtocol. Applications are not allowed to communicate with the sub-cluster RMs directly. They are forced by the system to connect only to the AMRMProxy endpoint, which provides transparent access to multiple YARN RMs (by dynamically routing/splitting/merging the communications). At any one time, a job can span one home sub-cluster and multiple secondary sub-clusters, but the policies operating in the AMRMProxy try to limit the footprint of each job to minimize overhead on the scheduling infrastructure (more in the section on scalability/load). The interceptor chain architecture of the AMRMProxy is shown in the figure below.</p>
<p><img src="./images/amrmproxy_architecture.png" alt="Architecture of the AMRMProxy interceptor chain | width=800" /></p>
<p><i>Role of AMRMProxy</i></p>
<ol style="list-style-type: decimal">
<li>Protect the sub-cluster YARN RMs from misbehaving AMs. The AMRMProxy can prevent DDoS attacks by throttling/killing AMs that are asking for too many resources.</li>
<li>Mask the multiple YARN RMs in the cluster, and transparently allow the AM to span sub-clusters. All container allocations are done by the YARN RM framework, which consists of the AMRMProxy fronting the home and other sub-cluster RMs.</li>
<li>Intercept all the requests; thus it can enforce application quotas, which would not be enforceable by a single sub-cluster RM (as each sees only a fraction of the AM requests).</li>
<li>Enforce load-balancing / overflow policies.</li>
</ol></section><section>
<h3><a name="Global_Policy_Generator"></a>Global Policy Generator</h3>
<p>The Global Policy Generator (GPG) oversees the entire federation and ensures that the system is configured and tuned properly at all times. A key design point is that cluster availability does not depend on an always-on GPG. The GPG operates continuously but out-of-band from all cluster operations, and provides us with a unique vantage point that allows it to enforce global invariants, affect load balancing, trigger draining of sub-clusters that will undergo maintenance, etc. More precisely, the GPG will update user capacity allocation-to-subcluster mappings and, more rarely, change the policies that run in Routers, AMRMProxy (and possibly RMs).</p>
<p>In case the GPG is unavailable, cluster operations will continue as of the last time the GPG published policies, and while a long-term unavailability might mean that some of the desirable properties of balance, optimal cluster utilization and global invariants drift away, compute and access to data will not be compromised.</p>
<p><i>NOTE</i>: In the current implementation the GPG is a manual tuning process, simply exposed via a CLI (YARN-3657).</p>
<p>This part of the federation system is part of future work in <a class="externalLink" href="https://issues.apache.org/jira/browse/YARN-5597">YARN-5597</a>.</p></section><section>
<h3><a name="Federation_State-Store"></a>Federation State-Store</h3>
<p>The Federation State defines the additional state that needs to be maintained to loosely couple multiple individual sub-clusters into a single large federated cluster. This includes the following information:</p><section>
<h4><a name="Sub-cluster_Membership"></a>Sub-cluster Membership</h4>
<p>The member YARN RMs continuously heartbeat to the state store to keep alive and publish their current capability/load information. This information is used by the Global Policy Generator (GPG) to make proper policy decisions. It can also be used by Routers to select the best home sub-cluster. This mechanism allows us to dynamically grow/shrink the “cluster fleet” by adding or removing sub-clusters, and it also allows for easy maintenance of each sub-cluster. This is new functionality that needs to be added to the YARN RM, but the mechanisms are well understood as they are similar to individual YARN RM HA.</p></section><section>
<h4><a name="Application.E2.80.99s_Home_Sub-cluster"></a>Application’s Home Sub-cluster</h4>
<p>The sub-cluster on which the Application Master (AM) runs is called the application’s “home sub-cluster”. The AM is not limited to resources from the home sub-cluster but can also request resources from other sub-clusters, referred to as secondary sub-clusters. The federated environment will be configured and tuned periodically such that when an AM is placed on a sub-cluster, it should be able to find most of the resources on the home sub-cluster. Only in certain cases should it need to ask for resources from other sub-clusters.</p></section></section><section>
<h3><a name="Federation_Policy_Store"></a>Federation Policy Store</h3>
<p>The Federation Policy Store is a logically separate store (though it might be backed by the same physical component), which contains information about how applications and resource requests are routed to different sub-clusters. The current implementation provides several policies, ranging from random/hashing/round-robin/priority to more sophisticated ones which account for sub-cluster load and request locality needs.</p></section></section><section>
<h2><a name="Running_Applications_across_Sub-Clusters"></a>Running Applications across Sub-Clusters</h2>
<p>When an application is submitted, the system will determine the most appropriate sub-cluster to run the application on, which we call the application’s home sub-cluster. All the communications from the AM to the RM will be proxied via the AMRMProxy running locally on the AM machine. The AMRMProxy exposes the same ApplicationMasterService protocol endpoint as the YARN RM. The AM can request containers using the locality information exposed by the storage layer. In the ideal case, the application will be placed on a sub-cluster where all the resources and data required by the application are available, but if it does need containers on nodes in other sub-clusters, the AMRMProxy will negotiate with the RMs of those sub-clusters transparently and provide the resources to the application, thereby enabling the application to view the entire federated environment as one massive YARN cluster. The AMRMProxy, Global Policy Generator (GPG) and Router work together to make this happen seamlessly.</p>
<p><img src="./images/federation_sequence_diagram.png" alt="Federation Sequence Diagram | width=800" /></p>
<p>The figure shows a sequence diagram for the following job execution flow:</p>
<ol style="list-style-type: decimal">
<li>The Router receives an application submission request that is compliant with the YARN Application Client Protocol.</li>
<li>The Router interrogates a routing table / policy to choose the “home RM” for the job (the policy configuration is received from the state-store on heartbeat).</li>
<li>The Router queries the membership state to determine the endpoint of the home RM.</li>
<li>The Router then redirects the application submission request to the home RM.</li>
<li>The Router updates the application state with the home sub-cluster identifier.</li>
<li>Once the application is submitted to the home RM, the stock YARN flow is triggered, i.e., the application is added to the scheduler queue and its AM is started in the home sub-cluster, on the first NodeManager that has available resources. a. During this process, the AM environment is modified to indicate the address of the AMRMProxy as the YARN RM to talk to. b. The security tokens are also modified by the NM when launching the AM, so that the AM can only talk to the AMRMProxy. Any future communication from the AM to the YARN RM is mediated by the AMRMProxy.</li>
<li>The AM will then request containers using the locality information exposed by HDFS.</li>
<li>Based on a policy, the AMRMProxy can impersonate the AM on other sub-clusters by submitting an Unmanaged AM (UAM) and forwarding the AM heartbeats to the relevant sub-clusters. a. Federation supports multiple application attempts with AMRMProxy HA. AM containers will have different attempt ids in the home sub-cluster, but the same Unmanaged AM in the secondaries is used across attempts. b. When AMRMProxy HA is enabled, the UAM token is stored in the YARN Registry. In the registerApplicationMaster call of each application attempt, the AMRMProxy fetches the existing UAM tokens from the registry (if any) and re-attaches to the existing UAMs.</li>
<li>The AMRMProxy will use both locality information and a pluggable policy configured in the state-store to decide whether to forward the resource requests received from the AM to the home RM or to one (or more) secondary RMs. In Figure 1, we show the case in which the AMRMProxy decides to forward the request to the secondary RM.</li>
<li>The secondary RM will provide the AMRMProxy with valid container tokens to start a new container on some node in its sub-cluster. This mechanism ensures that each sub-cluster uses its own security tokens and avoids the need for a cluster-wide shared secret to create tokens.</li>
<li>The AMRMProxy forwards the allocation response back to the AM.</li>
<li>The AM starts the container on the target NodeManager (on sub-cluster 2) using the standard YARN protocols.</li>
</ol></section><section>
<h2><a name="Configuration"></a>Configuration</h2>
<p>To configure <code>YARN</code> to use <code>Federation</code>, set the following properties in the <b>conf/yarn-site.xml</b>:</p><section>
<h3><a name="EVERYWHERE:"></a>EVERYWHERE:</h3>
<p>These are common configurations that should appear in the <b>conf/yarn-site.xml</b> on each machine in the federation.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.federation.enabled</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Whether federation is enabled or not. </td></tr>
<tr class="a">
<td align="left"><code>yarn.resourcemanager.cluster-id</code> </td>
<td align="left"> <code><unique-subcluster-id></code> </td>
<td align="left"> The unique sub-cluster identifier for this RM (same as the one used for HA). </td></tr>
</tbody>
</table>
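<p>For instance, a minimal sketch of a federation-enabled <b>conf/yarn-site.xml</b> fragment would look like the following (the sub-cluster id <code>SC-1</code> is a placeholder):</p>
<div class="source">
<div class="source">
<pre><!-- enable federation on every machine in the federation -->
<property>
  <name>yarn.federation.enabled</name>
  <value>true</value>
</property>
<!-- unique id of the sub-cluster this RM belongs to (placeholder) -->
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>SC-1</value>
</property>
</pre></div></div>
<section>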
<h4><a name="State-Store:"></a>State-Store:</h4>
<p>Currently, we support ZooKeeper and SQL based implementations of the state-store.</p>
<p><b>Note:</b> The state-store implementation must always be overridden with one of the implementations below.</p>
<p>ZooKeeper: one must set the ZooKeeper settings for Hadoop:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.federation.state-store.class</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore</code> </td>
<td align="left"> The type of state-store to use. </td></tr>
<tr class="a">
<td align="left"><code>hadoop.zk.address</code> </td>
<td align="left"> <code>host:port</code> </td>
<td align="left"> The address for the ZooKeeper ensemble. </td></tr>
</tbody>
</table>
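<p>As a sketch, the corresponding <b>conf/yarn-site.xml</b> entries would be (the ensemble address <code>zk1:2181</code> is a placeholder):</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.federation.state-store.class</name>
  <value>org.apache.hadoop.yarn.server.federation.store.impl.ZookeeperFederationStateStore</value>
</property>
<!-- placeholder ZooKeeper ensemble address -->
<property>
  <name>hadoop.zk.address</name>
  <value>zk1:2181</value>
</property>
</pre></div></div>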
<p>SQL: one must set up the following parameters:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.federation.state-store.class</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore</code> </td>
<td align="left"> The type of state-store to use. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.state-store.sql.url</code> </td>
<td align="left"> <code>jdbc:mysql://<host>:<port>/FederationStateStore</code> </td>
<td align="left"> For SQLFederationStateStore, the URL of the DB where the state is stored. </td></tr>
<tr class="b">
<td align="left"><code>yarn.federation.state-store.sql.jdbc-class</code> </td>
<td align="left"> <code>com.mysql.jdbc.jdbc2.optional.MysqlDataSource</code> </td>
<td align="left"> For SQLFederationStateStore, the JDBC class to use. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.state-store.sql.username</code> </td>
<td align="left"> <code><dbuser></code> </td>
<td align="left"> For SQLFederationStateStore, the username for the DB connection. </td></tr>
<tr class="b">
<td align="left"><code>yarn.federation.state-store.sql.password</code> </td>
<td align="left"> <code><dbpass></code> </td>
<td align="left"> For SQLFederationStateStore, the password for the DB connection. </td></tr>
</tbody>
</table>
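<p>A sketch of the matching <b>conf/yarn-site.xml</b> fragment for a MySQL-backed state-store would be (host, username and password are placeholders):</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.federation.state-store.class</name>
  <value>org.apache.hadoop.yarn.server.federation.store.impl.SQLFederationStateStore</value>
</property>
<!-- placeholder DB host and credentials -->
<property>
  <name>yarn.federation.state-store.sql.url</name>
  <value>jdbc:mysql://dbhost:3306/FederationStateStore</value>
</property>
<property>
  <name>yarn.federation.state-store.sql.jdbc-class</name>
  <value>com.mysql.jdbc.jdbc2.optional.MysqlDataSource</value>
</property>
<property>
  <name>yarn.federation.state-store.sql.username</name>
  <value>dbuser</value>
</property>
<property>
  <name>yarn.federation.state-store.sql.password</name>
  <value>dbpass</value>
</property>
</pre></div></div>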
<p>We provide scripts for <b>MySQL</b> and <b>Microsoft SQL Server</b>.</p>
<blockquote>
<p>MySQL</p>
</blockquote>
<p>For MySQL, one must download the latest jar version 5.x from <a class="externalLink" href="https://mvnrepository.com/artifact/mysql/mysql-connector-java">MVN Repository</a> and add it to the CLASSPATH. Then the DB schema is created by executing the following SQL scripts in the database:</p>
<ol style="list-style-type: decimal">
<li><b>sbin/FederationStateStore/MySQL/FederationStateStoreDatabase.sql</b></li>
<li><b>sbin/FederationStateStore/MySQL/FederationStateStoreUser.sql</b></li>
<li><b>sbin/FederationStateStore/MySQL/FederationStateStoreTables.sql</b></li>
<li><b>sbin/FederationStateStore/MySQL/FederationStateStoreStoredProcs.sql</b></li>
</ol>
<p>In the same directory we provide scripts to drop the Stored Procedures, the Tables, the User and the Database.</p>
<p><b>Note:</b> FederationStateStoreUser.sql defines a default user/password for the DB; you are <b>highly encouraged</b> to change this to a proper strong password.</p>
<p><b>Supported MySQL versions are 5.7 and above:</b></p>
<ol style="list-style-type: decimal">
<li>MySQL 5.7</li>
<li>MySQL 8.0</li>
</ol>
<blockquote>
<p>Microsoft SQL Server</p>
</blockquote>
<p>For SQL Server, the process is similar, but the JDBC driver is already included. The SQL Server scripts are located in <b>sbin/FederationStateStore/SQLServer/</b>.</p>
<p><b>Supported SQL Server versions are SQL Server 2008 R2 and above:</b></p>
<ol style="list-style-type: decimal">
<li>SQL Server 2008 R2 Enterprise</li>
<li>SQL Server 2012 Enterprise</li>
<li>SQL Server 2016 Enterprise</li>
<li>SQL Server 2017 Enterprise</li>
<li>SQL Server 2019 Enterprise</li>
</ol></section><section>
<h4><a name="Optional:"></a>Optional:</h4>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.federation.failover.enabled</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Whether to retry considering RM failover within each sub-cluster. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.blacklist-subclusters</code> </td>
<td align="left"> <code><subcluster-id></code> </td>
<td align="left"> A list of black-listed sub-clusters, useful to disable a sub-cluster. </td></tr>
<tr class="b">
<td align="left"><code>yarn.federation.policy-manager</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.federation.policies.manager.WeightedLocalityPolicyManager</code> </td>
<td align="left"> The choice of policy manager determines how Applications and ResourceRequests are routed through the system. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.policy-manager-params</code> </td>
<td align="left"> <code><binary></code> </td>
<td align="left"> The payload that configures the policy. In our example, a set of weights for the Router and AMRMProxy policies. This is typically generated by serializing a policy manager that has been configured programmatically, or by populating the state-store with its .json serialized form. </td></tr>
<tr class="b">
<td align="left"><code>yarn.federation.subcluster-resolver.class</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.federation.resolver.DefaultSubClusterResolverImpl</code> </td>
<td align="left"> The class used to resolve which sub-cluster a node belongs to, and which sub-cluster(s) a rack belongs to. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.machine-list</code> </td>
<td align="left"> <code><path of machine-list file></code> </td>
<td align="left"> Path of the machine-list file used by <code>SubClusterResolver</code>. Each line of the file is a node with sub-cluster and rack information. Below is an example: <br /> <br /> node1, subcluster1, rack1 <br /> node2, subcluster2, rack1 <br /> node3, subcluster3, rack2 <br /> node4, subcluster3, rack2 </td></tr>
</tbody>
</table></section></section><section>
<h3><a name="ON_RMs:"></a>ON RMs:</h3>
<p>These are extra configurations that should appear in the <b>conf/yarn-site.xml</b> at each ResourceManager.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.resourcemanager.epoch</code> </td>
<td align="left"> <code><unique-epoch></code> </td>
<td align="left"> The seed value for the epoch. This is used to guarantee uniqueness of container-IDs generated by different RMs. It must therefore be unique among sub-clusters and <code>well-spaced</code> to allow for failures which increment the epoch. Increments of 1000 allow for a large number of sub-clusters and practically ensure a near-zero chance of collisions (a clash will only happen if a container is still alive for 1000 restarts of one RM, while the next RM never restarted, and an app requests more containers). </td></tr>
</tbody>
</table>
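<p>As an illustrative sketch of the <code>well-spaced</code> rule above (the seed values are placeholders), each sub-cluster RM would get its own epoch seed:</p>
<div class="source">
<div class="source">
<pre><!-- on the RM of sub-cluster 1 (placeholder seed) -->
<property>
  <name>yarn.resourcemanager.epoch</name>
  <value>1000</value>
</property>
<!-- on the RM of sub-cluster 2 use 2000, on sub-cluster 3 use 3000, and so on -->
</pre></div></div>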
<p>Optional:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.federation.state-store.heartbeat-interval-secs</code> </td>
<td align="left"> <code>60</code> </td>
<td align="left"> The rate at which RMs report their membership in the federation to the central state-store. </td></tr>
</tbody>
</table></section><section>
<h3><a name="ON_ROUTER:"></a>ON ROUTER:</h3>
<p>These are extra configurations that should appear in the <b>conf/yarn-site.xml</b> at each Router.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.router.bind-host</code> </td>
<td align="left"> <code>0.0.0.0</code> </td>
<td align="left"> The actual address the Router server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the ports specified in yarn.router.*.address respectively. This is most useful for making the Router listen on all interfaces by setting it to 0.0.0.0. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.router.clientrm.interceptor-class.pipeline</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor</code> </td>
<td align="left"> A comma-separated list of interceptor classes to be run at the Router when interfacing with the client. The last step of this pipeline must be the FederationClientInterceptor. </td></tr>
</tbody>
</table>
<blockquote>
<p>Enable ApplicationSubmissionContextInterceptor</p>
</blockquote>
<ul>
<li>If the <code>FederationStateStore</code> is configured with <code>ZooKeeper</code> storage, the app information will be stored in <code>ZooKeeper</code>. If the size of the app information exceeds <code>1MB</code>, <code>ZooKeeper</code> may fail. <code>ApplicationSubmissionContextInterceptor</code> will check the size of the <code>ApplicationSubmissionContext</code>; if the size exceeds the limit (default 1MB), an exception will be thrown.</li>
<li>In that case, an error like the following is reported: <code>The size of the ApplicationSubmissionContext of the application application_123456789_0001 is above the limit. Size = 1.02 MB.</code></li>
<li>
<p>The required configuration is as follows:</p>
</li>
</ul>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.router.clientrm.interceptor-class.pipeline</name>
  <value>org.apache.hadoop.yarn.server.router.clientrm.PassThroughClientRequestInterceptor,
         org.apache.hadoop.yarn.server.router.clientrm.ApplicationSubmissionContextInterceptor,
         org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor</value>
</property>
<property>
  <name>yarn.router.asc-interceptor-max-size</name>
  <value>1MB</value>
</property>
</pre></div></div>
<p>Optional:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"><code>yarn.router.hostname</code> </td>
<td align="left"> <code>0.0.0.0</code> </td>
<td align="left"> Router host name. </td></tr>
<tr class="a">
<td align="left"><code>yarn.router.clientrm.address</code> </td>
<td align="left"> <code>0.0.0.0:8050</code> </td>
<td align="left"> Router client address. </td></tr>
<tr class="b">
<td align="left"><code>yarn.router.webapp.address</code> </td>
<td align="left"> <code>0.0.0.0:8089</code> </td>
<td align="left"> Webapp address at the Router. </td></tr>
<tr class="a">
<td align="left"><code>yarn.router.admin.address</code> </td>
<td align="left"> <code>0.0.0.0:8052</code> </td>
<td align="left"> Admin address at the Router. </td></tr>
<tr class="b">
<td align="left"><code>yarn.router.webapp.https.address</code> </td>
<td align="left"> <code>0.0.0.0:8091</code> </td>
<td align="left"> Secure webapp address at the Router. </td></tr>
<tr class="a">
<td align="left"><code>yarn.router.submit.retry</code> </td>
<td align="left"> <code>3</code> </td>
<td align="left"> The number of retries in the Router before we give up. </td></tr>
<tr class="b">
<td align="left"><code>yarn.router.submit.interval.time</code> </td>
<td align="left"> <code>10ms</code> </td>
<td align="left"> The interval between two retries; the default value is 10ms. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.statestore.max-connections</code> </td>
<td align="left"> <code>10</code> </td>
<td align="left"> This is the maximum number of parallel connections each Router makes to the state-store. </td></tr>
<tr class="b">
<td align="left"><code>yarn.federation.cache-ttl.secs</code> </td>
<td align="left"> <code>60</code> </td>
<td align="left"> The Router caches information, and this is the time to live before the cache is invalidated. </td></tr>
<tr class="a">
<td align="left"><code>yarn.federation.cache.class</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.federation.cache.FederationJCache</code> </td>
<td align="left"> The cache implementation used by the Router; the default implementation is FederationJCache. </td></tr>
<tr class="b">
<td align="left"><code>yarn.router.webapp.interceptor-class.pipeline</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.router.webapp.FederationInterceptorREST</code> </td>
<td align="left"> A comma-separated list of interceptor classes to be run at the Router when interfacing with the client via the REST interface. The last step of this pipeline must be the FederationInterceptorREST. </td></tr>
</tbody>
</table>
<p>Security:</p>
<p>Kerberos is supported in federation.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.router.keytab.file</code> </td>
<td align="left"> </td>
<td align="left"> The keytab file used by the Router to login as its service principal. The principal name is configured with ‘yarn.router.kerberos.principal’. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.router.kerberos.principal</code> </td>
<td align="left"> </td>
<td align="left"> The Router service principal. This is typically set to router/_HOST@REALM.TLD. Each Router will substitute _HOST with its own fully qualified hostname at startup. The _HOST placeholder allows using the same configuration setting on all Routers in the setup. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.router.kerberos.principal.hostname</code> </td>
<td align="left"> </td>
<td align="left"> Optional. The hostname for the Router containing this configuration file. Will be different for each machine. Defaults to the current hostname. </td></tr>
</tbody>
</table>
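<p>A sketch of the Kerberos-related entries in the Router’s <b>conf/yarn-site.xml</b> (the keytab path and realm below are placeholders) would be:</p>
<div class="source">
<div class="source">
<pre><!-- placeholder keytab path -->
<property>
  <name>yarn.router.keytab.file</name>
  <value>/etc/security/keytabs/router.service.keytab</value>
</property>
<!-- placeholder realm; _HOST is substituted at startup -->
<property>
  <name>yarn.router.kerberos.principal</name>
  <value>router/_HOST@EXAMPLE.COM</value>
</property>
</pre></div></div>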
<p>Enabling CORS support:</p>
<p>To enable cross-origin support (CORS) for the YARN Router, please set the following configuration parameters:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th> Property </th>
<th> Example </th>
<th> Description </th></tr>
</thead><tbody>
<tr class="b">
<td> <code>hadoop.http.filter.initializers</code> </td>
<td> <code>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</code> </td>
<td> Optional. Set the filter to HttpCrossOriginFilterInitializer. Configure this parameter in core-site.xml. </td></tr>
<tr class="a">
<td> <code>yarn.router.webapp.cross-origin.enabled</code> </td>
<td> <code>true</code> </td>
<td> Optional. Enable/disable the CORS filter. Configure this parameter in yarn-site.xml. </td></tr>
</tbody>
</table>
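<p>Note that, as the table states, the first property belongs in <b>core-site.xml</b> and the second in <b>yarn-site.xml</b>; a sketch of the two fragments:</p>
<div class="source">
<div class="source">
<pre><!-- core-site.xml -->
<property>
  <name>hadoop.http.filter.initializers</name>
  <value>org.apache.hadoop.security.HttpCrossOriginFilterInitializer</value>
</property>

<!-- yarn-site.xml -->
<property>
  <name>yarn.router.webapp.cross-origin.enabled</name>
  <value>true</value>
</property>
</pre></div></div>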
<p>Cache:</p>
<p>The cache is not enabled by default. When the <code>yarn.federation.cache-ttl.secs</code> parameter is set to a value greater than 0, the cache will be enabled. We currently provide two cache implementations: <code>JCache</code> and <code>GuavaCache</code>.</p>
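<p>For instance, a sketch enabling the default JCache-based cache with a 60-second TTL would be:</p>
<div class="source">
<div class="source">
<pre><!-- a TTL greater than 0 enables the cache -->
<property>
  <name>yarn.federation.cache-ttl.secs</name>
  <value>60</value>
</property>
<property>
  <name>yarn.federation.cache.class</name>
  <value>org.apache.hadoop.yarn.server.federation.cache.FederationJCache</value>
</property>
</pre></div></div>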
<blockquote>
<p>JCache</p>
</blockquote>
<p>We use <code>geronimo-jcache</code>, an implementation of the Java Caching API (JSR-107) specification provided by the Apache Geronimo project. It defines a standardized implementation of the JCache API, allowing developers to use the same API to access different caching implementations. In YARN Federation we use a combination of <code>geronimo-jcache</code> and <code>Ehcache</code>. To use JCache, configure <code>yarn.federation.cache.class</code> to <code>org.apache.hadoop.yarn.server.federation.cache.FederationJCache</code>.</p>
<blockquote>
<p>GuavaCache</p>
</blockquote>
<p>This is a cache implementation based on the Guava framework. To use it, configure <code>yarn.federation.cache.class</code> to <code>org.apache.hadoop.yarn.server.federation.cache.FederationGuavaCache</code>.</p></section><section>
<h3><a name="ON_NMs:"></a>ON NMs:</h3>
<p>These are extra configurations that should appear in the <b>conf/yarn-site.xml</b> at each NodeManager.</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.amrmproxy.enabled</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Whether or not the AMRMProxy is enabled. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.nodemanager.amrmproxy.interceptor-class.pipeline</code> </td>
<td align="left"> <code>org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor</code> </td>
<td align="left"> A comma-separated list of interceptors to be run at the AMRMProxy. For federation, the last step in the pipeline should be the FederationInterceptor. </td></tr>
</tbody>
</table>
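<p>A sketch of the corresponding NodeManager <b>conf/yarn-site.xml</b> fragment would be:</p>
<div class="source">
<div class="source">
<pre><property>
  <name>yarn.nodemanager.amrmproxy.enabled</name>
  <value>true</value>
</property>
<!-- FederationInterceptor must be the last step of the pipeline -->
<property>
  <name>yarn.nodemanager.amrmproxy.interceptor-class.pipeline</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.amrmproxy.FederationInterceptor</value>
</property>
</pre></div></div>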
<p>Optional:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.nodemanager.amrmproxy.ha.enable</code> </td>
<td align="left"> <code>true</code> </td>
<td align="left"> Whether or not AMRMProxy HA is enabled, for multiple application attempt support. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.federation.statestore.max-connections</code> </td>
<td align="left"> <code>1</code> </td>
<td align="left"> The maximum number of parallel connections from each AMRMProxy to the state-store. This value is typically lower than the Router one, since there are many AMRMProxies that could burn through many DB connections quickly. </td></tr>
<tr class="b">
<td align="left"> <code>yarn.federation.cache-ttl.secs</code> </td>
<td align="left"> <code>300</code> </td>
<td align="left"> The time to live for the AMRMProxy cache. Typically larger than at the Router, as the number of AMRMProxies is large, and we want to limit the load to the centralized state-store. </td></tr>
</tbody>
</table></section></section><section>
<h2><a name="Running_a_Sample_Job"></a>Running a Sample Job</h2>
<p>In order to submit jobs to a federation cluster, one must create a separate set of configs for the client from which jobs will be submitted. In these, the <b>conf/yarn-site.xml</b> should have the following additional configurations:</p>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Example </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.address</code> </td>
<td align="left"> <code><router_host>:8050</code> </td>
<td align="left"> Redirects jobs launched at the client to the Router’s client RM port. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.resourcemanager.scheduler.address</code> </td>
<td align="left"> <code>localhost:8049</code> </td>
<td align="left"> Redirects jobs to the federation AMRMProxy port. </td></tr>
</tbody>
</table>
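<p>As a sketch, the client-side <b>conf/yarn-site.xml</b> fragment would be (the Router host <code>router1</code> is a placeholder):</p>
<div class="source">
<div class="source">
<pre><!-- placeholder Router host; point the client at the Router's client RM port -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>router1:8050</value>
</property>
<!-- the AMRMProxy runs locally on the AM machine -->
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>localhost:8049</value>
</property>
</pre></div></div>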
<p>Any YARN job for the cluster can be submitted from the client configurations described above. In order to launch a job through federation, first start up all the clusters involved in the federation as described <a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">here</a>. Next, start up the Router on the Router machine with the following command:</p>
<div class="source">
<div class="source">
<pre>  $HADOOP_HOME/bin/yarn --daemon start router
</pre></div></div>
<p>Now with $HADOOP_CONF_DIR pointing to the client configurations folder described above, run your job the usual way. The configurations in that folder will direct the job to the Router’s client RM port, where the Router should be listening after being started. Here is an example run of a Pi job on a federation cluster from the client:</p>
<div class="source">
<div class="source">
<pre>  $HADOOP_HOME/bin/yarn jar hadoop-mapreduce-examples-3.0.0.jar pi 16 1000
</pre></div></div>
<p>This job is submitted to the Router which, as described above, uses a generated policy from the <a href="#Global_Policy_Generator">GPG</a> to pick a home RM to which the job is submitted.</p>
<p>The output from this particular example job should be something like:</p>
<div class="source">
<div class="source">
<pre>  2017-07-13 16:29:25,055 INFO mapreduce.Job: Job job_1499988226739_0001 running in uber mode : false
  2017-07-13 16:29:25,056 INFO mapreduce.Job:  map 0% reduce 0%
  2017-07-13 16:29:33,131 INFO mapreduce.Job:  map 38% reduce 0%
  2017-07-13 16:29:39,176 INFO mapreduce.Job:  map 75% reduce 0%
  2017-07-13 16:29:45,217 INFO mapreduce.Job:  map 94% reduce 0%
  2017-07-13 16:29:46,228 INFO mapreduce.Job:  map 100% reduce 100%
  2017-07-13 16:29:46,235 INFO mapreduce.Job: Job job_1499988226739_0001 completed successfully
  .
  .
  .
  Job Finished in 30.586 seconds
  Estimated value of Pi is 3.14250000......
</pre></div></div>
<p>The state of the job can also be tracked on the Router Web UI at <code>routerhost:8089</code>. Note that no change in the code or recompilation of the input jar was required to use federation. The output of this job is exactly the same as it would be when run without federation. To get the full benefit of federation, use a large enough number of mappers that more than one cluster is required; that number happens to be 16 in the case of the above example.</p></section>
</div>
</div>
</body>
</html>