<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<!--
| Generated by Apache Maven Doxia at 2023-02-22
| Rendered using Apache Maven Stylus Skin 1.5
-->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title>Apache Hadoop 3.4.0-SNAPSHOT &#x2013; ResourceManager Restart</title>
<style type="text/css" media="all">
@import url("./css/maven-base.css");
@import url("./css/maven-theme.css");
@import url("./css/site.css");
</style>
<link rel="stylesheet" href="./css/print.css" type="text/css" media="print" />
<meta name="Date-Revision-yyyymmdd" content="20230222" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body class="composite">
<div id="banner">
<a href="http://hadoop.apache.org/" id="bannerLeft">
<img src="http://hadoop.apache.org/images/hadoop-logo.jpg" alt="" />
</a>
<a href="http://www.apache.org/" id="bannerRight">
<img src="http://www.apache.org/images/asf_logo_wide.png" alt="" />
</a>
<div class="clear">
<hr/>
</div>
</div>
<div id="breadcrumbs">
<div class="xright"> <a href="http://wiki.apache.org/hadoop" class="externalLink">Wiki</a>
|
<a href="https://gitbox.apache.org/repos/asf/hadoop.git" class="externalLink">git</a>
|
<a href="http://hadoop.apache.org/" class="externalLink">Apache Hadoop</a>
&nbsp;| Last Published: 2023-02-22
&nbsp;| Version: 3.4.0-SNAPSHOT
</div>
<div class="clear">
<hr/>
</div>
</div>
<div id="leftColumn">
<div id="navcolumn">
<h5>General</h5>
<ul>
<li class="none">
<a href="../../index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SingleCluster.html">Single Node Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ClusterSetup.html">Cluster Setup</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CommandsManual.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FileSystemShell.html">FileSystem Shell</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Compatibility.html">Compatibility Specification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DownstreamDev.html">Downstream Developer's Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AdminCompatibilityGuide.html">Admin Compatibility Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/InterfaceClassification.html">Interface Classification</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/filesystem/index.html">FileSystem Specification</a>
</li>
</ul>
<h5>Common</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CLIMiniCluster.html">CLI Mini Cluster</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/FairCallQueue.html">Fair Call Queue</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/NativeLibraries.html">Native Libraries</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Superusers.html">Proxy User</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/RackAwareness.html">Rack Awareness</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/SecureMode.html">Secure Mode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/ServiceLevelAuth.html">Service Level Authorization</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/HttpAuthentication.html">HTTP Authentication</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/CredentialProviderAPI.html">Credential Provider API</a>
</li>
<li class="none">
<a href="../../hadoop-kms/index.html">Hadoop KMS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Tracing.html">Tracing</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellGuide.html">Unix Shell Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/registry/index.html">Registry</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/AsyncProfilerServlet.html">Async Profiler</a>
</li>
</ul>
<h5>HDFS</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">User Guide</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html">NameNode HA With QJM</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithNFS.html">NameNode HA With NFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ObserverNameNode.html">Observer NameNode</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/Federation.html">Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFs.html">ViewFs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ViewFsOverloadScheme.html">ViewFsOverloadScheme</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsSnapshots.html">Snapshots</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsEditsViewer.html">Edits Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsImageViewer.html">Image Viewer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsPermissionsGuide.html">Permissions and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsQuotaAdminGuide.html">Quotas and HDFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/LibHdfs.html">libhdfs (C API)</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/WebHDFS.html">WebHDFS (REST API)</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/index.html">HttpFS</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ShortCircuitLocalReads.html">Short Circuit Local Reads</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/CentralizedCacheManagement.html">Centralized Cache Management</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html">NFS Gateway</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsRollingUpgrade.html">Rolling Upgrade</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ExtendedAttributes.html">Extended Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html">Transparent Encryption</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html">Multihoming</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html">Storage Policies</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/MemoryStorage.html">Memory Storage Support</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/SLGUserGuide.html">Synthetic Load Generator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSErasureCoding.html">Erasure Coding</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HDFSDiskbalancer.html">Disk Balancer</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsUpgradeDomain.html">Upgrade Domain</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsDataNodeAdminGuide.html">DataNode Admin</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/HDFSRouterFederation.html">Router Federation</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/HdfsProvidedStorage.html">Provided Storage</a>
</li>
</ul>
<h5>MapReduce</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html">Tutorial</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html">Compatibility with 1.x</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/EncryptedShuffle.html">Encrypted Shuffle</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/PluggableShuffleAndPluggableSort.html">Pluggable Shuffle/Sort</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/DistributedCacheDeploy.html">Distributed Cache Deploy</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/SharedCacheSupport.html">Support for YARN Shared Cache</a>
</li>
</ul>
<h5>MapReduce REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapredAppMasterRest.html">MR Application Master</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-hs/HistoryServerRest.html">MR History Server</a>
</li>
</ul>
<h5>YARN</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YARN.html">Architecture</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnCommands.html">Commands Reference</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html">Capacity Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/FairScheduler.html">Fair Scheduler</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRestart.html">ResourceManager Restart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerHA.html">ResourceManager HA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceModel.html">Resource Model</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeLabel.html">Node Labels</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeAttributes.html">Node Attributes</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebApplicationProxy.html">Web Application Proxy</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html">Timeline Service V.2</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WritingYarnApplications.html">Writing YARN Applications</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnApplicationSecurity.html">YARN Application Security</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManager.html">NodeManager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/DockerContainers.html">Running Applications in Docker Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/RuncContainers.html">Running Applications in runC Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerCgroups.html">Using CGroups</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SecureContainer.html">Secure Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ReservationSystem.html">Reservation System</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/GracefulDecommission.html">Graceful Decommission</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/OpportunisticContainers.html">Opportunistic Containers</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/Federation.html">YARN Federation</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/SharedCache.html">Shared Cache</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingGpus.html">Using GPU</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/UsingFPGA.html">Using FPGA</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/PlacementConstraints.html">Placement Constraints</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/YarnUI2.html">YARN UI2</a>
</li>
</ul>
<h5>YARN REST APIs</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html">Introduction</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html">Resource Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/NodeManagerRest.html">Node Manager</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServer.html#Timeline_Server_REST_API_v1">Timeline Server</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/TimelineServiceV2.html#Timeline_Service_v.2_REST_API">Timeline Service V.2</a>
</li>
</ul>
<h5>YARN Service</h5>
<ul>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Overview.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/QuickStart.html">QuickStart</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/Concepts.html">Concepts</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/YarnServiceAPI.html">Yarn Service API</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/ServiceDiscovery.html">Service Discovery</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-site/yarn-service/SystemServices.html">System Services</a>
</li>
</ul>
<h5>Hadoop Compatible File Systems</h5>
<ul>
<li class="none">
<a href="../../hadoop-aliyun/tools/hadoop-aliyun/index.html">Aliyun OSS</a>
</li>
<li class="none">
<a href="../../hadoop-aws/tools/hadoop-aws/index.html">Amazon S3</a>
</li>
<li class="none">
<a href="../../hadoop-azure/index.html">Azure Blob Storage</a>
</li>
<li class="none">
<a href="../../hadoop-azure-datalake/index.html">Azure Data Lake Storage</a>
</li>
<li class="none">
<a href="../../hadoop-cos/cloud-storage/index.html">Tencent COS</a>
</li>
<li class="none">
<a href="../../hadoop-huaweicloud/cloud-storage/index.html">Huaweicloud OBS</a>
</li>
</ul>
<h5>Auth</h5>
<ul>
<li class="none">
<a href="../../hadoop-auth/index.html">Overview</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Examples.html">Examples</a>
</li>
<li class="none">
<a href="../../hadoop-auth/Configuration.html">Configuration</a>
</li>
<li class="none">
<a href="../../hadoop-auth/BuildingIt.html">Building</a>
</li>
</ul>
<h5>Tools</h5>
<ul>
<li class="none">
<a href="../../hadoop-streaming/HadoopStreaming.html">Hadoop Streaming</a>
</li>
<li class="none">
<a href="../../hadoop-archives/HadoopArchives.html">Hadoop Archives</a>
</li>
<li class="none">
<a href="../../hadoop-archive-logs/HadoopArchiveLogs.html">Hadoop Archive Logs</a>
</li>
<li class="none">
<a href="../../hadoop-distcp/DistCp.html">DistCp</a>
</li>
<li class="none">
<a href="../../hadoop-federation-balance/HDFSFederationBalance.html">HDFS Federation Balance</a>
</li>
<li class="none">
<a href="../../hadoop-gridmix/GridMix.html">GridMix</a>
</li>
<li class="none">
<a href="../../hadoop-rumen/Rumen.html">Rumen</a>
</li>
<li class="none">
<a href="../../hadoop-resourceestimator/ResourceEstimator.html">Resource Estimator Service</a>
</li>
<li class="none">
<a href="../../hadoop-sls/SchedulerLoadSimulator.html">Scheduler Load Simulator</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Benchmarking.html">Hadoop Benchmarking</a>
</li>
<li class="none">
<a href="../../hadoop-dynamometer/Dynamometer.html">Dynamometer</a>
</li>
</ul>
<h5>Reference</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/release/">Changelog and Release Notes</a>
</li>
<li class="none">
<a href="../../api/index.html">Java API docs</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/UnixShellAPI.html">Unix Shell API</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/Metrics.html">Metrics</a>
</li>
</ul>
<h5>Configuration</h5>
<ul>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/core-default.xml">core-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs/hdfs-default.xml">hdfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-hdfs-rbf/hdfs-rbf-default.xml">hdfs-rbf-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml">mapred-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-yarn/hadoop-yarn-common/yarn-default.xml">yarn-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-kms/kms-default.html">kms-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-hdfs-httpfs/httpfs-default.html">httpfs-default.xml</a>
</li>
<li class="none">
<a href="../../hadoop-project-dist/hadoop-common/DeprecatedProperties.html">Deprecated Properties</a>
</li>
</ul>
<a href="http://maven.apache.org/" title="Built by Maven" class="poweredBy">
<img alt="Built by Maven" src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<div id="bodyColumn">
<div id="contentBox">
<!---
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>ResourceManager Restart</h1>
<ul>
<li><a href="#Overview">Overview</a></li>
<li><a href="#Feature">Feature</a></li>
<li><a href="#Configurations">Configurations</a>
<ul>
<li><a href="#Enable_RM_Restart">Enable RM Restart</a></li>
<li><a href="#Configure_the_state-store_for_persisting_the_RM_state">Configure the state-store for persisting the RM state</a></li>
<li><a href="#How_to_choose_the_state-store_implementation">How to choose the state-store implementation</a></li>
<li><a href="#Configurations_for_Hadoop_FileSystem_based_state-store_implementation">Configurations for Hadoop FileSystem based state-store implementation</a></li>
<li><a href="#Configurations_for_ZooKeeper_based_state-store_implementation">Configurations for ZooKeeper based state-store implementation</a></li>
<li><a href="#Configurations_for_LevelDB_based_state-store_implementation">Configurations for LevelDB based state-store implementation</a></li>
<li><a href="#Configurations_for_work-preserving_RM_recovery">Configurations for work-preserving RM recovery</a></li></ul></li>
<li><a href="#Notes">Notes</a></li>
<li><a href="#Sample_Configurations">Sample Configurations</a></li></ul>
<section>
<h2><a name="Overview"></a>Overview</h2>
<p>ResourceManager is the central authority that manages resources and schedules applications running on YARN. Hence, it is potentially a single point of failure in an Apache YARN cluster. This document gives an overview of ResourceManager Restart, a feature that enhances ResourceManager to keep functioning across restarts and also makes ResourceManager down-time invisible to end-users.</p>
<p>There are two types of restart for ResourceManager:</p>
<ul>
<li>
<p><b>Non-work-preserving RM restart</b>: This restart enhances RM to persist application/attempt state and other credential information in a pluggable state-store. On restart, RM reloads this information from the state-store and re-kicks the previously running applications. Users are not required to re-submit the applications.</p>
</li>
<li>
<p><b>Work-preserving RM restart</b>: This focuses on re-constructing the running state of RM by combining the container status reported by NodeManagers and the container requests from ApplicationMasters on restart. The key difference from non-work-preserving RM restart is that previously running applications will not be killed after RM restarts, so applications will not lose their work because of an RM outage.</p>
</li>
</ul></section><section>
<h2><a name="Feature"></a>Feature</h2>
<ul>
<li>
<p><b>Non-work-preserving RM restart</b></p>
<p>In non-work-preserving RM restart, RM saves the application metadata (i.e. ApplicationSubmissionContext) in a pluggable state-store when a client submits an application, and also saves the final status of the application, such as the completion state (failed, killed, or finished) and diagnostics, when the application completes. In addition, RM saves credentials such as security keys and tokens needed to work in a secure environment. When RM shuts down, as long as the required information (i.e. application metadata and the accompanying credentials if running in a secure environment) is available in the state-store, then on restart RM can pick up the application metadata from the state-store and re-submit the application. RM won&#x2019;t re-submit applications that had already completed (i.e. failed, killed, or finished) before RM went down.</p>
<p>During the RM down-time, NodeManagers and clients will keep polling RM until it comes back up. When RM comes up, it sends a re-sync command to all the NodeManagers and ApplicationMasters it was talking to via heartbeats. The NMs kill all their managed containers and re-register with RM. These re-registered NodeManagers are treated like newly joining NMs. AMs (e.g. the MapReduce AM) are expected to shut down when they receive the re-sync command. After RM restarts, loads all the application metadata and credentials from the state-store, and populates them into memory, it creates a new attempt (i.e. ApplicationMaster) for each application that was not yet completed and re-kicks that application as usual. As described before, the previously running applications&#x2019; work is lost in this manner, since they are essentially killed by RM via the re-sync command on restart.</p>
</li>
<li>
<p><b>Work-preserving RM restart</b></p>
<p>In work-preserving RM restart, RM ensures the persistence of application state and reloads that state on recovery. This restart primarily focuses on re-constructing the entire running state of the YARN cluster, the majority of which is the state of the central scheduler inside RM, which keeps track of all containers&#x2019; life-cycles, applications&#x2019; headroom and resource requests, queues&#x2019; resource usage, and so on. In this way, RM need not kill the AM and re-run the application from scratch as is done in non-work-preserving RM restart. Applications can simply re-sync with RM and resume from where they left off.</p>
<p>RM recovers its running state by taking advantage of the container status sent from all NMs. An NM will not kill its containers when it re-syncs with the restarted RM. It continues managing the containers and sends the container status across to RM when it re-registers. RM reconstructs the container instances and the associated applications&#x2019; scheduling status by absorbing this container information. In the meantime, AMs need to re-send their outstanding resource requests to RM, because RM may lose the unfulfilled requests when it shuts down. Application writers using the AMRMClient library to communicate with RM do not need to worry about re-sending resource requests to RM on re-sync, as that is automatically taken care of by the library itself.</p>
</li>
</ul></section><section>
<h2><a name="Configurations"></a>Configurations</h2>
<p>This section describes the configurations involved in enabling the RM Restart feature.</p><section>
<h3><a name="Enable_RM_Restart"></a>Enable RM Restart</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.recovery.enabled</code> </td>
<td align="left"> <code>true</code> </td></tr>
</tbody>
</table></section><section>
<h3><a name="Configure_the_state-store_for_persisting_the_RM_state"></a>Configure the state-store for persisting the RM state</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.store.class</code> </td>
<td align="left"> The class name of the state-store to be used for saving application/attempt state and the credentials. The available state-store implementations are <code>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</code>, a ZooKeeper based state-store implementation and <code>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</code>, a Hadoop FileSystem based state-store implementation like HDFS and local FS. <code>org.apache.hadoop.yarn.server.resourcemanager.recovery.LeveldbRMStateStore</code>, a LevelDB based state-store implementation. The default value is set to <code>org.apache.hadoop.yarn.server.resourcemanager.recovery.FileSystemRMStateStore</code>. </td></tr>
</tbody>
</table></section><section>
<h3><a name="How_to_choose_the_state-store_implementation"></a>How to choose the state-store implementation</h3>
<ul>
<li>
<p><b>ZooKeeper based state-store</b>: Users are free to choose any storage to set up RM restart, but must use the ZooKeeper based state-store to support RM HA. The reason is that only the ZooKeeper based state-store supports the fencing mechanism needed to avoid a split-brain situation where multiple RMs assume they are active and can edit the state-store at the same time.</p>
</li>
<li>
<p><b>FileSystem based state-store</b>: HDFS and local FS based state-stores are supported. The fencing mechanism is not supported.</p>
</li>
<li>
<p><b>LevelDB based state-store</b>: The LevelDB based state-store is considered more lightweight than the HDFS and ZooKeeper based state-stores. LevelDB supports better atomic operations, fewer I/O ops per state update, and far fewer total files on the filesystem. The fencing mechanism is not supported.</p>
</li>
</ul></section><section>
<h3><a name="Configurations_for_Hadoop_FileSystem_based_state-store_implementation"></a>Configurations for Hadoop FileSystem based state-store implementation</h3>
<p>Both HDFS and local FS based state-store implementations are supported. The type of file system to be used is determined by the scheme of the URI, e.g. <code>hdfs://localhost:9000/rmstore</code> uses HDFS as the storage and <code>file:///tmp/yarn/rmstore</code> uses the local FS as the storage. If no scheme (<code>hdfs://</code> or <code>file://</code>) is specified in the URI, the type of storage to be used is determined by <code>fs.defaultFS</code> defined in <code>core-site.xml</code>.</p>
<ul>
<li>Configure the URI where the RM state will be saved in the Hadoop FileSystem state-store.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.fs.state-store.uri</code> </td>
<td align="left"> URI pointing to the location of the FileSystem path where RM state will be stored (e.g. <a class="externalLink" href="hdfs://localhost:9000/rmstore">hdfs://localhost:9000/rmstore</a>). Default value is <code>${hadoop.tmp.dir}/yarn/system/rmstore</code>. If FileSystem name is not provided, <code>fs.default.name</code> specified in *<i>conf/core-site.xml</i> will be used. </td></tr>
</tbody>
</table>
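<p>As an illustration only, the property above could be set in <i>yarn-site.xml</i> as follows; the HDFS address and path are placeholders reused from the examples earlier in this section, not recommendations:</p>
<div class="source">
<div class="source">
<pre> &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.fs.state-store.uri&lt;/name&gt;
   &lt;value&gt;hdfs://localhost:9000/rmstore&lt;/value&gt;
   &lt;!-- Use a file:// URI instead, e.g. file:///tmp/yarn/rmstore, to store the RM state on the local FS. --&gt;
 &lt;/property&gt;
</pre></div></div>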
<ul>
<li>Configure the retry policy state-store client uses to connect with the Hadoop FileSystem.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.fs.state-store.retry-policy-spec</code> </td>
<td align="left"> Hadoop FileSystem client retry policy specification. Hadoop FileSystem client retry is always enabled. Specified in pairs of sleep-time and number-of-retries i.e. (t0, n0), (t1, n1), &#x2026;, the first n0 retries sleep t0 milliseconds on average, the following n1 retries sleep t1 milliseconds on average, and so on. Default value is (2000, 500) </td></tr>
</tbody>
</table></section><section>
<h3><a name="Configurations_for_ZooKeeper_based_state-store_implementation"></a>Configurations for ZooKeeper based state-store implementation</h3>
<ul>
<li>Configure the ZooKeeper server address and the root path where the RM state is stored.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>hadoop.zk.address</code> </td>
<td align="left"> Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server (e.g. &#x201c;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&#x201d;) to be used by the RM for storing RM state. </td></tr>
<tr class="a">
<td align="left"> <code>yarn.resourcemanager.zk-state-store.parent-path</code> </td>
<td align="left"> The full path of the root znode where RM state will be stored. Default value is /rmstore. </td></tr>
</tbody>
</table>
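<p>As a sketch, the two properties above could be set together in <i>yarn-site.xml</i>; the quorum addresses below are the placeholder values from the table and the znode path is the documented default:</p>
<div class="source">
<div class="source">
<pre> &lt;property&gt;
   &lt;name&gt;hadoop.zk.address&lt;/name&gt;
   &lt;value&gt;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;name&gt;yarn.resourcemanager.zk-state-store.parent-path&lt;/name&gt;
   &lt;value&gt;/rmstore&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>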
<ul>
<li>Configure the retry policy state-store client uses to connect with the ZooKeeper server.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>hadoop.zk.num-retries</code> </td>
<td align="left"> Number of times RM tries to connect to ZooKeeper server if the connection is lost. Default value is 500. </td></tr>
<tr class="a">
<td align="left"> <code>hadoop.zk.retry-interval-ms</code> </td>
<td align="left"> The interval in milliseconds between retries when connecting to a ZooKeeper server. Default value is 2 seconds. </td></tr>
<tr class="b">
<td align="left"> <code>hadoop.zk.timeout-ms</code> </td>
<td align="left"> ZooKeeper session timeout in milliseconds. This configuration is used by the ZooKeeper server to determine when the session expires. Session expiration happens when the server does not hear from the client (i.e. no heartbeat) within the session timeout period specified by this configuration. Default value is 10 seconds </td></tr>
</tbody>
</table>
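<p>These retry and session-timeout settings can be tuned in the same way; the sketch below simply restates the documented default values (500 retries, 2-second retry interval, 10-second session timeout):</p>
<div class="source">
<div class="source">
<pre> &lt;property&gt;
   &lt;name&gt;hadoop.zk.num-retries&lt;/name&gt;
   &lt;value&gt;500&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;name&gt;hadoop.zk.retry-interval-ms&lt;/name&gt;
   &lt;value&gt;2000&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;name&gt;hadoop.zk.timeout-ms&lt;/name&gt;
   &lt;value&gt;10000&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div>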
<ul>
<li>Configure the ACLs to be used for setting permissions on ZooKeeper znodes.</li>
</ul>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>hadoop.zk.acl</code> </td>
<td align="left"> ACLs to be used for setting permissions on ZooKeeper znodes. Default value is <code>world:anyone:rwcda</code> </td></tr>
</tbody>
</table></section><section>
<h3><a name="Configurations_for_LevelDB_based_state-store_implementation"></a>Configurations for LevelDB based state-store implementation</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.leveldb-state-store.path</code> </td>
<td align="left"> Local path where the RM state will be stored. Default value is <code>${hadoop.tmp.dir}/yarn/system/rmstore</code> </td></tr>
</tbody>
</table></section><section>
<h3><a name="Configurations_for_work-preserving_RM_recovery"></a>Configurations for work-preserving RM recovery</h3>
<table border="0" class="bodyTable">
<thead>
<tr class="a">
<th align="left"> Property </th>
<th align="left"> Description </th></tr>
</thead><tbody>
<tr class="b">
<td align="left"> <code>yarn.resourcemanager.work-preserving-recovery.scheduling-wait-ms</code> </td>
<td align="left"> Set the amount of time RM waits before allocating new containers on RM work-preserving recovery. Such wait period gives RM a chance to settle down resyncing with NMs in the cluster on recovery, before assigning new containers to applications.</td></tr>
</tbody>
</table></section></section><section>
<h2><a name="Notes"></a>Notes</h2>
<p>The ContainerId string format changes if RM restarts with work-preserving recovery enabled. It used to be in the format <code>Container_{clusterTimestamp}_{appId}_{attemptId}_{containerId}</code>, e.g. <code>Container_1410901177871_0001_01_000005</code>.</p>
<p>It is now changed to: <code>Container_</code><b>e{epoch}</b><code>_{clusterTimestamp}_{appId}_{attemptId}_{containerId}</code>, e.g. <code>Container_</code><b>e17</b><code>_1410901177871_0001_01_000005</code>.</p>
<p>Here, the additional epoch number is a monotonically increasing integer which starts from 0 and is increased by 1 each time RM restarts. If the epoch number is 0, it is omitted and the containerId string format stays the same as before.</p></section><section>
<h2><a name="Sample_Configurations"></a>Sample Configurations</h2>
<p>Below is a minimum set of configurations for enabling RM work-preserving restart using the ZooKeeper based state-store.</p>
<div class="source">
<div class="source">
<pre> &lt;property&gt;
   &lt;description&gt;Enable RM to recover state after starting. If true, then
   yarn.resourcemanager.store.class must be specified&lt;/description&gt;
   &lt;name&gt;yarn.resourcemanager.recovery.enabled&lt;/name&gt;
   &lt;value&gt;true&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;description&gt;The class to use as the persistent store.&lt;/description&gt;
   &lt;name&gt;yarn.resourcemanager.store.class&lt;/name&gt;
   &lt;value&gt;org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore&lt;/value&gt;
 &lt;/property&gt;
 &lt;property&gt;
   &lt;description&gt;Comma separated list of Host:Port pairs. Each corresponds to a ZooKeeper server
   (e.g. &quot;127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002&quot;) to be used by the RM for storing RM state.
   This must be supplied when using org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore
   as the value for yarn.resourcemanager.store.class&lt;/description&gt;
   &lt;name&gt;hadoop.zk.address&lt;/name&gt;
   &lt;value&gt;127.0.0.1:2181&lt;/value&gt;
 &lt;/property&gt;
</pre></div></div></section>
</div>
</div>
<div class="clear">
<hr/>
</div>
<div id="footer">
<div class="xright">
&#169; 2008-2023
Apache Software Foundation
- <a href="http://maven.apache.org/privacy-policy.html">Privacy Policy</a>.
Apache Maven, Maven, Apache, the Apache feather logo, and the Apache Maven project logos are trademarks of The Apache Software Foundation.
</div>
<div class="clear">
<hr/>
</div>
</div>
</body>
</html>