<ahref="http://maven.apache.org/"title="Built by Maven"class="poweredBy">
<imgalt="Built by Maven"src="./images/logos/maven-feather.png"/>
</a>
</div>
</div>
<divid="bodyColumn">
<divid="contentBox">
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->
<h1>Using Memory Control in YARN</h1>
<p>YARN has multiple features to enforce container memory limits. There are three types of controls in YARN that can be used.</p>
<ol>
<li>The polling feature periodically measures container memory usage and kills containers that exceed their limits. This is a legacy feature with some issues, notably a delay that may lead to node shutdown.</li>
<li>Strict memory control kills each container that has exceeded its limits. It uses the OOM killer capability of the cgroups Linux kernel feature.</li>
<li>Elastic memory control is also based on cgroups. It allows bursting and starts killing containers only if the overall system memory usage reaches a limit.</li>
</ol><section>
<p>cgroups can be used to preempt containers in case of out-of-memory events. This feature leverages cgroups to have the kernel clean up containers when this happens. If your container exited with exit code <code>137</code>, you can verify the cause in <code>/var/log/messages</code>.</p></section><section>
<p>The cgroups kernel feature has the ability to notify the node manager if the parent cgroup of all containers, specified by <code>yarn.nodemanager.linux-container-executor.cgroups.hierarchy</code>, goes over a memory limit. The YARN feature that uses this ability is called elastic memory control. The benefit is that containers can burst, using more memory than they have reserved, as long as the overall memory limit is not exceeded. When the limit is reached, the kernel freezes all the containers and notifies the node manager. The node manager chooses a container and preempts it, and repeats this step until the node recovers from the OOM condition.</p></section><section>
<h2><aname="The_Limit_for_Elastic_Memory_Control"></a>The Limit for Elastic Memory Control</h2>
<p>The limit is the amount of memory allocated to all the containers on the node. The limit is specified by <code>yarn.nodemanager.resource.memory-mb</code> and <code>yarn.nodemanager.vmem-pmem-ratio</code>. If these are not set, the limit is set based on the available resources. See <code>yarn.nodemanager.resource.detect-hardware-capabilities</code> for details.</p></section><section>
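<p>As an illustration, a node that dedicates 8 GB of physical memory to containers could set the limit explicitly in <code>yarn-site.xml</code> as sketched below. The memory value is an example only and should be sized for your hardware; 2.1 is the usual default for <code>yarn.nodemanager.vmem-pmem-ratio</code>.</p>
<pre>
<property>
  <!-- Example only: physical memory (MB) available to containers on this node -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>
<property>
  <!-- Virtual memory allowed per unit of physical memory; 2.1 is the usual default -->
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>2.1</value>
</property>
</pre>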
<p>The preemption logic specifies which container to preempt in a node-wide out-of-memory situation. The default logic is the <code>DefaultOOMHandler</code>. It picks the latest container that exceeded its memory limit. In the unlikely case that no such container is found, it preempts the container that was launched most recently. This continues until the OOM condition is resolved. This logic supports bursting, where containers use more memory than they reserved as long as memory is available, which helps to improve overall cluster utilization. The logic ensures that as long as a container is within its limit, it will not get preempted; if the container bursts, it can be preempted. It can happen that all containers are within their limits but the node is still out of memory; this can also occur in case of oversubscription. We prefer preempting the most recently launched containers to minimize the cost and value lost. Once preempted, the data in the container is lost.</p>
<p>The default out-of-memory handler can be overridden using <code>yarn.nodemanager.elastic-memory-control.oom-handler</code>. The class named in this configuration entry has to implement <code>java.lang.Runnable</code>. Its <code>run()</code> function will be called in a node-level out-of-memory situation, and its constructor should accept an <code>NmContext</code> object.</p></section><section>
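<p>As a sketch, a custom handler could be wired in as shown below. <code>com.example.MyOOMHandler</code> is a hypothetical class name used purely for illustration; replace it with your own implementation available on the node manager classpath.</p>
<pre>
<property>
  <!-- Hypothetical custom handler: must implement java.lang.Runnable and
       provide a constructor that accepts an NmContext object -->
  <name>yarn.nodemanager.elastic-memory-control.oom-handler</name>
  <value>com.example.MyOOMHandler</value>
</property>
</pre>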
<h2><aname="Physical_and_virtual_memory_control"></a>Physical and virtual memory control</h2>
<p>In case of Elastic Memory Control, the limit applies to the physical or virtual (rss+swap in cgroups) memory depending on whether <code>yarn.nodemanager.pmem-check-enabled</code> or <code>yarn.nodemanager.vmem-check-enabled</code> is set.</p>
<p>There is no reason to set both. If the system runs with swap disabled, both will show the same number. If swap is enabled, the virtual memory counter will account for pages in physical memory and on disk. This is what the application allocated and what it has control over, so the limit should be applied to the virtual memory in this case. When swapping is enabled, the physical memory usage is no more than the virtual memory usage, and it is adjusted by the kernel, not just by the container. There is no point in preempting a container when it exceeds a physical memory limit while swapping is available; the system will just swap out some memory when needed.</p></section><section>
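<p>For instance, on a node where swap is enabled, the guidance above suggests relying on the virtual memory check only. This is an illustrative <code>yarn-site.xml</code> fragment, not a mandatory setting.</p>
<pre>
<property>
  <!-- Swap is enabled on this node, so enforce the virtual (rss+swap) limit -->
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <!-- The physical memory check adds little value when swapping is possible -->
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
</pre>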
<h2><aname="Virtual_memory_measurement_and_swapping"></a>Virtual memory measurement and swapping</h2>
<p>There is a difference between the virtual memory reported by the container monitor and the virtual memory limit specified in the elastic memory control feature. The container monitor uses <code>ProcfsBasedProcessTree</code> by default for measurements, which returns values from the <code>proc</code> file system. The virtual memory returned is the size of the address space of all the processes in each container. This includes anonymous pages, pages swapped out to disk, mapped files and reserved pages, among others. Reserved pages are not backed by either physical or swapped memory, and they can be a large part of the virtual memory usage. The reservable address space was limited on 32-bit processors but it is very large on 64-bit ones, making this metric less useful. Some Java Virtual Machines reserve large numbers of pages but do not actually use them. This results in gigabytes of virtual memory usage being shown, even though nothing is wrong with the container.</p>
<p>Because of this, you can now use <code>CGroupsResourceCalculator</code>. It reports only the sum of the physical memory usage and swapped pages as virtual memory usage, excluding the reserved address space. This reflects much better what the application and the container actually allocated.</p>
<p>In order to enable cgroups based resource calculation, set <code>yarn.nodemanager.resource-calculator.class</code> to <code>org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator</code>.</p></section><section>
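<p>A minimal <code>yarn-site.xml</code> fragment for this setting could look like the following.</p>
<pre>
<property>
  <!-- Report rss+swap from cgroups instead of the procfs address space size -->
  <name>yarn.nodemanager.resource-calculator.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.containermanager.linux.resources.CGroupsResourceCalculator</value>
</property>
</pre>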
<p>The cgroups based features, strict memory control and elastic memory control, require the following prerequisites:</p>
<ol>
<li><code>yarn.nodemanager.container-executor.class</code> should be <code>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</code>.</li>
<li><code>yarn.nodemanager.runtime.linux.allowed-runtimes</code> should at least include <code>default</code>.</li>
<li><code>yarn.nodemanager.resource.memory.enabled</code> should be <code>true</code>.</li>
</ol></section><section>
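<p>Put together, the prerequisites above can be expressed in <code>yarn-site.xml</code> roughly as follows.</p>
<pre>
<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <!-- Must contain at least the default runtime -->
  <name>yarn.nodemanager.runtime.linux.allowed-runtimes</name>
  <value>default</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enabled</name>
  <value>true</value>
</property>
</pre>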
<h2><aname="Configuring_no_memory_control"></a>Configuring no memory control</h2>
<p><code>yarn.nodemanager.pmem-check-enabled</code> and <code>yarn.nodemanager.vmem-check-enabled</code> should be <code>false</code>.</p>
<p><code>yarn.nodemanager.resource.memory.enforced</code> should be <code>false</code>.</p>
<p><code>yarn.nodemanager.elastic-memory-control.enabled</code> should be <code>false</code>.</p></section><section>
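<p>A corresponding <code>yarn-site.xml</code> sketch:</p>
<pre>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enforced</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.elastic-memory-control.enabled</name>
  <value>false</value>
</property>
</pre>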
<h2><aname="Configuring_strict_container_memory_enforcement_with_polling_without_cgroups"></a>Configuring strict container memory enforcement with polling without cgroups</h2>
<p><code>yarn.nodemanager.pmem-check-enabled</code> or <code>yarn.nodemanager.vmem-check-enabled</code> should be <code>true</code>.</p>
<p><code>yarn.nodemanager.resource.memory.enforced</code> should be <code>false</code>.</p>
<p><code>yarn.nodemanager.elastic-memory-control.enabled</code> should be <code>false</code>.</p></section><section>
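<p>For example, polling-based enforcement of the physical memory limit could be configured as below. Choosing the physical check here is illustrative; the virtual check can be enabled instead.</p>
<pre>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enforced</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.elastic-memory-control.enabled</name>
  <value>false</value>
</property>
</pre>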
<h2><aname="Configuring_strict_container_memory_enforcement_with_cgroups"></a>Configuring strict container memory enforcement with cgroups</h2>
<p>Strict memory control preempts containers right away, using the OOM killer feature of the kernel, when they reach their physical or virtual memory limits. You need to set the following options on top of the prerequisites above to use strict memory control.</p>
<p>Configure the cgroups prerequisites mentioned above.</p>
<p><code>yarn.nodemanager.pmem-check-enabled</code> or <code>yarn.nodemanager.vmem-check-enabled</code> should be <code>true</code>. You can set them both. <b>Currently this is ignored by the code, only physical limits can be selected.</b></p>
<p><code>yarn.nodemanager.resource.memory.enforced</code> should be <code>true</code>.</p></section><section>
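<p>On top of the cgroups prerequisites, a sketch of the strict enforcement settings could look like this. Only the physical limit is currently honored, as noted above.</p>
<pre>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enforced</name>
  <value>true</value>
</property>
</pre>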
<p>The cgroups based elastic memory control preempts containers only if the overall system memory usage reaches its limit, allowing bursting. This feature requires setting the following options on top of the prerequisites.</p>
<p>Configure the cgroups prerequisites mentioned above.</p>
<p><code>yarn.nodemanager.elastic-memory-control.enabled</code> should be <code>true</code>.</p>
<p><code>yarn.nodemanager.resource.memory.enforced</code> should be <code>false</code>.</p>
<p><code>yarn.nodemanager.pmem-check-enabled</code> or <code>yarn.nodemanager.vmem-check-enabled</code> should be <code>true</code>. If swapping is turned off, set the former; otherwise, set the latter.</p></section><section>
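<p>On top of the cgroups prerequisites, an illustrative configuration for a node with swap disabled could be:</p>
<pre>
<property>
  <name>yarn.nodemanager.elastic-memory-control.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enforced</name>
  <value>false</value>
</property>
<property>
  <!-- Swap is disabled on this node, so the physical memory check is used -->
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
</pre>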
<h2><aname="Configuring_elastic_memory_control_and_strict_container_memory_enforcement_through_cgroups"></a>Configuring elastic memory control and strict container memory enforcement through cgroups</h2>
<p><b>ADVANCED ONLY:</b> Elastic memory control and strict container memory enforcement can be enabled at the same time to allow the node manager to over-allocate itself. However, elastic memory control changes how strict container memory enforcement through cgroups is performed. Elastic memory control disables the OOM killer on the root yarn container cgroup. This setting overrides that of the individual container cgroups, so individual containers will not be killed by the OOM killer when they go over their memory limit. Strict container memory enforcement in this case falls back to the polling-based mechanism.</p></section>
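<p>As an advanced and purely illustrative sketch, both features could be enabled together as below; remember that per-container enforcement then falls back to the polling-based mechanism described above.</p>
<pre>
<property>
  <name>yarn.nodemanager.elastic-memory-control.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.resource.memory.enforced</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.pmem-check-enabled</name>
  <value>true</value>
</property>
</pre>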