[DOCS] Rewrote the memory settings section on the configuration page

Clinton Gormley 2014-05-14 16:01:25 +02:00
parent fc2ab0909e
commit 8f0991c14f
1 changed file with 63 additions and 19 deletions


@@ -53,35 +53,79 @@ curl localhost:9200/_nodes/process?pretty
==== Memory Settings

The Linux kernel tries to use as much memory as possible for file system
caches and eagerly swaps out unused application memory, possibly resulting
in the elasticsearch process being swapped. Swapping is very bad for
performance and for node stability, so it should be avoided at all costs.

There are three options:
* **Disable swap**
+
--
The simplest option is to completely disable swap. Usually Elasticsearch
is the only service running on a box, and its memory usage is controlled
by the `ES_HEAP_SIZE` environment variable. There should be no need
to have swap enabled. On Linux systems, you can disable swap temporarily
by running: `sudo swapoff -a`. To disable it permanently, you will need
to edit the `/etc/fstab` file and comment out any lines that contain the
word `swap`.
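For example, a swap entry in `/etc/fstab` might look like this once
commented out (the device name is illustrative and will vary from system
to system):

[source,sh]
--------------
# /dev/sda2   none   swap   sw   0   0
--------------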
--
* **Configure `swappiness`**
+
--
The second option is to ensure that the sysctl value `vm.swappiness` is set
to `0`. This reduces the kernel's tendency to swap and should not lead to
swapping under normal circumstances, while still allowing the whole system
to swap in emergency conditions.
NOTE: From kernel version 3.5-rc1 and above, a `swappiness` of `0` will
cause the OOM killer to kill the process instead of allowing swapping.
You will need to set `swappiness` to `1` to still allow swapping in
emergencies.
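The value can be changed for the running system with
`sudo sysctl -w vm.swappiness=1`. To make it survive a reboot, most Linux
distributions let you add a line like the following to `/etc/sysctl.conf`
(`1` rather than `0` is used here, per the note above about recent kernels):

[source,sh]
--------------
vm.swappiness = 1
--------------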
--
* **`mlockall`**
+
--
The third option, available on Linux/Unix systems only, is to use
http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html[mlockall] to
try to lock the process address space into RAM, preventing any Elasticsearch
memory from being swapped out. This can be done by adding this line
to the `config/elasticsearch.yml` file:

[source,yaml]
--------------
bootstrap.mlockall: true
--------------
After starting Elasticsearch, you can see whether this setting was applied
successfully by checking the value of `mlockall` in the output from this
request:

[source,sh]
--------------
curl http://localhost:9200/_nodes/process?pretty
--------------
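The relevant part of the response looks something like this (node name and
ID are illustrative, and other fields are trimmed for brevity):

[source,js]
--------------
{
  "nodes": {
    "<node_id>": {
      "process": {
        "mlockall": true
      }
    }
  }
}
--------------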
If you see that `mlockall` is `false`, then it means that the `mlockall`
request has failed. The most probable reason is that the user running
Elasticsearch doesn't have permission to lock memory. This can be granted
by running `ulimit -l unlimited` as `root` before starting Elasticsearch.

Another possible reason why `mlockall` can fail is that the temporary directory
(usually `/tmp`) is mounted with the `noexec` option. This can be solved by
specifying a new temp directory when starting Elasticsearch:
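Alternatively, on Linux systems that use `/etc/security/limits.conf`, the
limit can be raised permanently for the user that runs Elasticsearch
(assumed here, for illustration, to be called `elasticsearch`):

[source,sh]
--------------
# allow the elasticsearch user to lock unlimited memory
elasticsearch  -  memlock  unlimited
--------------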
[source,sh]
--------------
./bin/elasticsearch -Djna.tmpdir=/path/to/new/dir
--------------
WARNING: `mlockall` might cause the JVM or shell session to exit if it tries
to allocate more memory than is available!
--
[float] [float]
[[settings]] [[settings]]