[DOCS] Rewrote the memory settings section on the configuration page
This commit is contained in:
parent fc2ab0909e
commit 8f0991c14f

@@ -53,35 +53,79 @@ curl localhost:9200/_nodes/process?pretty

==== Memory Settings

The Linux kernel tries to use as much memory as possible for file system
caches and eagerly swaps out unused application memory, possibly resulting
in the elasticsearch process being swapped. Swapping is very bad for
performance and for node stability, so it should be avoided at all costs.

There are three options:

* **Disable swap**
+
--
The simplest option is to completely disable swap. Usually Elasticsearch
is the only service running on a box, and its memory usage is controlled
by the `ES_HEAP_SIZE` environment variable. There should be no need
to have swap enabled. On Linux systems, you can disable swap temporarily
by running `sudo swapoff -a`. To disable it permanently, you will need
to edit the `/etc/fstab` file and comment out any lines that contain the
word `swap`.
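
For illustration, commenting out the swap entries can be scripted with `sed`. This sketch operates on a sample file with made-up device UUIDs, not the real `/etc/fstab`; on a real system you would edit `/etc/fstab` itself as `root`:

[source,sh]
--------------
# Create a sample fstab (the UUIDs here are hypothetical):
cat > /tmp/fstab.sample <<'EOF'
UUID=abcd-1234 /    ext4 defaults 0 1
UUID=ef56-7890 none swap sw       0 0
EOF

# Prefix any uncommented line containing the word "swap" with "# ":
sed -i 's/^[^#].*swap.*/# &/' /tmp/fstab.sample
--------------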
--

* **Configure `swappiness`**
+
--
The second option is to ensure that the sysctl value `vm.swappiness` is set
to `0`. This reduces the kernel's tendency to swap and should not lead to
swapping under normal circumstances, while still allowing the whole system
to swap in emergency conditions.
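
On most Linux distributions the value can be inspected without root via `/proc`, changed at runtime with `sysctl -w`, and persisted in `/etc/sysctl.conf`. A sketch:

[source,sh]
--------------
# Read the current value (no root needed):
cat /proc/sys/vm/swappiness

# Set it at runtime (as root; takes effect immediately, lost on reboot):
#   sysctl -w vm.swappiness=0
# Persist it across reboots by adding this line to /etc/sysctl.conf:
#   vm.swappiness = 0
--------------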

NOTE: From kernel version 3.5-rc1 and above, a `swappiness` of `0` will
cause the OOM killer to kill the process instead of allowing swapping.
You will need to set `swappiness` to `1` to still allow swapping in
emergencies.
--

* **`mlockall`**
+
--
The third option, available on Linux/Unix systems only, is to use
http://opengroup.org/onlinepubs/007908799/xsh/mlockall.html[mlockall] to
try to lock the process address space into RAM, preventing any Elasticsearch
memory from being swapped out. This can be done by adding this line
to the `config/elasticsearch.yml` file:

[source,yaml]
--------------
bootstrap.mlockall: true
--------------

After starting Elasticsearch, you can see whether this setting was applied
successfully by checking the value of `mlockall` in the output from this
request:

[source,sh]
--------------
curl http://localhost:9200/_nodes/process?pretty
--------------

If you see that `mlockall` is `false`, then it means that the `mlockall`
request has failed. The most probable reason is that the user running
Elasticsearch doesn't have permission to lock memory. This can be granted
by running `ulimit -l unlimited` as `root` before starting Elasticsearch.
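
A quick way to check the limit from the shell that will launch Elasticsearch (the exact output depends on the system):

[source,sh]
--------------
# Show the max locked-memory limit for the current shell; it must be
# "unlimited" (or at least as large as the heap) for mlockall to succeed.
ulimit -l

# As root, raise it for this shell before starting Elasticsearch:
#   ulimit -l unlimited
--------------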

Another possible reason why `mlockall` can fail is that the temporary directory
(usually `/tmp`) is mounted with the `noexec` option. This can be solved by
specifying a new temp directory, by starting Elasticsearch with:

[source,sh]
--------------
./bin/elasticsearch -Djna.tmpdir=/path/to/new/dir
--------------

WARNING: `mlockall` might cause the JVM or shell session to exit if it tries
to allocate more memory than is available!
--

[float]
[[settings]]