Fix ordering of bootstrap checks in docs (#32417)

In the section of the bootstrap checks docs for the maximum map count
check, we refer to the maximum size virtual memory check and explicitly
call it out as being the previous point. However, this is not correct,
as the previous point is currently the max file size check. It does make
sense for these two checks to be adjacent in the docs, so this commit
reorders the checks so that the maximum size virtual memory check indeed
comes before the maximum map count check. This makes the reference in
the maximum map count check correct.
Jason Tedor 2018-07-27 10:40:16 -04:00 committed by GitHub
parent 7aa5365497
commit 3ac57f0ba3
1 changed file with 13 additions and 13 deletions


@@ -118,6 +118,19 @@ least 4096 threads. This can be done via `/etc/security/limits.conf`
 using the `nproc` setting (note that you might have to increase the
 limits for the `root` user too).
 
+=== Max file size check
+
+The segment files that are the components of individual shards and the translog
+generations that are components of the translog can get large (exceeding
+multiple gigabytes). On systems where the max size of files that can be created
+by the Elasticsearch process is limited, this can lead to failed
+writes. Therefore, the safest option here is that the max file size is unlimited
+and that is what the max file size bootstrap check enforces. To pass the max
+file check, you must configure your system to allow the Elasticsearch process
+the ability to write files of unlimited size. This can be done via
+`/etc/security/limits.conf` using the `fsize` setting to `unlimited` (note that
+you might have to increase the limits for the `root` user too).
+
 [[max-size-virtual-memory-check]]
 === Maximum size virtual memory check
 
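For reference, the `fsize` limit described in the relocated max file size section is typically raised with entries along these lines in `/etc/security/limits.conf`. This is only a minimal sketch; the `elasticsearch` user name is an assumption and should match whatever account runs the Elasticsearch process:

    # hypothetical limits.conf entries; adjust the user name to your setup
    elasticsearch  soft  fsize  unlimited
    elasticsearch  hard  fsize  unlimited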
@@ -133,19 +146,6 @@ address space. This can be done via `/etc/security/limits.conf` using
 the `as` setting to `unlimited` (note that you might have to increase
 the limits for the `root` user too).
 
-=== Max file size check
-
-The segment files that are the components of individual shards and the translog
-generations that are components of the translog can get large (exceeding
-multiple gigabytes). On systems where the max size of files that can be created
-by the Elasticsearch process is limited, this can lead to failed
-writes. Therefore, the safest option here is that the max file size is unlimited
-and that is what the max file size bootstrap check enforces. To pass the max
-file check, you must configure your system to allow the Elasticsearch process
-the ability to write files of unlimited size. This can be done via
-`/etc/security/limits.conf` using the `fsize` setting to `unlimited` (note that
-you might have to increase the limits for the `root` user too).
-
 === Maximum map count check
 
 Continuing from the previous <<max-size-virtual-memory-check,point>>, to
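Similarly, the unlimited address space required by the maximum size virtual memory check corresponds to the `as` item in `/etc/security/limits.conf`, and the maximum map count check is commonly satisfied via `sysctl`. A minimal sketch, again assuming an `elasticsearch` user and the map count value the Elasticsearch docs commonly cite:

    # hypothetical limits.conf entries for the address space limit
    elasticsearch  soft  as  unlimited
    elasticsearch  hard  as  unlimited

    # raise the kernel memory-map limit (run as root; add to /etc/sysctl.conf to persist)
    sysctl -w vm.max_map_count=262144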