[DOCS] Combining important config settings into a single page (#63849)

* Combining important config settings into a single page.
* Updating ids for two pages causing link errors and implementing redirects.
* Updating links to use IDs instead of xrefs.
This commit is contained in:
parent af9e96d681
commit c28c3422bb
@@ -163,7 +163,7 @@ is not `0`, a reason for the rejection or failure is included in the response.
 `cluster_name`::
 (string)
-Name of the cluster. Based on the <<cluster.name>> setting.
+Name of the cluster. Based on the <<cluster-name>> setting.
 
 `nodes`::
 (object)
@@ -186,7 +186,7 @@ since the {wikipedia}/Unix_time[Unix Epoch].
 `name`::
 (string)
-Human-readable identifier for the node. Based on the <<node.name>> setting.
+Human-readable identifier for the node. Based on the <<node-name>> setting.
 
 `transport_address`::
 (string)
@@ -66,7 +66,7 @@ is not `0`, a reason for the rejection or failure is included in the response.
 `cluster_name`::
 (string)
-Name of the cluster, based on the <<cluster.name>> setting.
+Name of the cluster, based on the <<cluster-name>> setting.
 
 `cluster_uuid`::
 (string)
@@ -15,7 +15,7 @@ The initial set of master-eligible nodes is defined in the
 set to a list containing one of the following items for each master-eligible
 node:
 
-- The <<node.name,node name>> of the node.
+- The <<node-name,node name>> of the node.
 - The node's hostname if `node.name` is not set, because `node.name` defaults
 to the node's hostname. You must use either the fully-qualified hostname or
 the bare hostname <<modules-discovery-bootstrap-cluster-fqdns,depending on
@@ -44,7 +44,7 @@ WARNING: You must set `cluster.initial_master_nodes` to the same list of nodes
 on each node on which it is set in order to be sure that only a single cluster
 forms during bootstrapping and therefore to avoid the risk of data loss.
 
-For a cluster with 3 master-eligible nodes (with <<node.name,node names>>
+For a cluster with 3 master-eligible nodes (with <<node-name,node names>>
 `master-a`, `master-b` and `master-c`) the configuration will look as follows:
 
 [source,yaml]
@@ -97,7 +97,7 @@ match exactly.
 [discrete]
 ==== Choosing a cluster name
 
-The <<cluster.name,`cluster.name`>> setting enables you to create multiple
+The <<cluster-name,`cluster.name`>> setting enables you to create multiple
 clusters which are separated from each other. Nodes verify that they agree on
 their cluster name when they first connect to each other, and Elasticsearch
 will only form a cluster from nodes that all have the same cluster name. The
@@ -48,7 +48,7 @@ A node that has the `master` role (default), which makes it eligible to be
 <<data-node,Data node>>::
 
 A node that has the `data` role (default). Data nodes hold data and perform data
-related operations such as CRUD, search, and aggregations. A node with the `data` role can fill any of the specialised data node roles.
+related operations such as CRUD, search, and aggregations. A node with the `data` role can fill any of the specialised data node roles.
 
 <<node-ingest-node,Ingest node>>::
@@ -456,6 +456,6 @@ directory. This can lead to unexpected data loss.
 More node settings can be found in <<settings>> and <<important-settings>>,
 including:
 
-* <<cluster.name,`cluster.name`>>
-* <<node.name,`node.name`>>
+* <<cluster-name,`cluster.name`>>
+* <<node-name,`node.name`>>
 * <<modules-network,network settings>>
@@ -3,6 +3,16 @@
 
 The following pages have moved or been deleted.
 
+[role="exclude",id="node.name"]
+=== Node name setting
+
+See <<node-name,Node name setting>>.
+
+[role="exclude",id="cluster.name"]
+=== Cluster name setting
+
+See <<cluster-name,Cluster name setting>>.
+
 [role="exclude",id="ccr-remedy-follower-index"]
 === Leader index retaining operations for replication
@@ -22,7 +22,7 @@ is not `0`, a reason for the rejection or failure is included in the response.
 `cluster_name`::
 (string)
-Name of the cluster. Based on the <<cluster.name>> setting.
+Name of the cluster. Based on the <<cluster-name>> setting.
 
 `nodes`::
 (object)
@@ -74,7 +74,7 @@ audited in plain text when including the request body in audit events.
 // tag::xpack-sa-lf-emit-node-name-tag[]
 `xpack.security.audit.logfile.emit_node_name`::
 (<<dynamic-cluster-setting,Dynamic>>)
-Specifies whether to include the <<node.name,node name>> as a field in
+Specifies whether to include the <<node-name,node name>> as a field in
 each audit event. The default value is `false`.
 // end::xpack-sa-lf-emit-node-name-tag[]
@@ -101,7 +101,7 @@ The default value is `false`.
 Specifies whether to include the node id as a field in each audit event.
 This is available for the new format only. That is to say, this information
 does not exist in the `<clustername>_access.log` file.
-Unlike <<node.name,node name>>, whose value might change if the administrator
+Unlike <<node-name,node name>>, whose value might change if the administrator
 changes the setting in the config file, the node id will persist across cluster
 restarts and the administrator cannot change it.
 The default value is `true`.
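For illustration, a minimal `elasticsearch.yml` sketch of the audit fields discussed in the two hunks above; the `emit_node_id` key is assumed from the related audit log settings and is not shown in this diff:

[source,yaml]
----
# Illustrative values only
xpack.security.audit.enabled: true
# Adds the node name to each audit event (defaults to false, per the text above)
xpack.security.audit.logfile.emit_node_name: true
# Adds the persistent node id to each audit event (defaults to true)
xpack.security.audit.logfile.emit_node_id: true
----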
@@ -7,14 +7,15 @@ settings which need to be considered before going into production.
 The following settings *must* be considered before going to production:
 
 * <<path-settings,Path settings>>
-* <<cluster.name,Cluster name>>
-* <<node.name,Node name>>
-* <<network.host,Network host>>
+* <<cluster-name,Cluster name setting>>
+* <<node-name,Node name setting>>
+* <<network.host,Network host settings>>
 * <<discovery-settings,Discovery settings>>
-* <<heap-size,Heap size>>
-* <<heap-dump-path,Heap dump path>>
-* <<gc-logging,GC logging>>
-* <<es-tmpdir,Temp directory>>
+* <<heap-size,Heap size settings>>
+* <<heap-dump-path,JVM heap dump path setting>>
+* <<gc-logging,GC logging settings>>
+* <<es-tmpdir,Temporary directory settings>>
+* <<error-file-path,JVM fatal error log setting>>
 
 include::important-settings/path-settings.asciidoc[]
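For illustration, a minimal `elasticsearch.yml` sketch (hypothetical values) covering the yml-based settings in the list above; heap size, GC logging, and the temporary directory are configured through `jvm.options` and environment variables instead:

[source,yaml]
----
# Hypothetical production values for the settings listed above
path.data: /var/data/elasticsearch
path.logs: /var/log/elasticsearch
cluster.name: logging-prod
node.name: prod-data-1
network.host: 192.168.1.10
discovery.seed_hosts: ["192.168.1.10", "192.168.1.11"]
cluster.initial_master_nodes: ["prod-data-1"]
----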
@@ -1,14 +1,15 @@
-[[cluster.name]]
-=== `cluster.name`
+[[cluster-name]]
+[discrete]
+=== Cluster name setting
 
 A node can only join a cluster when it shares its `cluster.name` with all the
 other nodes in the cluster. The default name is `elasticsearch`, but you should
-change it to an appropriate name which describes the purpose of the cluster.
+change it to an appropriate name that describes the purpose of the cluster.
 
 [source,yaml]
 --------------------------------------------------
 cluster.name: logging-prod
 --------------------------------------------------
 
-Make sure that you don't reuse the same cluster names in different environments,
-otherwise you might end up with nodes joining the wrong cluster.
+IMPORTANT: Do not reuse the same cluster names in different environments.
+Otherwise, nodes might join the wrong cluster.
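As an illustration of the IMPORTANT note above, each environment gets its own name (hypothetical values):

[source,yaml]
----
# Production cluster
cluster.name: logging-prod

# The development cluster must use a different name, for example:
# cluster.name: logging-dev
----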
@@ -1,29 +1,28 @@
 [[discovery-settings]]
-=== Important discovery and cluster formation settings
-++++
-<titleabbrev>Discovery and cluster formation settings</titleabbrev>
-++++
+[discrete]
+=== Discovery and cluster formation settings
 
-There are two important discovery and cluster formation settings that should be
-configured before going to production so that nodes in the cluster can discover
-each other and elect a master node.
+Configure two important discovery and cluster formation settings before going
+to production so that nodes in the cluster can discover each other and elect a
+master node.
 
 [discrete]
 [[unicast.hosts]]
 ==== `discovery.seed_hosts`
 
-Out of the box, without any network configuration, Elasticsearch will bind to
-the available loopback addresses and will scan local ports 9300 to 9305 to try
-to connect to other nodes running on the same server. This provides an
+Out of the box, without any network configuration, {es} will bind to
+the available loopback addresses and scan local ports `9300` to `9305` to
+connect with other nodes running on the same server. This behavior provides an
 auto-clustering experience without having to do any configuration.
 
-When you want to form a cluster with nodes on other hosts, you should use the
-<<static-cluster-setting, static>> `discovery.seed_hosts` setting to provide a list of other nodes in the cluster
-that are master-eligible and likely to be live and contactable in order to seed
-the <<modules-discovery-hosts-providers,discovery process>>. This setting value
-should be a YAML sequence or array of the addresses of all the master-eligible
+When you want to form a cluster with nodes on other hosts, use the
+<<static-cluster-setting, static>> `discovery.seed_hosts` setting. This setting
+provides a list of other nodes in the cluster
+that are master-eligible and likely to be live and contactable to seed
+the <<modules-discovery-hosts-providers,discovery process>>. This setting
+accepts a YAML sequence or array of the addresses of all the master-eligible
 nodes in the cluster. Each address can be either an IP address or a hostname
-which resolves to one or more IP addresses via DNS.
+that resolves to one or more IP addresses via DNS.
 
 [source,yaml]
 ----
@@ -33,9 +32,9 @@ discovery.seed_hosts:
 - seeds.mydomain.com <2>
 - [0:0:0:0:0:ffff:c0a8:10c]:9301 <3>
 ----
-<1> The port is optional and usually defaults to `9300`, but this default can
-be <<built-in-hosts-providers,overridden>> by certain settings.
-<2> If a hostname resolves to multiple IP addresses then the node will attempt to
+<1> The port is optional and defaults to `9300`, but can
+be <<built-in-hosts-providers,overridden>>.
+<2> If a hostname resolves to multiple IP addresses, the node will attempt to
 discover other nodes at all resolved addresses.
 <3> IPv6 addresses must be enclosed in square brackets.
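Since `discovery.seed_hosts` accepts either a YAML sequence or an array, the list above could equally be written inline (hypothetical addresses):

[source,yaml]
----
# Inline-array form of the seed hosts list
discovery.seed_hosts: ["192.168.1.10:9300", "192.168.1.11", "seeds.mydomain.com"]
----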
@@ -47,20 +46,22 @@ dynamically.
 [[initial_master_nodes]]
 ==== `cluster.initial_master_nodes`
 
-When you start a brand new Elasticsearch cluster for the very first time, there
-is a <<modules-discovery-bootstrap-cluster,cluster bootstrapping>> step, which
-determines the set of master-eligible nodes whose votes are counted in the very
+When you start an {es} cluster for the first time, a
+<<modules-discovery-bootstrap-cluster,cluster bootstrapping>> step
+determines the set of master-eligible nodes whose votes are counted in the
 first election. In <<dev-vs-prod-mode,development mode>>, with no discovery
-settings configured, this step is automatically performed by the nodes
-themselves. As this auto-bootstrapping is <<modules-discovery-quorums,inherently
-unsafe>>, when you start a brand new cluster in <<dev-vs-prod-mode,production
-mode>>, you must explicitly list the master-eligible nodes whose votes should be
-counted in the very first election. This list is set using the
-`cluster.initial_master_nodes` setting.
+settings configured, this step is performed automatically by the nodes
+themselves.
 
-NOTE: You should remove `cluster.initial_master_nodes` setting from the nodes' configuration
-*once the cluster has successfully formed for the first time*. Do not use this setting when
-restarting a cluster or adding a new node to an existing cluster.
+Because auto-bootstrapping is <<modules-discovery-quorums,inherently
+unsafe>>, when starting a new cluster in production
+mode, you must explicitly list the master-eligible nodes whose votes should be
+counted in the very first election. You set this list using the
+`cluster.initial_master_nodes` setting.
+
+IMPORTANT: After the cluster forms successfully for the first time, remove the `cluster.initial_master_nodes` setting from each node's
+configuration. Do not use this setting when
+restarting a cluster or adding a new node to an existing cluster.
 
 [source,yaml]
 --------------------------------------------------
@@ -74,14 +75,13 @@ cluster.initial_master_nodes: <1>
 - master-node-b
 - master-node-c
 --------------------------------------------------
-<1> The initial master nodes should be identified by their
-<<node.name,`node.name`>>, which defaults to their hostname. Make sure that
-the value in `cluster.initial_master_nodes` matches the `node.name`
-exactly. If you use a fully-qualified domain name such as
-`master-node-a.example.com` for your node names then you must use the
-fully-qualified name in this list; conversely if `node.name` is a bare
-hostname without any trailing qualifiers then you must also omit the
-trailing qualifiers in `cluster.initial_master_nodes`.
+<1> Identify the initial master nodes by their <<node-name,`node.name`>>, which
+defaults to their hostname. Ensure that the value in
+`cluster.initial_master_nodes` matches the `node.name` exactly. If you use a
+fully-qualified domain name (FQDN) such as `master-node-a.example.com` for your
+node names, then you must use the FQDN in this list. Conversely, if `node.name`
+is a bare hostname without any trailing qualifiers, you must also omit the
+trailing qualifiers in `cluster.initial_master_nodes`.
 
-For more information, see <<modules-discovery-bootstrap-cluster>> and
-<<modules-discovery-settings>>.
+See <<modules-discovery-bootstrap-cluster,bootstrapping a cluster>> and
+<<modules-discovery-settings,discovery and cluster formation settings>>.
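To illustrate the FQDN rule described in callout <1> above, a sketch with hypothetical fully-qualified node names:

[source,yaml]
----
# node.name is an FQDN, so the bootstrap list must use the same FQDNs
node.name: master-node-a.example.com
cluster.initial_master_nodes:
   - master-node-a.example.com
   - master-node-b.example.com
   - master-node-c.example.com
----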
@@ -1,12 +1,12 @@
 [[error-file-path]]
-=== JVM fatal error logs
+[discrete]
+=== JVM fatal error log setting
 
-By default, Elasticsearch configures the JVM to write fatal error logs
-to the default logging directory (this is `/var/log/elasticsearch` for
-the <<rpm,RPM>> and <<deb,Debian>> package distributions, and the `logs`
-directory under the root of the Elasticsearch installation for the
-<<targz,tar>> and <<zip-windows,zip>> archive distributions). These are logs
-produced by the JVM when it encounters a fatal error (e.g., a
-segmentation fault). If this path is not suitable for receiving logs,
-you should modify the entry `-XX:ErrorFile=...` in
-<<jvm-options,`jvm.options`>> to an alternate path.
+By default, {es} configures the JVM to write fatal error logs
+to the default logging directory. On <<rpm,RPM>> and <<deb,Debian>> packages,
+this directory is `/var/log/elasticsearch`. On <<targz,Linux and MacOS>> and <<zip-windows,Windows>> distributions, the `logs`
+directory is located under the root of the {es} installation.
+
+These are logs produced by the JVM when it encounters a fatal error, such as a
+segmentation fault. If this path is not suitable for receiving logs,
+modify the `-XX:ErrorFile=...` entry in <<jvm-options,`jvm.options`>>.
@@ -1,23 +1,25 @@
 [[es-tmpdir]]
-=== Temp directory
+[discrete]
+=== Temporary directory settings
 
-By default, Elasticsearch uses a private temporary directory that the startup
+By default, {es} uses a private temporary directory that the startup
 script creates immediately below the system temporary directory.
 
-On some Linux distributions a system utility will clean files and directories
-from `/tmp` if they have not been recently accessed. This can lead to the
-private temporary directory being removed while Elasticsearch is running if
+On some Linux distributions, a system utility will clean files and directories
+from `/tmp` if they have not been recently accessed. This behavior can lead to
+the private temporary directory being removed while {es} is running if
 features that require the temporary directory are not used for a long time.
-This causes problems if a feature that requires the temporary directory is
-subsequently used.
+Removing the private temporary directory causes problems if a feature that
+requires this directory is subsequently used.
 
-If you install Elasticsearch using the `.deb` or `.rpm` packages and run it
-under `systemd` then the private temporary directory that Elasticsearch uses
+If you install {es} using the `.deb` or `.rpm` packages and run it
+under `systemd`, the private temporary directory that {es} uses
 is excluded from periodic cleanup.
 
-However, if you intend to run the `.tar.gz` distribution on Linux for an
-extended period then you should consider creating a dedicated temporary
-directory for Elasticsearch that is not under a path that will have old files
+If you intend to run the `.tar.gz` distribution on Linux or MacOS for
+an extended period, consider creating a dedicated temporary
+directory for {es} that is not under a path that will have old files
 and directories cleaned from it. This directory should have permissions set
-so that only the user that Elasticsearch runs as can access it. Then set the
-`$ES_TMPDIR` environment variable to point to it before starting Elasticsearch.
+so that only the user that {es} runs as can access it. Then, set the
+`$ES_TMPDIR` environment variable to point to this directory before starting
+{es}.
@@ -1,7 +1,8 @@
 [[gc-logging]]
-=== GC logging
+[discrete]
+=== GC logging settings
 
-By default, {es} enables GC logs. These are configured in
+By default, {es} enables garbage collection (GC) logs. These are configured in
 <<jvm-options,`jvm.options`>> and output to the same default location as
 the {es} logs. The default configuration rotates the logs every 64 MB and
 can consume up to 2 GB of disk space.
@@ -19,14 +20,16 @@ To see further options not contained in the original JEP, see
 https://docs.oracle.com/en/java/javase/13/docs/specs/man/java.html#enable-logging-with-the-jvm-unified-logging-framework[Enable
 Logging with the JVM Unified Logging Framework].
 
+[[gc-logging-examples]]
+[discrete]
+==== Examples
 
-* Change the default GC log output location to `/opt/my-app/gc.log` by
+Change the default GC log output location to `/opt/my-app/gc.log` by
 creating `$ES_HOME/config/jvm.options.d/gc.options` with some sample
 options:
-+
 
 [source,shell]
---------------------------------------------
+----
 # Turn off all previous logging configurations
 -Xlog:disable
@@ -35,15 +38,15 @@ Logging with the JVM Unified Logging Framework].
 
 # Enable GC logging to a custom location with a variety of options
 -Xlog:gc*,gc+age=trace,safepoint:file=/opt/my-app/gc.log:utctime,pid,tags:filecount=32,filesize=64m
---------------------------------------------
+----
 
-* Configure an {es} <<docker,Docker container>> to send GC debug logs to
+Configure an {es} <<docker,Docker container>> to send GC debug logs to
 standard error (`stderr`). This lets the container orchestrator
 handle the output. If using the `ES_JAVA_OPTS` environment variable,
 specify:
-+
 
 [source,sh]
---------------------------------------------
+----
 MY_OPTS="-Xlog:disable -Xlog:all=warning:stderr:utctime,level,tags -Xlog:gc=debug:stderr:utctime"
 docker run -e ES_JAVA_OPTS="$MY_OPTS" # etc
---------------------------------------------
+----
@@ -1,15 +1,18 @@
 [[heap-dump-path]]
-=== JVM heap dump path
+[discrete]
+=== JVM heap dump path setting
 
-By default, Elasticsearch configures the JVM to dump the heap on out of
-memory exceptions to the default data directory (this is
-`/var/lib/elasticsearch` for the <<rpm,RPM>> and <<deb,Debian>> package
-distributions, and the `data` directory under the root of the
-Elasticsearch installation for the <<targz,tar>> and <<zip-windows,zip>> archive
-distributions). If this path is not suitable for receiving heap dumps,
-you should modify the entry `-XX:HeapDumpPath=...` in
-<<jvm-options,`jvm.options`>>. If you specify a directory, the JVM
-will generate a filename for the heap dump based on the PID of the running
-instance. If you specify a fixed filename instead of a directory, the file must
+By default, {es} configures the JVM to dump the heap on out of
+memory exceptions to the default data directory. On <<rpm,RPM>> and
+<<deb,Debian>> packages, the data directory is `/var/lib/elasticsearch`. On
+<<targz,Linux and MacOS>> and <<zip-windows,Windows>> distributions,
+the `data` directory is located under the root of the {es} installation.
+
+If this path is not suitable for receiving heap dumps, modify the
+`-XX:HeapDumpPath=...` entry in <<jvm-options,`jvm.options`>>:
+
+* If you specify a directory, the JVM will generate a filename for the heap
+dump based on the PID of the running instance.
+* If you specify a fixed filename instead of a directory, the file must
 not exist when the JVM needs to perform a heap dump on an out of memory
-exception, otherwise the heap dump will fail.
+exception. Otherwise, the heap dump will fail.
@@ -1,13 +1,14 @@
 [[heap-size]]
-=== Setting the heap size
+[discrete]
+=== Heap size settings
 
-By default, Elasticsearch tells the JVM to use a heap with a minimum and maximum
+By default, {es} tells the JVM to use a heap with a minimum and maximum
 size of 1 GB. When moving to production, it is important to configure heap size
-to ensure that Elasticsearch has enough heap available.
+to ensure that {es} has enough heap available.
 
-Elasticsearch will assign the entire heap specified in
+{es} will assign the entire heap specified in
 <<jvm-options,jvm.options>> via the `Xms` (minimum heap size) and `Xmx` (maximum
-heap size) settings. You should set these two settings to be equal to each
+heap size) settings. You should set these two settings to equal each
 other.
 
 The value for these settings depends on the amount of RAM available on your
@@ -22,27 +23,33 @@ server:
 configured with the `Xmx` setting.
 
 * Set `Xmx` and `Xms` to no more than the threshold that the JVM uses for
-compressed object pointers (compressed oops); the exact threshold varies but
-is near 32 GB. You can verify that you are under the threshold by looking for a
-line in the logs like the following:
+compressed object pointers (compressed oops). The exact threshold varies but
+is near 32 GB. You can verify that you are under the threshold by looking for a line in the logs like the following:
 +
-heap size [1.9gb], compressed ordinary object pointers [true]
+[source,txt]
+----
+heap size [1.9gb], compressed ordinary object pointers [true]
+----
 
-* Ideally set `Xmx` and `Xms` to no more than the threshold for zero-based
-compressed oops; the exact threshold varies but 26 GB is safe on most
-systems, but can be as large as 30 GB on some systems. You can verify that
+* Set `Xmx` and `Xms` to no more than the threshold for zero-based
+compressed oops. The exact threshold varies but 26 GB is safe on most
+systems and can be as large as 30 GB on some systems. You can verify that
 you are under this threshold by starting {es} with the JVM options
 `-XX:+UnlockDiagnosticVMOptions -XX:+PrintCompressedOopsMode` and looking for
 a line like the following:
 +
---
-heap address: 0x000000011be00000, size: 27648 MB, zero based Compressed Oops
-
-showing that zero-based compressed oops are enabled. If zero-based compressed
-oops are not enabled then you will see a line like the following instead:
-
-heap address: 0x0000000118400000, size: 28672 MB, Compressed Oops with base: 0x00000001183ff000
---
+[source,txt]
+----
+heap address: 0x000000011be00000, size: 27648 MB, zero based Compressed Oops
+----
++
+This line shows that zero-based compressed oops are enabled. If zero-based
+compressed oops are not enabled, you'll see a line like the following instead:
++
+[source,txt]
+----
+heap address: 0x0000000118400000, size: 28672 MB, Compressed Oops with base: 0x00000001183ff000
+----
 
 The more heap available to {es}, the more memory it can use for its internal
 caches, but the less memory it leaves available for the operating system to use
@@ -59,8 +66,7 @@ Here is an example of how to set the heap size via a `jvm.options.d/` file:
 <1> Set the minimum heap size to 2g.
 <2> Set the maximum heap size to 2g.
 
-It is also possible to set the heap size via an environment variable. This can
-be done by setting these values via `ES_JAVA_OPTS`:
+You can set the heap size using the `ES_JAVA_OPTS` environment variable:
 
 [source,sh]
 ------------------
@@ -1,15 +1,17 @@
 [[network.host]]
-=== `network.host`
+[discrete]
+=== Network host setting
 
-By default, Elasticsearch binds to loopback addresses only -- e.g. `127.0.0.1`
-and `[::1]`. This is sufficient to run a single development node on a server.
+By default, {es} binds to loopback addresses only such as `127.0.0.1`
+and `[::1]`. This binding is sufficient to run a single development node on a
+server.
 
-TIP: In fact, more than one node can be started from the same `$ES_HOME`
-location on a single node. This can be useful for testing Elasticsearch's
+TIP: more than one node can be started from the same `$ES_HOME`
+location on a single node. This setup can be useful for testing {es}'s
 ability to form clusters, but it is not a configuration recommended for
 production.
 
-In order to form a cluster with nodes on other servers, your
+To form a cluster with nodes on other servers, your
 node will need to bind to a non-loopback address. While there are many
 <<modules-network,network settings>>, usually all you need to configure is
 `network.host`:
@@ -20,10 +22,10 @@ network.host: 192.168.1.10
 --------------------------------------------------
 
 The `network.host` setting also understands some special values such as
-`_local_`, `_site_`, `_global_` and modifiers like `:ip4` and `:ip6`, details of
-which can be found in <<network-interface-values>>.
+`_local_`, `_site_`, `_global_` and modifiers like `:ip4` and `:ip6`. See
+<<network-interface-values>>.
 
-IMPORTANT: As soon as you provide a custom setting for `network.host`,
-Elasticsearch assumes that you are moving from development mode to production
+IMPORTANT: When you provide a custom setting for `network.host`,
+{es} assumes that you are moving from development mode to production
 mode, and upgrades a number of system startup checks from warnings to
-exceptions. See <<dev-vs-prod>> for more information.
+exceptions. See the differences between <<dev-vs-prod,development and production modes>>.
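As an illustration of the special values mentioned above, binding to a site-local address instead of a fixed IP:

[source,yaml]
----
# Bind to a site-local (private network) address
network.host: _site_
----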
@@ -1,11 +1,12 @@
-[[node.name]]
-=== `node.name`
+[[node-name]]
+[discrete]
+=== Node name setting
 
-Elasticsearch uses `node.name` as a human readable identifier for a
-particular instance of Elasticsearch so it is included in the response
-of many APIs. It defaults to the hostname that the machine has when
-Elasticsearch starts but can be configured explicitly in
-`elasticsearch.yml` as follows:
+{es} uses `node.name` as a human-readable identifier for a
+particular instance of {es}. This name is included in the response
+of many APIs. The node name defaults to the hostname of the machine when
+{es} starts, but can be configured explicitly in
+`elasticsearch.yml`:
 
 [source,yaml]
 --------------------------------------------------
@@ -1,13 +1,14 @@
 [[path-settings]]
-=== `path.data` and `path.logs`
+[discrete]
+=== Path settings
 
 If you are using the `.zip` or `.tar.gz` archives, the `data` and `logs`
-directories are sub-folders of `$ES_HOME`. If these important folders are left
+directories are sub-folders of `$ES_HOME`. If these important folders are left
 in their default locations, there is a high risk of them being deleted while
-upgrading Elasticsearch to a new version.
+upgrading {es} to a new version.
 
 In production use, you will almost certainly want to change the locations of the
-data and log folder:
+`path.data` and `path.logs` folders:
 
 [source,yaml]
 --------------------------------------------------
|
|||
The RPM and Debian distributions already use custom paths for `data` and `logs`.
|
||||
|
||||
The `path.data` settings can be set to multiple paths, in which case all paths
|
||||
will be used to store data (although the files belonging to a single shard will
|
||||
all be stored on the same data path):
|
||||
will be used to store data. However, the files belonging to a single shard will
|
||||
all be stored on the same data path:
|
||||
|
||||
[source,yaml]
|
||||
--------------------------------------------------
|
||||
|
@@ -29,4 +30,4 @@ path:
 - /mnt/elasticsearch_1
 - /mnt/elasticsearch_2
 - /mnt/elasticsearch_3
---------------------------------------------------
+--------------------------------------------------
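For comparison with the multi-path example above, a single-path sketch with hypothetical locations outside `$ES_HOME`:

[source,yaml]
----
# Keep data and logs outside $ES_HOME so an upgrade cannot remove them
path:
  data: /var/data/elasticsearch
  logs: /var/log/elasticsearch
----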
@@ -88,7 +88,7 @@ The key is the ID of the node.
 (string)
 Human-readable name for the node.
 +
-You can set this name using the <<node.name,`node.name`>> property in
+You can set this name using the <<node-name,`node.name`>> property in
 `elasticsearch.yml`. Defaults to the machine's hostname.
-=====
+====
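As an illustration of the `node.name` property referenced above (hypothetical value):

[source,yaml]
----
# Explicit node name; if unset, the node name defaults to the machine's hostname
node.name: prod-data-2
----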
@@ -10,12 +10,12 @@ IMPORTANT: When you enable {es} {security-features}, unless you have a trial
 license, you must use Transport Layer Security (TLS) to encrypt internode
 communication. By following the steps in this tutorial, you learn how
 to meet the minimum requirements to pass the
-{ref}/bootstrap-checks-xpack.html#bootstrap-checks-tls[TLS bootstrap check].
+<<bootstrap-checks-tls,TLS bootstrap check>>.
 
 . (Optional) Name the cluster.
 +
 --
-For example, add the {ref}/cluster.name.html[cluster.name] setting in the
+For example, add the <<cluster-name,cluster name>> setting in the
 `ES_PATH_CONF/elasticsearch.yml` file:
 
 [source,yaml]
@@ -36,7 +36,7 @@ however, to ensure that your nodes join the right cluster.
 . (Optional) Name the {es} node.
 +
 --
-For example, add the {ref}/node.name.html[node.name] setting in the
+For example, add the <<node-name,node name>> setting in the
 `ES_PATH_CONF/elasticsearch.yml` file:
 
 [source,yaml]
@@ -80,8 +80,8 @@ TIP: If you are starting a cluster with multiple master-eligible nodes for the
 first time, add all of those node names to the `cluster.initial_master_nodes`
 setting.
 
-See {ref}/modules-discovery-bootstrap-cluster.html[Bootstrapping a cluster] and
-{ref}/discovery-settings.html[Important discovery and cluster formation settings].
+See <<modules-discovery-bootstrap-cluster,bootstrapping a cluster>> and
+<<discovery-settings,discovery and cluster formation settings>>.
 --
 
 . Enable Transport Layer Security (TLS/SSL) for transport (internode)
@@ -108,8 +108,8 @@ used as both a keystore and a truststore. If you use other tools to manage and
 generate your certificates, you might have different values for these settings,
 but that scenario is not covered in this tutorial.
 
-For more information, see <<get-started-enable-security>> and
-{ref}/security-settings.html#transport-tls-ssl-settings[Transport TLS settings].
+For more information, see <<get-started-enable-security,enable {es} security features>> and
+<<transport-tls-ssl-settings,transport TLS settings>>.
 --
 
 . Store the password for the PKCS#12 file in the {es} keystore.
@@ -134,7 +134,7 @@ file. We are using this file for both the transport TLS keystore and truststore,
 therefore supply the same password for both of these settings.
 --
 
-. {ref}/starting-elasticsearch.html[Start {es}].
+. <<starting-elasticsearch,Start {es}>>.
 +
 --
 For example, if you installed {es} with a `.tar.gz` package, run the following