[DOCS] Relocate discovery module content (#56611) (#57454)

* Moves `Discovery and cluster formation` content from `Modules` to
`Set up Elasticsearch`.

* Combines `Adding and removing nodes` with `Adding nodes to your
  cluster`. Adds related redirect.

* Removes and redirects the `Modules` page.

* Rewrites parts of `Discovery and cluster formation` to remove `module`
  references and meta references to the section.
James Rodewig 2020-06-01 14:13:13 -04:00 committed by GitHub
parent 6ccdceec79
commit cde5b7d2b3
8 changed files with 188 additions and 177 deletions


@@ -32,8 +32,6 @@ include::mapping.asciidoc[]
include::analysis.asciidoc[]
include::modules.asciidoc[]
include::index-modules.asciidoc[]
include::ingest.asciidoc[]


@@ -1,27 +0,0 @@
[[modules]]
= Modules
[partintro]
--
This section contains modules responsible for various aspects of the functionality in Elasticsearch. Each module has settings which may be:
_static_::
These settings must be set at the node level, either in the
`elasticsearch.yml` file, or as an environment variable or on the command line
when starting a node. They must be set on every relevant node in the cluster.
_dynamic_::
These settings can be dynamically updated on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.
The modules in this section are:
<<modules-discovery,Discovery and cluster formation>>::
How nodes discover each other, elect a master and form a cluster.
--
include::modules/discovery.asciidoc[]


@@ -1,11 +1,13 @@
[[modules-discovery]]
== Discovery and cluster formation
The discovery and cluster formation module is responsible for discovering
The discovery and cluster formation processes are responsible for discovering
nodes, electing a master, forming a cluster, and publishing the cluster state
each time it changes. It is integrated with other modules. For example, all
communication between nodes is done using the <<modules-transport,transport>>
module. This module is divided into the following sections:
each time it changes. All communication between nodes is done using the
<<modules-transport,transport>> layer.
The following processes and settings are part of discovery and cluster
formation:
<<modules-discovery-hosts-providers>>::
@@ -15,17 +17,17 @@ module. This module is divided into the following sections:
<<modules-discovery-quorums>>::
This section describes how {es} uses a quorum-based voting mechanism to
How {es} uses a quorum-based voting mechanism to
make decisions even if some nodes are unavailable.
<<modules-discovery-voting>>::
This section describes the concept of voting configurations, which {es}
automatically updates as nodes leave and join the cluster.
How {es} automatically updates voting configurations as nodes leave and join
a cluster.
<<modules-discovery-bootstrap-cluster>>::
Bootstrapping a cluster is required when an Elasticsearch cluster starts up
Bootstrapping a cluster is required when an {es} cluster starts up
for the very first time. In <<dev-vs-prod-mode,development mode>>, with no
discovery settings configured, this is automatically performed by the nodes
themselves. As this auto-bootstrapping is
@@ -67,10 +69,6 @@ include::discovery/voting.asciidoc[]
include::discovery/bootstrapping.asciidoc[]
include::discovery/adding-removing-nodes.asciidoc[]
include::discovery/publishing.asciidoc[]
include::discovery/fault-detection.asciidoc[]
include::discovery/discovery-settings.asciidoc[]


@@ -1,134 +0,0 @@
[[modules-discovery-adding-removing-nodes]]
=== Adding and removing nodes
As nodes are added or removed Elasticsearch maintains an optimal level of fault
tolerance by automatically updating the cluster's _voting configuration_, which
is the set of <<master-node,master-eligible nodes>> whose responses are counted
when making decisions such as electing a new master or committing a new cluster
state.
It is recommended to have a small and fixed number of master-eligible nodes in a
cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However there are situations in which it may be
desirable to add or remove some master-eligible nodes to or from a cluster.
[[modules-discovery-adding-nodes]]
==== Adding master-eligible nodes
If you wish to add some nodes to your cluster, simply configure the new nodes
to find the existing cluster and start them up. Elasticsearch adds the new nodes
to the voting configuration if it is appropriate to do so.
During master election or when joining an existing formed cluster, a node
sends a join request to the master in order to be officially added to the
cluster. You can use the `cluster.join.timeout` setting to configure how long a
node waits after sending a request to join a cluster. Its default value is `30s`.
See <<modules-discovery-settings>>.
[[modules-discovery-removing-nodes]]
==== Removing master-eligible nodes
When removing master-eligible nodes, it is important not to remove too many all
at the same time. For instance, if there are currently seven master-eligible
nodes and you wish to reduce this to three, it is not possible simply to stop
four of the nodes at once: to do so would leave only three nodes remaining,
which is less than half of the voting configuration, which means the cluster
cannot take any further actions.
More precisely, if you shut down half or more of the master-eligible nodes all
at the same time then the cluster will normally become unavailable. If this
happens then you can bring the cluster back online by starting the removed
nodes again.
As long as there are at least three master-eligible nodes in the cluster, as a
general rule it is best to remove nodes one-at-a-time, allowing enough time for
the cluster to <<modules-discovery-quorums,automatically adjust>> the voting
configuration and adapt the fault tolerance level to the new set of nodes.
If there are only two master-eligible nodes remaining then neither node can be
safely removed since both are required to reliably make progress. To remove one
of these nodes you must first inform {es} that it should not be part of the
voting configuration, and that the voting power should instead be given to the
other node. You can then take the excluded node offline without preventing the
other node from making progress. A node which is added to a voting
configuration exclusion list still works normally, but {es} tries to remove it
from the voting configuration so its vote is no longer required. Importantly,
{es} will never automatically move a node on the voting exclusions list back
into the voting configuration. Once an excluded node has been successfully
auto-reconfigured out of the voting configuration, it is safe to shut it down
without affecting the cluster's master-level availability. A node can be added
to the voting configuration exclusion list using the
<<voting-config-exclusions>> API. For example:
[source,console]
--------------------------------------------------
# Add node to voting configuration exclusions list and wait for the system
# to auto-reconfigure the node out of the voting configuration up to the
# default timeout of 30 seconds
POST /_cluster/voting_config_exclusions?node_names=node_name
# Add node to voting configuration exclusions list and wait for
# auto-reconfiguration up to one minute
POST /_cluster/voting_config_exclusions?node_names=node_name&timeout=1m
--------------------------------------------------
// TEST[skip:this would break the test cluster if executed]
The nodes that should be added to the exclusions list are specified by name
using the `?node_names` query parameter, or by their persistent node IDs using
the `?node_ids` query parameter. If a call to the voting configuration
exclusions API fails, you can safely retry it. Only a successful response
guarantees that the node has actually been removed from the voting configuration
and will not be reinstated. If the elected master node is excluded from the
voting configuration then it will abdicate to another master-eligible node that
is still in the voting configuration if such a node is available.
Although the voting configuration exclusions API is most useful for down-scaling
a two-node to a one-node cluster, it is also possible to use it to remove
multiple master-eligible nodes all at the same time. Adding multiple nodes to
the exclusions list has the system try to auto-reconfigure all of these nodes
out of the voting configuration, allowing them to be safely shut down while
keeping the cluster available. In the example described above, shrinking a
seven-master-node cluster down to only have three master nodes, you could add
four nodes to the exclusions list, wait for confirmation, and then shut them
down simultaneously.
NOTE: Voting exclusions are only required when removing at least half of the
master-eligible nodes from a cluster in a short time period. They are not
required when removing master-ineligible nodes, nor are they required when
removing fewer than half of the master-eligible nodes.
Adding an exclusion for a node creates an entry for that node in the voting
configuration exclusions list, which has the system automatically try to
reconfigure the voting configuration to remove that node and prevents it from
returning to the voting configuration once it has removed. The current list of
exclusions is stored in the cluster state and can be inspected as follows:
[source,console]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
--------------------------------------------------
This list is limited in size by the `cluster.max_voting_config_exclusions`
setting, which defaults to `10`. See <<modules-discovery-settings>>. Since
voting configuration exclusions are persistent and limited in number, they must
be cleaned up. Normally an exclusion is added when performing some maintenance
on the cluster, and the exclusions should be cleaned up when the maintenance is
complete. Clusters should have no voting configuration exclusions in normal
operation.
If a node is excluded from the voting configuration because it is to be shut
down permanently, its exclusion can be removed after it is shut down and removed
from the cluster. Exclusions can also be cleared if they were created in error
or were only required temporarily by specifying `?wait_for_removal=false`.
[source,console]
--------------------------------------------------
# Wait for all the nodes with voting configuration exclusions to be removed from
# the cluster and then remove all the exclusions, allowing any node to return to
# the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions
# Immediately remove all the voting configuration exclusions, allowing any node
# to return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
--------------------------------------------------


@@ -1,7 +1,8 @@
[[modules-discovery-settings]]
=== Discovery and cluster formation settings
Discovery and cluster formation are affected by the following settings:
<<modules-discovery,Discovery and cluster formation>> are affected by the
following settings:
`discovery.seed_hosts`::
+


@@ -753,6 +753,39 @@ See <<cluster-shard-allocation-filtering>>.
See <<misc-cluster-settings>>.
[role="exclude",id="modules"]
=== Modules
This page has been removed.
See <<settings,Configuring Elasticsearch>> for settings information:
* <<circuit-breaker>>
* <<modules-cluster>>
* <<modules-discovery-settings>>
* <<modules-fielddata>>
* <<modules-http>>
* <<recovery>>
* <<indexing-buffer>>
* <<modules-gateway>>
* <<modules-network>>
* <<query-cache>>
* <<search-settings>>
* <<shard-request-cache>>
For other information, see:
* <<modules-transport>>
* <<modules-threadpool>>
* <<modules-node>>
* <<modules-plugins>>
* <<modules-remote-clusters>>
[role="exclude",id="modules-discovery-adding-removing-nodes"]
=== Adding and removing nodes
See <<add-elasticsearch-nodes>>.
[role="exclude",id="_timing"]
=== Timing


@@ -53,6 +53,8 @@ include::modules/cluster.asciidoc[]
include::settings/ccr-settings.asciidoc[]
include::modules/discovery/discovery-settings.asciidoc[]
include::modules/indices/fielddata.asciidoc[]
include::modules/http.asciidoc[]
@@ -109,6 +111,8 @@ include::setup/starting.asciidoc[]
include::setup/stopping.asciidoc[]
include::modules/discovery.asciidoc[]
include::setup/add-nodes.asciidoc[]
include::setup/restart-cluster.asciidoc[]


@@ -1,5 +1,5 @@
[[add-elasticsearch-nodes]]
== Adding nodes to your cluster
== Add and remove nodes in your cluster
When you start an instance of {es}, you are starting a _node_. An {es} _cluster_
is a group of nodes that have the same `cluster.name` attribute. As nodes join
@@ -41,3 +41,141 @@ the rest of its cluster.
For more information about discovery and shard allocation, see
<<modules-discovery>> and <<modules-cluster>>.
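As a minimal sketch, a new node's `elasticsearch.yml` might contain the
following entries. The cluster name and host addresses are hypothetical;
`cluster.name` is discussed above and `discovery.seed_hosts` is described in
<<modules-discovery-settings>>.
[source,yaml]
--------------------------------------------------
# Must match the cluster.name of the cluster the node should join
cluster.name: my-cluster

# Addresses of existing master-eligible nodes that the new node contacts to
# discover the rest of the cluster
discovery.seed_hosts:
   - 192.168.1.10:9300
   - 192.168.1.11:9300
--------------------------------------------------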
[discrete]
[[add-elasticsearch-nodes-master-eligible]]
=== Master-eligible nodes
As nodes are added or removed, {es} maintains an optimal level of fault
tolerance by automatically updating the cluster's _voting configuration_. The
voting configuration is the set of <<master-node,master-eligible nodes>> whose
responses are counted when making decisions such as electing a new master or
committing a new cluster state.
It is recommended to have a small and fixed number of master-eligible nodes in a
cluster, and to scale the cluster up and down by adding and removing
master-ineligible nodes only. However, there are situations in which it may be
desirable to add or remove master-eligible nodes.
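If you want to check which nodes are currently in the voting configuration
before making changes, one way (shown here as an illustrative sketch) is to
filter the cluster state for the cluster coordination metadata:
[source,console]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
--------------------------------------------------
The response lists the node IDs of the master-eligible nodes whose votes are
currently counted.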
[discrete]
[[modules-discovery-adding-nodes]]
==== Adding master-eligible nodes
To add master-eligible nodes to your cluster, configure the new nodes to find
the existing cluster and start them up. {es} adds the new nodes to the voting
configuration if it is appropriate to do so.
During master election or when joining an existing formed cluster, a node
sends a join request to the master in order to be officially added to the
cluster. You can use the `cluster.join.timeout` setting to configure how long a
node waits after sending a request to join a cluster. Its default value is `30s`.
See <<modules-discovery-settings>>.
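For example, if new nodes join over a slow network, the join timeout could be
raised in the new node's `elasticsearch.yml`; the value below is illustrative
only:
[source,yaml]
--------------------------------------------------
# Wait up to 60 seconds (instead of the default 30s) for a join request to
# be accepted before retrying
cluster.join.timeout: 60s
--------------------------------------------------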
[discrete]
[[modules-discovery-removing-nodes]]
==== Removing master-eligible nodes
When removing master-eligible nodes, it is important not to remove too many at
the same time. For instance, if there are currently seven master-eligible nodes
and you wish to reduce this to three, you cannot simply stop four of the nodes
at once: doing so would leave only three nodes, which is fewer than half of the
voting configuration, so the cluster could not take any further actions.
More precisely, if you shut down half or more of the master-eligible nodes at
the same time, the cluster will normally become unavailable. If this happens,
you can bring the cluster back online by starting the removed nodes again.
As long as there are at least three master-eligible nodes in the cluster, as a
general rule it is best to remove nodes one at a time, allowing enough time for
the cluster to <<modules-discovery-quorums,automatically adjust>> the voting
configuration and adapt the fault tolerance level to the new set of nodes.
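While removing nodes one at a time, it can be helpful to confirm which nodes
remain and which one is the elected master. A quick way to do this (purely as a
convenience check) is the cat nodes API:
[source,console]
--------------------------------------------------
GET /_cat/nodes?v&h=name,node.role,master
--------------------------------------------------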
If there are only two master-eligible nodes remaining then neither node can be
safely removed since both are required to reliably make progress. To remove one
of these nodes you must first inform {es} that it should not be part of the
voting configuration, and that the voting power should instead be given to the
other node. You can then take the excluded node offline without preventing the
other node from making progress. A node which is added to a voting
configuration exclusion list still works normally, but {es} tries to remove it
from the voting configuration so its vote is no longer required. Importantly,
{es} will never automatically move a node on the voting exclusions list back
into the voting configuration. Once an excluded node has been successfully
auto-reconfigured out of the voting configuration, it is safe to shut it down
without affecting the cluster's master-level availability. A node can be added
to the voting configuration exclusion list using the
<<voting-config-exclusions>> API. For example:
[source,console]
--------------------------------------------------
# Add node to voting configuration exclusions list and wait for the system
# to auto-reconfigure the node out of the voting configuration up to the
# default timeout of 30 seconds
POST /_cluster/voting_config_exclusions?node_names=node_name
# Add node to voting configuration exclusions list and wait for
# auto-reconfiguration up to one minute
POST /_cluster/voting_config_exclusions?node_names=node_name&timeout=1m
--------------------------------------------------
// TEST[skip:this would break the test cluster if executed]
The nodes that should be added to the exclusions list are specified by name
using the `?node_names` query parameter, or by their persistent node IDs using
the `?node_ids` query parameter. If a call to the voting configuration
exclusions API fails, you can safely retry it. Only a successful response
guarantees that the node has actually been removed from the voting configuration
and will not be reinstated. If the elected master node is excluded from the
voting configuration then it will abdicate to another master-eligible node that
is still in the voting configuration if such a node is available.
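If you prefer to identify nodes by their persistent IDs rather than their
names, the same kind of request can use the `?node_ids` parameter instead; the
ID below is a placeholder:
[source,console]
--------------------------------------------------
# Add a node to the voting configuration exclusions list by its persistent
# node ID rather than by name
POST /_cluster/voting_config_exclusions?node_ids=node_id
--------------------------------------------------
// TEST[skip:this would break the test cluster if executed]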
Although the voting configuration exclusions API is most useful for scaling
down from a two-node cluster to a one-node cluster, you can also use it to
remove multiple master-eligible nodes at the same time. Adding multiple nodes to
the exclusions list causes the system to try to auto-reconfigure all of these
nodes out of the voting configuration, allowing them to be safely shut down
while keeping the cluster available. In the example above, to shrink a cluster
from seven master-eligible nodes down to three, you could add four nodes to the
exclusions list, wait for confirmation, and then shut them down simultaneously.
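For instance, a sketch of that seven-to-three scenario, using hypothetical node
names, could exclude the four departing nodes in a single request:
[source,console]
--------------------------------------------------
# Exclude four master-eligible nodes at once so that the remaining three
# keep a working voting configuration; node names are illustrative
POST /_cluster/voting_config_exclusions?node_names=node-4,node-5,node-6,node-7&timeout=1m
--------------------------------------------------
// TEST[skip:this would break the test cluster if executed]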
NOTE: Voting exclusions are only required when removing at least half of the
master-eligible nodes from a cluster in a short time period. They are not
required when removing master-ineligible nodes, nor are they required when
removing fewer than half of the master-eligible nodes.
Adding an exclusion for a node creates an entry for that node in the voting
configuration exclusions list, which causes the system to automatically try to
reconfigure the voting configuration to remove that node and prevents it from
returning to the voting configuration once it has been removed. The current list of
exclusions is stored in the cluster state and can be inspected as follows:
[source,console]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.voting_config_exclusions
--------------------------------------------------
This list is limited in size by the `cluster.max_voting_config_exclusions`
setting, which defaults to `10`. See <<modules-discovery-settings>>. Since
voting configuration exclusions are persistent and limited in number, they must
be cleaned up. Normally an exclusion is added when performing some maintenance
on the cluster, and the exclusions should be cleaned up when the maintenance is
complete. Clusters should have no voting configuration exclusions in normal
operation.
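Assuming the `cluster.max_voting_config_exclusions` setting can be updated
dynamically as a cluster-wide setting, a sketch of raising the limit temporarily
for a large maintenance window (the value shown is arbitrary) might look like:
[source,console]
--------------------------------------------------
PUT /_cluster/settings
{
  "persistent": {
    "cluster.max_voting_config_exclusions": 20
  }
}
--------------------------------------------------
Any such override should be removed again once the maintenance is complete.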
If a node is excluded from the voting configuration because it is to be shut
down permanently, its exclusion can be removed after it is shut down and removed
from the cluster. Exclusions that were created in error, or that were only
required temporarily, can also be cleared by specifying `?wait_for_removal=false`.
[source,console]
--------------------------------------------------
# Wait for all the nodes with voting configuration exclusions to be removed from
# the cluster and then remove all the exclusions, allowing any node to return to
# the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions
# Immediately remove all the voting configuration exclusions, allowing any node
# to return to the voting configuration in the future.
DELETE /_cluster/voting_config_exclusions?wait_for_removal=false
--------------------------------------------------