[[node-tool]]
== elasticsearch-node

The `elasticsearch-node` command enables you to perform certain unsafe
operations on a node that are only possible while it is shut down. This command
allows you to adjust the <<modules-node,role>> of a node, unsafely edit cluster
settings, and may be able to recover some data after a disaster or start a node
even if it is incompatible with the data on disk.

[float]
=== Synopsis

[source,shell]
--------------------------------------------------
bin/elasticsearch-node repurpose|unsafe-bootstrap|detach-cluster|override-version|remove-settings|remove-customs
  [--ordinal <Integer>] [-E <KeyValuePair>]
  [-h, --help] ([-s, --silent] | [-v, --verbose])
--------------------------------------------------

[float]
=== Description

This tool has a number of modes:

* `elasticsearch-node repurpose` can be used to delete unwanted data from a
node if it used to be a <<data-node,data node>> or a
<<master-node,master-eligible node>> but has been repurposed not to have one
or other of these roles.

* `elasticsearch-node remove-settings` can be used to remove persistent settings
from the cluster state in case it contains incompatible settings that
prevent the cluster from forming.

* `elasticsearch-node remove-customs` can be used to remove custom metadata
from the cluster state in case it contains broken metadata that
prevents the cluster state from being loaded.

* `elasticsearch-node unsafe-bootstrap` can be used to perform _unsafe cluster
bootstrapping_. It forces one of the nodes to form a brand-new cluster on
its own, using its local copy of the cluster metadata.

* `elasticsearch-node detach-cluster` enables you to move nodes from one
cluster to another. This can be used to move nodes into a new cluster
created with the `elasticsearch-node unsafe-bootstrap` command. If unsafe
cluster bootstrapping was not possible, it also enables you to move nodes
into a brand-new cluster.

* `elasticsearch-node override-version` enables you to start up a node
even if the data in the data path was written by an incompatible version of
{es}. This may sometimes allow you to downgrade to an earlier version of
{es}.

[[node-tool-repurpose]]
[float]
==== Changing the role of a node

There may be situations where you want to repurpose a node without following
the <<change-node-role,proper repurposing processes>>. The `elasticsearch-node
repurpose` tool allows you to delete any excess on-disk data and start a node
after repurposing it.

The intended use is:

* Stop the node
* Update `elasticsearch.yml` by setting `node.roles` as desired (see the sketch below).
* Run `elasticsearch-node repurpose` on the node
* Start the node
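
For example, for a node that is being repurposed as a dedicated master node,
the `elasticsearch.yml` change might look like the following sketch; the role
list shown here is illustrative, so set whichever roles the node should
actually keep:

[source,yaml]
----
# elasticsearch.yml on the node being repurposed (illustrative role list)
node.roles: [ "master" ]
----
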
If you run `elasticsearch-node repurpose` on a node without the `data` role and
with the `master` role then it will delete any remaining shard data on that
node, but it will leave the index and cluster metadata alone. If you run
`elasticsearch-node repurpose` on a node without the `data` and `master` roles
then it will delete any remaining shard data and index metadata, but it will
leave the cluster metadata alone.

[WARNING]
Running this command can lead to data loss for the indices mentioned if the
data they contain is not available on other nodes in the cluster. Only run this
tool if you understand and accept the possible consequences, and only after
determining that the node cannot be repurposed cleanly.

The tool provides a summary of the data to be deleted and asks for confirmation
before making any changes. You can get detailed information about the affected
indices and shards by passing the verbose (`-v`) option.

[float]
==== Removing persistent cluster settings

There may be situations where a node contains persistent cluster
settings that prevent the cluster from forming. Since the cluster cannot form,
it is not possible to remove these settings using the
<<cluster-update-settings>> API.

The `elasticsearch-node remove-settings` tool allows you to forcefully remove
those persistent settings from the on-disk cluster state. The tool takes a
list of settings to remove as parameters, and also supports
wildcard patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-settings name-of-setting-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes
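
For example, the `remove-settings` invocation might look like one of the
following sketches; the setting names are placeholders for whatever
incompatible persistent settings your cluster state actually contains:

[source,shell]
----
# Remove a single persistent setting by its full name (placeholder name)
bin/elasticsearch-node remove-settings some.broken.setting

# Remove several settings at once using a wildcard pattern
bin/elasticsearch-node remove-settings some.broken.*
----
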
[float]
==== Removing custom metadata from the cluster state

There may be situations where a node contains custom metadata, typically
provided by plugins, that prevents the node from starting up and loading
the cluster from disk.

The `elasticsearch-node remove-customs` tool allows you to forcefully remove
the problematic custom metadata. The tool takes a list of custom metadata names
to remove as parameters, and also supports wildcard patterns.

The intended use is:

* Stop the node
* Run `elasticsearch-node remove-customs name-of-custom-to-remove` on the node
* Repeat for all other master-eligible nodes
* Start the nodes
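
For example (a sketch only; the custom metadata name is a placeholder for
whatever broken custom metadata is actually present on the node):

[source,shell]
----
# Remove one custom metadata entry by name, or match several with a wildcard
bin/elasticsearch-node remove-customs broken_plugin_custom
----
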
[float]
==== Recovering data after a disaster

Sometimes {es} nodes are temporarily stopped, perhaps because of the need to
perform some maintenance activity or perhaps because of a hardware failure.
After you resolve the temporary condition and restart the node,
it will rejoin the cluster and continue normally. Depending on your
configuration, your cluster may be able to remain completely available even
while one or more of its nodes are stopped.

Sometimes it might not be possible to restart a node after it has stopped. For
example, the node's host may suffer from a hardware problem that cannot be
repaired. If the cluster is still available then you can start up a fresh node
on another host and {es} will bring this node into the cluster in place of the
failed node.

Each node stores its data in the data directories defined by the
<<path-settings,`path.data` setting>>. This means that in a disaster you can
also restart a node by moving its data directories to another host, presuming
that those data directories can be recovered from the faulty host.

{es} <<modules-discovery-quorums,requires a response from a majority of the
master-eligible nodes>> in order to elect a master and to update the cluster
state. This means that if you have three master-eligible nodes then the cluster
will remain available even if one of them has failed. However if two of the
three master-eligible nodes fail then the cluster will be unavailable until at
least one of them is restarted.

In very rare circumstances it may not be possible to restart enough nodes to
restore the cluster's availability. If such a disaster occurs, you should
build a new cluster from a recent snapshot and re-import any data that was
ingested since that snapshot was taken.

However, if the disaster is serious enough then it may not be possible to
recover from a recent snapshot either. Unfortunately in this case there is no
way forward that does not risk data loss, but it may be possible to use the
`elasticsearch-node` tool to construct a new cluster that contains some of the
data from the failed cluster.

[[node-tool-override-version]]
[float]
==== Bypassing version checks

The data that {es} writes to disk is designed to be read by the current version
and a limited set of future versions. It cannot generally be read by older
versions, nor by versions that are more than one major version newer. The data
stored on disk includes the version of the node that wrote it, and {es} checks
that it is compatible with this version when starting up.

In rare circumstances it may be desirable to bypass this check and start up an
{es} node using data that was written by an incompatible version. This may not
work if the format of the stored data has changed, and it is a risky process
because it is possible for the format to change in ways that {es} may
misinterpret, silently leading to data loss.

To bypass this check, you can use the `elasticsearch-node override-version`
tool to overwrite the version number stored in the data path with the current
version, causing {es} to believe that it is compatible with the on-disk data.
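
The invocation itself takes no additional arguments; as with the other modes,
run it only while {es} is stopped on that node. A minimal sketch:

[source,shell]
----
bin/elasticsearch-node override-version
----
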
[[node-tool-unsafe-bootstrap]]
[float]
===== Unsafe cluster bootstrapping

If there is at least one remaining master-eligible node, but it is not possible
to restart a majority of them, then the `elasticsearch-node unsafe-bootstrap`
command will unsafely override the cluster's <<modules-discovery-voting,voting
configuration>> as if performing another
<<modules-discovery-bootstrap-cluster,cluster bootstrapping process>>.
The target node can then form a new cluster on its own by using
the cluster metadata it holds locally.

[WARNING]
These steps can lead to arbitrary data loss since the target node may not hold the latest cluster
metadata, and this out-of-date metadata may make it impossible to use some or
all of the indices in the cluster.

Since unsafe bootstrapping forms a new cluster containing a single node, once
you have run it you must use the <<node-tool-detach-cluster,`elasticsearch-node
detach-cluster` tool>> to migrate any other surviving nodes from the failed
cluster into this new cluster.

When you run the `elasticsearch-node unsafe-bootstrap` tool it will analyse the
state of the node and ask for confirmation before taking any action. Before
asking for confirmation it reports the term and version of the cluster state on
the node on which it runs as follows:

[source,txt]
----
Current node cluster state (term, version) pair is (4, 12)
----

If you have a choice of nodes on which to run this tool then you should choose
one with a term that is as large as possible. If there is more than one
node with the same term, pick the one with the largest version.
This information identifies the node with the freshest cluster state, which minimizes the
quantity of data that might be lost. For example, if the first node reports
`(4, 12)` and a second node reports `(5, 3)`, then the second node is preferred
since its term is larger. However if the second node reports `(3, 17)` then
the first node is preferred since its term is larger. If the second node
reports `(4, 10)` then it has the same term as the first node, but has a
smaller version, so the first node is preferred.

[WARNING]
Running this command can lead to arbitrary data loss. Only run this tool if you
understand and accept the possible consequences and have exhausted all other
possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to at least half of the
master-eligible nodes in the cluster, and they cannot be repaired or recovered
by moving their data paths to healthy hardware.
2. Stop **all** remaining nodes.
3. Choose one of the remaining master-eligible nodes to become the new elected
master as described above.
4. On this node, run the `elasticsearch-node unsafe-bootstrap` command as shown
below. Verify that the tool reported `Master node was successfully
bootstrapped`.
5. Start this node and verify that it is elected as the master node.
6. Run the <<node-tool-detach-cluster,`elasticsearch-node detach-cluster`
tool>>, described below, on every other node in the cluster.
7. Start all other nodes and verify that each one joins the cluster.
8. Investigate the data in the cluster to discover if any was lost during this
process.

When you run the tool it will make sure that the node that is being used to
bootstrap the cluster is not running. It is important that all other
master-eligible nodes are also stopped while this tool is running, but the tool
does not check this.

The message `Master node was successfully bootstrapped` does not mean that
there has been no data loss; it just means that the tool was able to complete its
job.
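
An abridged sketch of such a run is shown below. The prompt and warning lines
are illustrative rather than verbatim output; the term and version pair and the
final confirmation message are the parts described above that you should check:

[source,txt]
----
node$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)
Do you want to proceed?
Confirm [y/N] y
Master node was successfully bootstrapped
----
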
[[node-tool-detach-cluster]]
[float]
===== Detaching nodes from their cluster

It is unsafe for nodes to move between clusters, because different clusters
have completely different cluster metadata. There is no way to safely merge the
metadata from two clusters together.

To protect against inadvertently joining the wrong cluster, each cluster
creates a unique identifier, known as the _cluster UUID_, when it first starts
up. Every node records the UUID of its cluster and refuses to join a
cluster with a different UUID.

However, if a node's cluster has permanently failed then it may be desirable to
try and move it into a new cluster. The `elasticsearch-node detach-cluster`
command lets you detach a node from its cluster by resetting its cluster UUID.
It can then join another cluster with a different UUID.

For example, after unsafe cluster bootstrapping you will need to detach all the
other surviving nodes from their old cluster so they can join the new,
unsafely-bootstrapped cluster.

Unsafe cluster bootstrapping is only possible if there is at least one
surviving master-eligible node. If there are no remaining master-eligible nodes
then the cluster metadata is completely lost. However, the individual data
nodes also contain a copy of the index metadata corresponding with their
shards. This sometimes allows a new cluster to import these shards as
<<modules-gateway-dangling-indices,dangling indices>>. You can sometimes
recover some indices after the loss of all master-eligible nodes in a cluster
by creating a new cluster and then using the `elasticsearch-node
detach-cluster` command to move any surviving nodes into this new cluster.

There is a risk of data loss when importing a dangling index because data nodes
may not have the most recent copy of the index metadata and do not have any
information about <<docs-replication,which shard copies are in-sync>>. This
means that a stale shard copy may be selected to be the primary, and some of
the shards may be incompatible with the imported mapping.

[WARNING]
Execution of this command can lead to arbitrary data loss. Only run this tool
if you understand and accept the possible consequences and have exhausted all
other possibilities for recovery of your cluster.

The sequence of operations for using this tool is as follows:

1. Make sure you have really lost access to every one of the master-eligible
nodes in the cluster, and they cannot be repaired or recovered by moving their
data paths to healthy hardware.
2. Start a new cluster and verify that it is healthy. This cluster may comprise
one or more brand-new master-eligible nodes, or may be an unsafely-bootstrapped
cluster formed as described above.
3. Stop **all** remaining data nodes.
4. On each data node, run the `elasticsearch-node detach-cluster` tool as shown
below. Verify that the tool reported `Node was successfully detached from the
cluster`.
5. If necessary, configure each data node to
<<modules-discovery-hosts-providers,discover the new cluster>> (see the
configuration sketch after this list).
6. Start each data node and verify that it has joined the new cluster.
7. Wait for all recoveries to have completed, and investigate the data in the
cluster to discover if any was lost during this process.
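
For step 5, the discovery configuration on each detached data node might look
like the following sketch; the host names and ports are placeholders for the
addresses of the new cluster's master-eligible nodes:

[source,yaml]
----
# elasticsearch.yml on each detached data node (placeholder addresses)
discovery.seed_hosts: ["new-master-1:9300", "new-master-2:9300"]
----
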
The message `Node was successfully detached from the cluster` does not mean
that there has been no data loss; it just means that the tool was able to complete
its job.
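
An abridged sketch of a `detach-cluster` run follows; as above, the surrounding
prompt text is illustrative, and the message to verify is the one quoted in
step 4:

[source,txt]
----
node$ ./bin/elasticsearch-node detach-cluster

WARNING: Elasticsearch MUST be stopped before running this tool.

Do you want to proceed?
Confirm [y/N] y
Node was successfully detached from the cluster
----
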
[float]
=== Parameters

`repurpose`:: Delete excess data when a node's roles are changed.

`unsafe-bootstrap`:: Specifies to unsafely bootstrap this node as a new
one-node cluster.

`detach-cluster`:: Specifies to unsafely detach this node from its cluster so
it can join a different cluster.

`override-version`:: Overwrites the version number stored in the data path so
that a node can start despite being incompatible with the on-disk data.

`remove-settings`:: Forcefully removes the provided persistent cluster settings
from the on-disk cluster state.

`remove-customs`:: Forcefully removes the provided custom metadata from the
on-disk cluster state.

`--ordinal <Integer>`:: If there is <<max-local-storage-nodes,more than one
node sharing a data path>> then this specifies which node to target. Defaults
to `0`, meaning to use the first node in the data path.

`-E <KeyValuePair>`:: Configures a setting.

`-h, --help`:: Returns all of the command parameters.

`-s, --silent`:: Shows minimal output.

`-v, --verbose`:: Shows verbose output.

[float]
=== Examples

[float]
==== Repurposing a node as a dedicated master node

In this example, a former data node is repurposed as a dedicated master node.
First update the node's settings to `node.roles: [ "master" ]` in its
|
|
|
|
`elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose`
|
|
|
|
command to find and remove excess shard data:
|
2019-04-09 09:02:03 -04:00
|
|
|
|
|
|
|
[source,txt]
----
node$ ./bin/elasticsearch-node repurpose

WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 shards in 2 indices to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as master and no-data. Clean-up of shard data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to master and no-data.
----
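For reference, the `elasticsearch.yml` change made before running the tool in
this example amounts to the single line below (a minimal excerpt; any other
settings in the file are left unchanged):

[source,txt]
----
# elasticsearch.yml (excerpt)
node.roles: [ "master" ]
----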
[float]
==== Repurposing a node as a coordinating-only node
In this example, a node that previously held data is repurposed as a
coordinating-only node. First update the node's settings to `node.roles: []` in
its `elasticsearch.yml` config file. Then run the `elasticsearch-node repurpose`
command to find and remove excess shard data and index metadata:

[source,txt]
----
node$ ./bin/elasticsearch-node repurpose

WARNING: Elasticsearch MUST be stopped before running this tool.

Found 2 indices (2 shards and 2 index meta data) to clean up
Use -v to see list of paths and indices affected
Node is being re-purposed as no-master and no-data. Clean-up of index data will be performed.
Do you want to proceed?
Confirm [y/N] y
Node successfully repurposed to no-master and no-data.
----
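The corresponding `elasticsearch.yml` excerpt for the coordinating-only case is
the empty roles list (again a minimal excerpt; other settings stay as they are):

[source,txt]
----
# elasticsearch.yml (excerpt)
node.roles: []
----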
[float]
==== Removing persistent cluster settings
If your nodes contain persistent cluster settings that prevent the cluster
from forming, and therefore cannot be removed using the
<<cluster-update-settings>> API, you can run the following commands to remove
one or more of those settings.

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.exporters.my_exporter.host

WARNING: Elasticsearch MUST be stopped before running this tool.

The following settings will be removed:
xpack.monitoring.exporters.my_exporter.host: "10.1.2.3"

You should only run this tool if you have incompatible settings in the
cluster state that prevent the cluster from forming.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Settings were successfully removed from the cluster state
----
You can also use wildcards to remove multiple settings, for example:

[source,txt]
----
node$ ./bin/elasticsearch-node remove-settings xpack.monitoring.*
----
[float]
==== Removing custom metadata from the cluster state
If the on-disk cluster state contains custom metadata that prevents the node
from starting up and loading the cluster state, you can run the following
commands to remove this custom metadata.

[source,txt]
----
node$ ./bin/elasticsearch-node remove-customs snapshot_lifecycle

WARNING: Elasticsearch MUST be stopped before running this tool.

The following customs will be removed:
snapshot_lifecycle

You should only run this tool if you have broken custom metadata in the
cluster state that prevents the cluster state from being loaded.
This tool can cause data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y

Customs were successfully removed from the cluster state
----
[float]
==== Unsafe cluster bootstrapping
Suppose your cluster had five master-eligible nodes and you have permanently
lost three of them, leaving two nodes remaining.

* Run the tool on the first remaining node, but answer `n` at the confirmation
step.

[source,txt]
----
node_1$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (4, 12)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
----
* Run the tool on the second remaining node, and again answer `n` at the
confirmation step.

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] n
----
* Since the second node has a greater term, it has a fresher cluster state, so
it is better to unsafely bootstrap the cluster using this node:

[source,txt]
----
node_2$ ./bin/elasticsearch-node unsafe-bootstrap

WARNING: Elasticsearch MUST be stopped before running this tool.

Current node cluster state (term, version) pair is (5, 3)

You should only run this tool if you have permanently lost half or more
of the master-eligible nodes in this cluster, and you cannot restore the
cluster from a snapshot. This tool can cause arbitrary data loss and its
use should be your last resort. If you have multiple surviving master
eligible nodes, you should run this tool on the node with the highest
cluster state (term, version) pair.

Do you want to proceed?

Confirm [y/N] y
Master node was successfully bootstrapped
----
[float]
==== Detaching nodes from their cluster
After unsafely bootstrapping a new cluster, run the `elasticsearch-node
detach-cluster` command to detach all remaining nodes from the failed cluster
so they can join the new cluster:

[source,txt]
----
node_3$ ./bin/elasticsearch-node detach-cluster

WARNING: Elasticsearch MUST be stopped before running this tool.

You should only run this tool if you have permanently lost all of the
master-eligible nodes in this cluster and you cannot restore the cluster
from a snapshot, or you have already unsafely bootstrapped a new cluster
by running `elasticsearch-node unsafe-bootstrap` on a master-eligible
node that belonged to the same cluster as this node. This tool can cause
arbitrary data loss and its use should be your last resort.

Do you want to proceed?

Confirm [y/N] y
Node was successfully detached from the cluster
----
[float]
==== Bypassing version checks
Run the `elasticsearch-node override-version` command to overwrite the version
stored in the data path so that a node can start despite being incompatible
with the data stored in the data path:

[source,txt]
----
node$ ./bin/elasticsearch-node override-version

WARNING: Elasticsearch MUST be stopped before running this tool.

This data path was last written by Elasticsearch version [x.x.x] and may no
longer be compatible with Elasticsearch version [y.y.y]. This tool will bypass
this compatibility check, allowing a version [y.y.y] node to start on this data
path, but a version [y.y.y] node may not be able to read this data or may read
it incorrectly leading to data loss.

You should not use this tool. Instead, continue to use a version [x.x.x] node
on this data path. If necessary, you can use reindex-from-remote to copy the
data from here into an older cluster.

Do you want to proceed?

Confirm [y/N] y
Successfully overwrote this node's metadata to bypass its version compatibility checks.
----