[[cluster-reroute]]
== Cluster Reroute

The reroute command allows you to explicitly execute a cluster reroute
allocation command, including specific commands. For example, a shard can
be moved from one node to another explicitly, an allocation can be
canceled, or an unassigned shard can be explicitly allocated on a
specific node.

Here is a short example of a simple reroute API call:

[source,js]
--------------------------------------------------
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "move" : {
            "index" : "test", "shard" : 0,
            "from_node" : "node1", "to_node" : "node2"
        }
    },
    {
        "allocate_replica" : {
            "index" : "test", "shard" : 1, "node" : "node3"
        }
    } ]
}'
--------------------------------------------------

An important aspect to remember is that once an allocation occurs, the
cluster will aim to re-balance its state back to an even state. For
example, if the allocation includes moving a shard from `node1` to
`node2`, and the cluster is in an `even` state, then another shard will
be moved from `node2` to `node1` to even things out.

The cluster can be set to disable allocations, which means that only
explicitly issued allocations will be performed. Only once all commands
have been applied will the cluster aim to re-balance its state.
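
One way to do this is through the cluster settings API; a minimal sketch
using the `cluster.routing.allocation.enable` setting, which is not
described on this page:

[source,js]
--------------------------------------------------
# Sketch: disable shard allocation cluster-wide ("none"); re-enable with "all".
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
    "transient" : {
        "cluster.routing.allocation.enable" : "none"
    }
}'
--------------------------------------------------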

Another option is to run the commands in `dry_run` (as a URI flag, or in
the request body). This will cause the commands to apply to the current
cluster state, and return the resulting cluster state after the commands
(and re-balancing) have been applied.
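
For example, a minimal sketch of a dry run that reuses the `move` command
from above (index and node names are illustrative):

[source,js]
--------------------------------------------------
# Sketch: validate the move without applying it to the live cluster state.
curl -XPOST 'localhost:9200/_cluster/reroute?dry_run' -d '{
    "commands" : [ {
        "move" : {
            "index" : "test", "shard" : 0,
            "from_node" : "node1", "to_node" : "node2"
        }
    } ]
}'
--------------------------------------------------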

If the `explain` parameter is specified, a detailed explanation of why the
commands could or could not be executed is returned.
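
The parameter combines naturally with `dry_run` to see why a command would
succeed or fail without changing anything, for example:

[source,js]
--------------------------------------------------
# Sketch: explain the decision for each command while dry-running it.
curl -XPOST 'localhost:9200/_cluster/reroute?explain&dry_run' -d '{
    "commands" : [ {
        "move" : {
            "index" : "test", "shard" : 0,
            "from_node" : "node1", "to_node" : "node2"
        }
    } ]
}'
--------------------------------------------------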

The commands supported are:

`move`::
Move a started shard from one node to another node. Accepts
`index` and `shard` for index name and shard number, `from_node` for the
node to move the shard `from`, and `to_node` for the node to move the
shard to.

`cancel`::
Cancel allocation of a shard (or recovery). Accepts `index`
and `shard` for index name and shard number, and `node` for the node to
cancel the shard allocation on. It also accepts an `allow_primary` flag to
explicitly specify that it is allowed to cancel allocation for a primary
shard. This can be used to force resynchronization of existing replicas
from the primary shard by cancelling them and allowing them to be
reinitialized through the standard reallocation process. A sketch of this
command follows the list.

`allocate_replica`::
Allocate an unassigned replica shard to a node. Accepts the
`index` and `shard` for index name and shard number, and `node` to
allocate the shard to. Takes <<modules-cluster,allocation deciders>> into account.
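
As referenced above, a minimal sketch of the `cancel` command that cancels
the copy of shard `0` of index `test` on `node2` (all names illustrative):

[source,js]
--------------------------------------------------
# Sketch: cancel the allocation (or in-progress recovery) of a shard copy.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "cancel" : {
            "index" : "test", "shard" : 0, "node" : "node2"
        }
    } ]
}'
--------------------------------------------------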

Two more commands are available that allow the allocation of a primary shard
to a node. These commands should however be used with extreme care, as primary
shard allocation is usually fully automatically handled by Elasticsearch.
Reasons why a primary shard cannot be automatically allocated include the following:

- A new index was created but there is no node which satisfies the allocation deciders.
- An up-to-date shard copy of the data cannot be found on the current data nodes in
the cluster. To prevent data loss, the system does not automatically promote a stale
shard copy to primary.

As a manual override, two commands to forcefully allocate primary shards
are available (a sketch follows the list):

`allocate_stale_primary`::
Allocate a primary shard to a node that holds a stale copy. Accepts the
`index` and `shard` for index name and shard number, and `node` to
allocate the shard to. Using this command may lead to data loss
for the provided shard id. If a node which has a good copy of the
data rejoins the cluster later on, that data will be overwritten with
the data of the stale copy that was forcefully allocated with this
command. To ensure that these implications are well-understood,
this command requires the special field `accept_data_loss` to be
explicitly set to `true` for it to work.

`allocate_empty_primary`::
Allocate an empty primary shard to a node. Accepts the
`index` and `shard` for index name and shard number, and `node` to
allocate the shard to. Using this command leads to a complete loss
of all data that was indexed into this shard, if it was previously
started. If a node which has a copy of the
data rejoins the cluster later on, that data will be deleted!
To ensure that these implications are well-understood,
this command requires the special field `accept_data_loss` to be
explicitly set to `true` for it to work.
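
As referenced above, a minimal sketch of forcing a stale primary onto a
node (`allocate_empty_primary` takes the same fields; all names are
illustrative):

[source,js]
--------------------------------------------------
# Sketch: force-allocate a stale copy as primary; may lose data, so the
# command only works with the explicit "accept_data_loss" acknowledgement.
curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
    "commands" : [ {
        "allocate_stale_primary" : {
            "index" : "test", "shard" : 0, "node" : "node1",
            "accept_data_loss" : true
        }
    } ]
}'
--------------------------------------------------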

[float]
=== Retry failed shards

The cluster will attempt to allocate a shard a maximum of
`index.allocation.max_retries` times in a row (defaults to `5`), before giving
up and leaving the shard unallocated. This scenario can be caused by
structural problems such as having an analyzer which refers to a stopwords
file which doesn't exist on all nodes.
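
For illustration, a sketch of raising this limit for a single index named
`test` (name illustrative) through the index settings API:

[source,js]
--------------------------------------------------
# Sketch: allow more allocation attempts before the shard is left unassigned.
curl -XPUT 'localhost:9200/test/_settings' -d '{
    "index.allocation.max_retries" : 10
}'
--------------------------------------------------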

Once the problem has been corrected, allocation can be manually retried by
calling the <<cluster-reroute,`_reroute`>> API with `?retry_failed`, which
will attempt a single retry round for these shards.
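
For example, a minimal sketch (no commands are required in the body):

[source,js]
--------------------------------------------------
# Sketch: retry allocation of shards that exhausted their retry budget.
curl -XPOST 'localhost:9200/_cluster/reroute?retry_failed'
--------------------------------------------------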