[[delayed-allocation]]
=== Delaying allocation when a node leaves

When a node leaves the cluster for whatever reason, intentional or otherwise,
the master reacts by:

* Promoting a replica shard to primary to replace any primaries that were on the node.
* Allocating replica shards to replace the missing replicas (assuming there are enough nodes).
* Rebalancing shards evenly across the remaining nodes.

These actions are intended to protect the cluster against data loss by
ensuring that every shard is fully replicated as soon as possible.

Even though we throttle concurrent recoveries both at the
<<recovery,node level>> and at the <<shards-allocation,cluster level>>, this
``shard-shuffle'' can still put a lot of extra load on the cluster, and that
load may be unnecessary if the missing node is likely to return soon. Imagine
this scenario:

* Node 5 loses network connectivity.
* The master promotes a replica shard to primary for each primary that was on Node 5.
* The master allocates new replicas to other nodes in the cluster.
* Each new replica makes an entire copy of the primary shard across the network.
* More shards are moved to different nodes to rebalance the cluster.
* Node 5 returns after a few minutes.
* The master rebalances the cluster by allocating shards to Node 5.

If the master had just waited for a few minutes, then the missing shards could
have been re-allocated to Node 5 with the minimum of network traffic. This
process would be even quicker for idle shards (shards not receiving indexing
requests) which have been automatically <<indices-synced-flush,sync-flushed>>.
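
For example, a synced flush can also be requested manually before a planned
restart. This is a sketch using the synced flush endpoint from the linked
section; it targets all indices, and only shards without pending indexing
operations will be sync-flushed:

[source,js]
------------------------------
POST /_flush/synced
------------------------------
// AUTOSENSE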

The allocation of replica shards which become unassigned because a node has
left can be delayed with the `index.unassigned.node_left.delayed_timeout`
dynamic setting, which defaults to `1m`.

This setting can be updated on a live index (or on all indices):

[source,js]
------------------------------
PUT /_all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
------------------------------
// AUTOSENSE
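
For example, to raise the timeout on a single index only (shown here with a
hypothetical index named `my_index`):

[source,js]
------------------------------
PUT /my_index/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "10m"
  }
}
------------------------------
// AUTOSENSE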

With delayed allocation enabled, the above scenario changes to look like this:

* Node 5 loses network connectivity.
* The master promotes a replica shard to primary for each primary that was on Node 5.
* The master logs a message that allocation of unassigned shards has been delayed, and for how long.
* The cluster remains yellow because there are unassigned replica shards.
* Node 5 returns after a few minutes, before the `timeout` expires.
* The missing replicas are re-allocated to Node 5 (and sync-flushed shards recover almost immediately).

NOTE: This setting will not affect the promotion of replicas to primaries, nor
will it affect the assignment of replicas that have not been assigned
previously.

==== Monitoring delayed unassigned shards

The number of shards whose allocation has been delayed by this timeout setting
can be viewed with the <<cluster-health,cluster health API>>:

[source,js]
------------------------------
GET _cluster/health <1>
------------------------------
<1> This request will return a `delayed_unassigned_shards` value.
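
An abridged response might look like this (the values shown are illustrative):

[source,js]
------------------------------
{
  "cluster_name": "my_cluster",
  "status": "yellow",
  "unassigned_shards": 5,
  "delayed_unassigned_shards": 5,
  ...
}
------------------------------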

==== Removing a node permanently

If a node is not going to return and you would like Elasticsearch to allocate
the missing shards immediately, just update the timeout to zero:

[source,js]
------------------------------
PUT /_all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "0"
  }
}
------------------------------
// AUTOSENSE

You can reset the timeout as soon as the missing shards have started to recover.
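
For example, to restore the default delay once recovery is underway:

[source,js]
------------------------------
PUT /_all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "1m"
  }
}
------------------------------
// AUTOSENSE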