[DOCS] Document dynamic cluster-lvl shard alloc settings (#61338) (#61735)

James Rodewig 2020-08-31 11:19:57 -04:00 committed by GitHub
parent 8228cdad67
commit 1f24fc03a0
6 changed files with 42 additions and 45 deletions


@ -25,10 +25,6 @@ There are a number of settings available to control the shard allocation process
Besides these, there are a few other <<misc-cluster-settings,miscellaneous cluster-level settings>>.
-All of these settings are _dynamic_ and can be
-updated on a live cluster with the
-<<cluster-update-settings,cluster-update-settings>> API.
include::cluster/shards_allocation.asciidoc[]
include::cluster/disk_allocator.asciidoc[]


@ -8,14 +8,11 @@ in the same zone, it can distribute the primary shard and its replica shards to
minimise the risk of losing all shard copies in the event of a failure.
When shard allocation awareness is enabled with the
+<<dynamic-cluster-setting,dynamic>>
`cluster.routing.allocation.awareness.attributes` setting, shards are only
-allocated to nodes that have values set for the specified awareness
-attributes. If you use multiple awareness attributes, {es} considers
-each attribute separately when allocating shards.
-The allocation awareness settings can be configured in
-`elasticsearch.yml` and updated dynamically with the
-<<cluster-update-settings,cluster-update-settings>> API.
+allocated to nodes that have values set for the specified awareness attributes.
+If you use multiple awareness attributes, {es} considers each attribute
+separately when allocating shards.
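For example, a minimal sketch that enables shard allocation awareness on a
hypothetical custom node attribute named `zone`:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "zone"
  }
}
----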
By default {es} uses <<search-adaptive-replica,adaptive replica selection>>
to route search or GET requests. However, with the presence of allocation awareness


@ -9,7 +9,7 @@ and <<shard-allocation-awareness, allocation awareness>>.
Shard allocation filters can be based on custom node attributes or the built-in
`_name`, `_host_ip`, `_publish_ip`, `_ip`, `_host` and `_id` attributes.
-The `cluster.routing.allocation` settings are dynamic, enabling live indices to
+The `cluster.routing.allocation` settings are <<dynamic-cluster-setting,dynamic>>, enabling live indices to
be moved from one set of nodes to another. Shards are only relocated if it is
possible to do so without breaking another routing constraint, such as never
allocating a primary and replica shard on the same node.
@ -32,17 +32,17 @@ PUT _cluster/settings
===== Cluster routing settings
`cluster.routing.allocation.include.{attribute}`::
+(<<dynamic-cluster-setting,Dynamic>>)
Allocate shards to a node whose `{attribute}` has at least one of the
comma-separated values.
`cluster.routing.allocation.require.{attribute}`::
+(<<dynamic-cluster-setting,Dynamic>>)
Only allocate shards to a node whose `{attribute}` has _all_ of the
comma-separated values.
`cluster.routing.allocation.exclude.{attribute}`::
+(<<dynamic-cluster-setting,Dynamic>>)
Do not allocate shards to a node whose `{attribute}` has _any_ of the
comma-separated values.
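As an illustrative sketch (the IP address is a placeholder), the following
request uses the built-in `_ip` attribute to exclude a node from shard
allocation:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.exclude._ip": "10.0.0.1"
  }
}
----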


@ -6,36 +6,35 @@
whether to allocate new shards to that node or to actively relocate shards away
from that node.
-Below are the settings that can be configured in the `elasticsearch.yml` config
-file or updated dynamically on a live cluster with the
-<<cluster-update-settings,cluster-update-settings>> API:
+You can use the following settings to control disk-based allocation:
-`cluster.routing.allocation.disk.threshold_enabled`::
-(<<dynamic-cluster-setting,Dynamic>>)
-Defaults to `true`. Set to `false` to disable the disk allocation decider.
+[[cluster-routing-disk-threshold]]
+// tag::cluster-routing-disk-threshold-tag[]
+`cluster.routing.allocation.disk.threshold_enabled` {ess-icon}::
++
+(<<dynamic-cluster-setting,Dynamic>>)
+Defaults to `true`. Set to `false` to disable the disk allocation decider.
+// end::cluster-routing-disk-threshold-tag[]
[[cluster-routing-watermark-low]]
// tag::cluster-routing-watermark-low-tag[]
`cluster.routing.allocation.disk.watermark.low` {ess-icon}::
+
+(<<dynamic-cluster-setting,Dynamic>>)
Controls the low watermark for disk usage. It defaults to `85%`, meaning that {es} will not allocate shards to nodes that have more than 85% disk used. It can also be set to an absolute byte value (like `500mb`) to prevent {es} from allocating shards if less than the specified amount of space is available. This setting has no effect on the primary shards of newly-created indices but will prevent their replicas from being allocated.
// end::cluster-routing-watermark-low-tag[]
[[cluster-routing-watermark-high]]
// tag::cluster-routing-watermark-high-tag[]
`cluster.routing.allocation.disk.watermark.high` {ess-icon}::
+
+(<<dynamic-cluster-setting,Dynamic>>)
Controls the high watermark. It defaults to `90%`, meaning that {es} will attempt to relocate shards away from a node whose disk usage is above 90%. It can also be set to an absolute byte value (similarly to the low watermark) to relocate shards away from a node if it has less than the specified amount of free space. This setting affects the allocation of all shards, whether previously allocated or not.
// end::cluster-routing-watermark-high-tag[]
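For example, a minimal sketch that sets both watermarks to absolute byte
values (the values are illustrative; byte values and percentages must not be
mixed):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb"
  }
}
----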
`cluster.routing.allocation.disk.watermark.enable_for_single_data_node`::
+(<<static-cluster-setting,Static>>)
For a single data node, the default is to disregard disk watermarks when
making an allocation decision. This is deprecated behavior and will be
changed in 8.0. This setting can be set to `true` to enable the
@ -46,6 +45,7 @@ Controls the high watermark. It defaults to `90%`, meaning that {es} will attemp
`cluster.routing.allocation.disk.watermark.flood_stage` {ess-icon}::
+
--
+(<<dynamic-cluster-setting,Dynamic>>)
Controls the flood stage watermark, which defaults to 95%. {es} enforces a read-only index block (`index.blocks.read_only_allow_delete`) on every index that has one or more shards allocated on the node, and that has at least one disk exceeding the flood stage. This setting is a last resort to prevent nodes from running out of disk space. The index block is automatically released when the disk utilization falls below the high watermark.
NOTE: You cannot mix the usage of percentage values and byte values within
@ -65,7 +65,7 @@ PUT /my-index-000001/_settings
// end::cluster-routing-flood-stage-tag[]
`cluster.info.update.interval`::
+(<<dynamic-cluster-setting,Dynamic>>)
How often {es} should check on disk usage for each node in the
cluster. Defaults to `30s`.
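For instance, to check disk usage more frequently than the default (the `10s`
value is illustrative):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.info.update.interval": "10s"
  }
}
----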


@ -4,16 +4,16 @@
[[cluster-read-only]]
===== Metadata
-An entire cluster may be set to read-only with the following _dynamic_ setting:
+An entire cluster may be set to read-only with the following setting:
`cluster.blocks.read_only`::
+(<<dynamic-cluster-setting,Dynamic>>)
Make the whole cluster read only (indices do not accept write
operations) and block metadata modifications (creating or deleting
indices).
`cluster.blocks.read_only_allow_delete`::
+(<<dynamic-cluster-setting,Dynamic>>)
Identical to `cluster.blocks.read_only`, but allows deleting indices
to free up resources.
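For example, to make the whole cluster read only:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.blocks.read_only": true
  }
}
----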
@ -51,10 +51,10 @@ including unassigned shards.
For example, an open index with 5 primary shards and 2 replicas counts as 15 shards.
Closed indices do not contribute to the shard count.
-You can dynamically adjust the cluster shard limit with the following property:
+You can dynamically adjust the cluster shard limit with the following setting:
`cluster.max_shards_per_node`::
+(<<dynamic-cluster-setting,Dynamic>>)
Controls the number of shards allowed in the cluster per data node.
With the default setting, a 3-node cluster allows 3,000 shards total, across all open indices.
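An illustrative sketch raising the per-node limit (the value `1200` is
arbitrary):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 1200
  }
}
----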
@ -95,10 +95,10 @@ metadata will be viewable by anyone with access to the
The cluster state maintains index tombstones to explicitly denote indices that
have been deleted. The number of tombstones maintained in the cluster state is
-controlled by the following property, which cannot be updated dynamically:
+controlled by the following setting:
`cluster.indices.tombstones.size`::
+(<<static-cluster-setting,Static>>)
Index tombstones prevent nodes that are not part of the cluster when a delete
occurs from joining the cluster and reimporting the index as though the delete
was never issued. To keep the cluster state from growing huge we only keep the
@ -114,7 +114,7 @@ this situation.
[[cluster-logger]]
===== Logger
-The settings which control logging can be updated dynamically with the
+The settings which control logging can be updated <<dynamic-cluster-setting,dynamically>> with the
`logger.` prefix. For instance, to increase the logging level of the
`indices.recovery` module to `DEBUG`, issue this request:
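A minimal sketch of such a request, using the fully qualified logger name for
the `indices.recovery` module:

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "logger.org.elasticsearch.indices.recovery": "DEBUG"
  }
}
----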
@ -139,12 +139,12 @@ tasks to be revived after a full cluster restart.
Every time a persistent task is created, the master node takes care of
assigning the task to a node of the cluster, and the assigned node will then
pick up the task and execute it locally. The process of assigning persistent
-tasks to nodes is controlled by the following properties, which can be updated
-dynamically:
+tasks to nodes is controlled by the following settings:
`cluster.persistent_tasks.allocation.enable`::
+
--
+(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for persistent tasks:
* `all` - (default) Allows persistent tasks to be assigned to nodes
@ -156,7 +156,7 @@ left the cluster, for example), are impacted by this setting.
--
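For example, to prevent persistent tasks from being assigned to any node:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.persistent_tasks.allocation.enable": "none"
  }
}
----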
`cluster.persistent_tasks.allocation.recheck_interval`::
+(<<dynamic-cluster-setting,Dynamic>>)
The master node will automatically check whether persistent tasks need to
be assigned when the cluster state changes significantly. However, there
may be other factors, such as memory usage, that affect whether persistent


@ -1,12 +1,13 @@
[[cluster-shard-allocation-settings]]
==== Cluster-level shard allocation settings
-The following _dynamic_ settings may be used to control shard allocation and recovery:
+You can use the following settings to control shard allocation and recovery:
[[cluster-routing-allocation-enable]]
`cluster.routing.allocation.enable`::
+
--
+(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable allocation for specific kinds of shards:
* `all` - (default) Allows shard allocation for all kinds of shards.
@ -22,23 +23,23 @@ one of the active allocation ids in the cluster state.
--
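For example, a sketch commonly used around node maintenance: restrict
allocation to primary shards only, then restore the default afterwards
(setting the value back to `all`, or to `null` to clear the override):

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
----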
`cluster.routing.allocation.node_concurrent_incoming_recoveries`::
+(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent incoming shard recoveries are allowed to happen on a node. Incoming recoveries are the recoveries
where the target shard (most likely the replica unless a shard is relocating) is allocated on the node. Defaults to `2`.
`cluster.routing.allocation.node_concurrent_outgoing_recoveries`::
+(<<dynamic-cluster-setting,Dynamic>>)
How many concurrent outgoing shard recoveries are allowed to happen on a node. Outgoing recoveries are the recoveries
where the source shard (most likely the primary unless a shard is relocating) is allocated on the node. Defaults to `2`.
`cluster.routing.allocation.node_concurrent_recoveries`::
+(<<dynamic-cluster-setting,Dynamic>>)
A shortcut to set both `cluster.routing.allocation.node_concurrent_incoming_recoveries` and
`cluster.routing.allocation.node_concurrent_outgoing_recoveries`.
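An illustrative sketch raising both the incoming and outgoing limits at once
via the shortcut setting:

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.node_concurrent_recoveries": 4
  }
}
----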
`cluster.routing.allocation.node_initial_primaries_recoveries`::
+(<<dynamic-cluster-setting,Dynamic>>)
While the recovery of replicas happens over the network, the recovery of
an unassigned primary after node restart uses data from the local disk.
These should be fast so more initial primary recoveries can happen in
@ -46,7 +47,7 @@ one of the active allocation ids in the cluster state.
`cluster.routing.allocation.same_shard.host`::
+(<<dynamic-cluster-setting,Dynamic>>)
Enables a check that prevents allocation of multiple instances of
the same shard on a single host, based on host name and host address.
Defaults to `false`, meaning that no check is performed. This
@ -55,13 +56,14 @@ one of the active allocation ids in the cluster state.
[[shards-rebalancing-settings]]
==== Shard rebalancing settings
-The following _dynamic_ settings may be used to control the rebalancing of
-shards across the cluster:
+You can use the following settings to control the rebalancing of shards across
+the cluster:
`cluster.routing.rebalance.enable`::
+
--
+(<<dynamic-cluster-setting,Dynamic>>)
Enable or disable rebalancing for specific kinds of shards:
* `all` - (default) Allows shard balancing for all kinds of shards.
@ -74,6 +76,7 @@ Enable or disable rebalancing for specific kinds of shards:
`cluster.routing.allocation.allow_rebalance`::
+
--
+(<<dynamic-cluster-setting,Dynamic>>)
Specify when shard rebalancing is allowed:
@ -83,7 +86,7 @@ Specify when shard rebalancing is allowed:
--
`cluster.routing.allocation.cluster_concurrent_rebalance`::
+(<<dynamic-cluster-setting,Dynamic>>)
Controls how many concurrent shard rebalances are allowed
cluster-wide. Defaults to `2`. Note that this setting
only controls the number of concurrent shard relocations due
@ -99,19 +102,20 @@ shard. The cluster is balanced when no allowed rebalancing operation can bring
of any node closer to the weight of any other node by more than the `balance.threshold`.
`cluster.routing.allocation.balance.shard`::
+(<<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the total number of shards allocated on a node
(float). Defaults to `0.45f`. Raising this raises the tendency to
equalize the number of shards across all nodes in the cluster.
`cluster.routing.allocation.balance.index`::
+(<<dynamic-cluster-setting,Dynamic>>)
Defines the weight factor for the number of shards per index allocated
on a specific node (float). Defaults to `0.55f`. Raising this raises the
tendency to equalize the number of shards per index across all nodes in
the cluster.
`cluster.routing.allocation.balance.threshold`::
+(<<dynamic-cluster-setting,Dynamic>>)
Minimal optimization value of operations that should be performed
(non-negative float). Defaults to `1.0f`. Raising this will cause the cluster
to be less aggressive about optimizing the shard balance.
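A sketch combining the three balancing factors (the values shown are simply
the defaults; raising or lowering them shifts the heuristic's emphasis):

[source,console]
----
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.balance.shard": 0.45,
    "cluster.routing.allocation.balance.index": 0.55,
    "cluster.routing.allocation.balance.threshold": 1.0
  }
}
----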