[[disk-allocator]]
=== Disk-based shard allocation

Elasticsearch considers the available disk space on a node before deciding
whether to allocate new shards to that node or to actively relocate shards away
from that node.

Below are the settings that can be configured in the `elasticsearch.yml` config
file or updated dynamically on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API:

`cluster.routing.allocation.disk.threshold_enabled`::

    Defaults to `true`. Set to `false` to disable the disk allocation decider.

`cluster.routing.allocation.disk.watermark.low`::

    Controls the low watermark for disk usage. It defaults to `85%`, meaning
    that Elasticsearch will not allocate shards to nodes that have more than
    85% disk used. It can also be set to an absolute byte value (like `500mb`)
    to prevent Elasticsearch from allocating shards if less than the specified
    amount of space is available. This setting has no effect on the primary
    shards of newly-created indices or, specifically, any shards that have
    never previously been allocated.

`cluster.routing.allocation.disk.watermark.high`::

    Controls the high watermark. It defaults to `90%`, meaning that
    Elasticsearch will attempt to relocate shards away from a node whose disk
    usage is above 90%. It can also be set to an absolute byte value (similarly
    to the low watermark) to relocate shards away from a node if it has less
    than the specified amount of free space. This setting affects the
    allocation of all shards, whether previously allocated or not.

`cluster.routing.allocation.disk.watermark.flood_stage`::
+
--
Controls the flood stage watermark. It defaults to `95%`, meaning that
Elasticsearch enforces a read-only index block
(`index.blocks.read_only_allow_delete`) on every index that has one or more
shards allocated on the node that has at least one disk exceeding the flood
stage. This is a last resort to prevent nodes from running out of disk space.
The index block is automatically released once the disk utilization falls below
the high watermark.

NOTE: You cannot mix the usage of percentage values and byte values within
these settings. Either all are set to percentage values, or all are set to byte
values. This is so that we can validate that the settings are internally
consistent (that is, the low disk threshold is not more than the high disk
threshold, and the high disk threshold is not more than the flood stage
threshold).

An example of resetting the read-only index block on the `twitter` index:

[source,js]
--------------------------------------------------
PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
--

`cluster.info.update.interval`::

    How often Elasticsearch should check on disk usage for each node in the
    cluster. Defaults to `30s`.

`cluster.routing.allocation.disk.include_relocations`::

    Defaults to `true`, which means that Elasticsearch will take into account
    shards that are currently being relocated to the target node when computing
    a node's disk usage. Taking relocating shards' sizes into account may,
    however, mean that the disk usage for a node is incorrectly estimated on
    the high side, since the relocation could be 90% complete and a recently
    retrieved disk usage would include the total size of the relocating shard
    as well as the space already used by the running relocation.
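
A sketch of turning this relocation accounting off on a live cluster, using a
transient setting (shown purely as an illustration; the default of `true` is
usually the safer choice):

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.include_relocations": false
  }
}
--------------------------------------------------
// CONSOLE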

NOTE: Percentage values refer to used disk space, while byte values refer to
free disk space. This can be confusing, since it flips the meaning of high and
low. For example, it makes sense to set the low watermark to 10gb and the high
watermark to 5gb, but not the other way around.

An example of updating the low watermark to at least 100 gigabytes free, a high
watermark of at least 50 gigabytes free, and a flood stage watermark of 10
gigabytes free, updating the information about the cluster every minute:

[source,js]
--------------------------------------------------
PUT _cluster/settings
|
2015-06-22 17:49:45 -04:00
|
|
|
{
|
|
|
|
"transient": {
|
2017-07-07 19:54:36 -04:00
|
|
|
"cluster.routing.allocation.disk.watermark.low": "100gb",
|
2015-06-22 17:49:45 -04:00
|
|
|
"cluster.routing.allocation.disk.watermark.high": "50gb",
|
2017-07-11 22:02:00 -04:00
|
|
|
"cluster.routing.allocation.disk.watermark.flood_stage": "10gb",
|
2015-06-22 17:49:45 -04:00
|
|
|
"cluster.info.update.interval": "1m"
|
|
|
|
}
|
|
|
|
}
|
|
|
|
--------------------------------------------------
// CONSOLE
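
The per-node disk figures that these deciders act on can be inspected with the
cat allocation API, which is a convenient way to check where each node sits
relative to the watermarks:

[source,js]
--------------------------------------------------
GET _cat/allocation?v
--------------------------------------------------
// CONSOLE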