[[index-modules-allocation]]
== Index Shard Allocation

[float]
[[shard-allocation-filtering]]
=== Shard Allocation Filtering

Allows you to control the allocation of indices on nodes based on include/exclude
filters. The filters can be set both at the index level and at the
cluster level. Let's start with an example of setting it at the index
level, and then move on to the cluster level:

Let's say we have 4 nodes, each with a specific attribute called `tag`
associated with it (the name of the attribute can be any name). Each
node has a specific value associated with `tag`. Node 1 has the setting
`node.tag: value1`, Node 2 a setting of `node.tag: value2`, and so on.

We can create an index that will only deploy on nodes that have `tag`
set to `value1` or `value2` by setting
`index.routing.allocation.include.tag` to `value1,value2`. For example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.tag" : "value1,value2"
}'
--------------------------------------------------

On the other hand, we can create an index that will be deployed on all
nodes except for nodes with a `tag` of value `value3` by setting
`index.routing.allocation.exclude.tag` to `value3`. For example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.exclude.tag" : "value3"
}'
--------------------------------------------------

`index.routing.allocation.require.*` can be used to
specify a number of rules, all of which MUST match in order for a shard
to be allocated to a node. This is in contrast to `include`, which will
include a node if ANY rule matches.
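
For example, a minimal sketch (reusing the hypothetical `tag` attribute from above) that
requires shards of the `test` index to be allocated only to nodes tagged `value1`:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.require.tag" : "value1"
}'
--------------------------------------------------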

The `include`, `exclude` and `require` values can have generic simple
matching wildcards, for example, `value1*`. Additionally, the special attribute
names `_ip`, `_name`, `_id` and `_host` can be used to match by node
IP address, name, id or host name, respectively.
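
As an illustrative sketch (the node naming scheme here is hypothetical), the `_name` attribute
can be combined with a wildcard to keep shards of the `test` index off any node whose name
starts with `backup-`:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.exclude._name" : "backup-*"
}'
--------------------------------------------------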

Obviously a node can have several attributes associated with it, and
both the attribute name and value are controlled in the setting. For
example, here is a sample of several node configurations:

[source,js]
--------------------------------------------------
node.group1: group1_value1
node.group2: group2_value4
--------------------------------------------------

In the same manner, `include`, `exclude` and `require` can work against
several attributes, for example:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.group1" : "xxx",
    "index.routing.allocation.include.group2" : "yyy",
    "index.routing.allocation.exclude.group3" : "zzz",
    "index.routing.allocation.require.group4" : "aaa"
}'
--------------------------------------------------

The provided settings can also be updated in real time using the update
settings API, allowing you to "move" indices (shards) around in real time.
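
As a sketch of what such a "move" might look like (reusing the example `tag` values from above),
updating the include filter on the live `test` index causes its shards to be relocated to the
nodes that now match:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.include.tag" : "value3"
}'
--------------------------------------------------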

Cluster-wide filtering can also be defined, and updated in real time
using the cluster update settings API. This setting can come in handy
for things like decommissioning nodes (even if the replica count is set
to 0). Here is a sample of how to decommission a node based on its `_ip`
address:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.exclude._ip" : "10.0.0.1"
    }
}'
--------------------------------------------------

[float]
=== Total Shards Per Node

The `index.routing.allocation.total_shards_per_node` setting allows you to
control how many total shards (replicas and primaries) for an index will be allocated per node.
It can be set dynamically on a live index using the update index
settings API.
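
A minimal sketch of applying this to the `test` index (the limit of 2 shards per node is just an
illustrative value):

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/test/_settings -d '{
    "index.routing.allocation.total_shards_per_node" : 2
}'
--------------------------------------------------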

[float]
[[disk]]
=== Disk-based Shard Allocation

NOTE: Disk-based shard allocation is enabled by default from version 1.3.0 onward.

Elasticsearch can be configured to prevent shard
allocation on nodes depending on the node's disk usage. This
functionality is enabled by default, and can be changed either in the
configuration file or dynamically using:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.disk.threshold_enabled" : false
    }
}'
--------------------------------------------------
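
The same flag can also be set statically in the node's configuration file; a sketch of the
equivalent `elasticsearch.yml` entry:

[source,js]
--------------------------------------------------
cluster.routing.allocation.disk.threshold_enabled: false
--------------------------------------------------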

Once enabled, Elasticsearch uses two watermarks to decide whether
shards should be allocated to, or can remain on, a node.

`cluster.routing.allocation.disk.watermark.low` controls the low
watermark for disk usage. It defaults to 85%, meaning ES will not
allocate new shards to nodes once they have more than 85% of their disk
used. It can also be set to an absolute byte value (like 500mb) to
prevent ES from allocating shards if less than the configured amount
of space is available.

`cluster.routing.allocation.disk.watermark.high` controls the high
watermark. It defaults to 90%, meaning ES will attempt to relocate
shards to another node if the node's disk usage rises above 90%. It can
also be set to an absolute byte value (similar to the low watermark)
to relocate shards once less than the configured amount of space is
available on the node.

NOTE: Percentage values refer to used disk space, while byte values refer to
free disk space. This can be confusing, since it flips the meaning of
high and low. For example, it makes sense to set the low watermark to 10gb
and the high watermark to 5gb, but not the other way around.

Both watermark settings can be changed dynamically using the cluster
settings API. By default, Elasticsearch retrieves information about the
disk usage of the nodes every 30 seconds. This interval can be changed
with the `cluster.info.update.interval` setting.

An example of updating the low watermark so that no more than 80% of the disk is used, the
high watermark so that at least 50 gigabytes remain free, and the cluster info update
interval to one minute:

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.disk.watermark.low" : "80%",
        "cluster.routing.allocation.disk.watermark.high" : "50gb",
        "cluster.info.update.interval" : "1m"
    }
}'
--------------------------------------------------

By default, Elasticsearch takes into account shards that are currently being
relocated to the target node when computing a node's disk usage. This can be
changed by setting the `cluster.routing.allocation.disk.include_relocations`
setting to `false` (defaults to `true`). Taking the sizes of relocating shards into
account may, however, mean that the disk usage for a node is estimated too high,
since the relocation could be 90% complete and a recently retrieved disk usage
figure would include both the total size of the relocating shard and the space
already consumed by the in-progress relocation.
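
A sketch of turning this off via the cluster settings API (assuming the setting can be updated
dynamically, like the other disk allocation settings above):

[source,js]
--------------------------------------------------
curl -XPUT localhost:9200/_cluster/settings -d '{
    "transient" : {
        "cluster.routing.allocation.disk.include_relocations" : false
    }
}'
--------------------------------------------------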