Update allocation awareness docs (#29116)

Today, the docs imply that if multiple attributes are specified then the
whole combination of values is considered as a single entity when
performing allocation. In fact, each attribute is considered separately. This
change fixes that discrepancy.

It also replaces the use of the term "awareness zone" with "zone or domain", and
reformats some paragraphs to the right width.

Fixes #29105
David Turner 2018-03-19 07:04:47 +00:00 committed by GitHub
parent 0abf51af3d
commit 7608480a62
1 changed file with 31 additions and 31 deletions


@@ -2,8 +2,8 @@
=== Shard Allocation Awareness

When running nodes on multiple VMs on the same physical server, on multiple
racks, or across multiple zones or domains, it is more likely that two nodes on
the same physical server, in the same rack, or in the same zone or domain will
crash at the same time, rather than two unrelated nodes crashing
simultaneously.
@@ -25,7 +25,7 @@ attribute called `rack_id` -- we could use any attribute name. For example:
----------------------
<1> This setting could also be specified in the `elasticsearch.yml` config file.

Now, we need to set up _shard allocation awareness_ by telling Elasticsearch
which attributes to use. This can be configured in the `elasticsearch.yml`
file on *all* master-eligible nodes, or it can be set (and changed) with the
<<cluster-update-settings,cluster-update-settings>> API.
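As a sketch of that dynamic route (assuming a running cluster and the standard cluster settings API), the same attribute list could be applied without a restart:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.awareness.attributes" : "rack_id"
  }
}
--------------------------------------------------

Using `persistent` keeps the setting across full cluster restarts; `transient` would apply it only until the next restart.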
@@ -37,51 +37,51 @@ For our example, we'll set the value in the config file:
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------

With this config in place, let's say we start two nodes with
`node.attr.rack_id` set to `rack_one`, and we create an index with 5 primary
shards and 1 replica of each primary. All primaries and replicas are
allocated across the two nodes.

Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`,
Elasticsearch will move shards across to the new nodes, ensuring (if possible)
that no two copies of the same shard will be in the same rack. However if
`rack_two` were to fail, taking down both of its nodes, Elasticsearch will
still allocate the lost shard copies to nodes in `rack_one`.

.Prefer local shards
*********************************************

When executing search or GET requests, with shard awareness enabled,
Elasticsearch will prefer using local shards -- shards in the same awareness
group -- to execute the request. This is usually faster than crossing between
racks or across zone boundaries.

*********************************************

Multiple awareness attributes can be specified, in which case each attribute
is considered separately when deciding where to allocate the shards.

[source,yaml]
-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

NOTE: When using awareness attributes, shards will not be allocated to nodes
that don't have values set for those attributes.

NOTE: Number of primary/replica of a shard allocated on a specific group of
nodes with the same awareness attribute value is determined by the number of
attribute values. When the number of nodes in groups is unbalanced and there
are many replicas, replica shards may be left unassigned.

[float]
[[forced-awareness]]
=== Forced Awareness

Imagine that you have two zones and enough hardware across the two zones to
host all of your primary and replica shards. But perhaps the hardware in a
single zone, while sufficient to host half the shards, would be unable to host
*ALL* the shards.

With ordinary awareness, if one zone lost contact with the other zone,
Elasticsearch would assign all of the missing replica shards to a single zone.
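For illustration, a node carrying both attributes from the two-attribute example above could be started like this (a sketch; the `-E` command-line flag sets any Elasticsearch setting, and the attribute values are hypothetical):

[source,sh]
--------------------------------------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one -Enode.attr.zone=zone1
--------------------------------------------------

Setting the attributes in `elasticsearch.yml` instead is equivalent; the command-line form is convenient when several nodes share one config file but live in different racks or zones.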
@@ -91,9 +91,9 @@ remaining zone to be overloaded.
Forced awareness solves this problem by *NEVER* allowing copies of the same
shard to be allocated to the same zone.

For example, let's say we have an awareness attribute called `zone`, and we
know we are going to have two zones, `zone1` and `zone2`. Here is how we can
force awareness on a node:

[source,yaml]
-------------------------------------------------------------------
@@ -102,10 +102,10 @@ cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------
<1> We must list all possible values that the `zone` attribute can have.

Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an
index with 5 shards and 1 replica, the index will be created, but only the 5
primary shards will be allocated (with no replicas). Only when we start more
nodes with `node.attr.zone` set to `zone2` will the replicas be allocated.

The `cluster.routing.allocation.awareness.*` settings can all be updated
dynamically on a live cluster with the