[[allocation-awareness]]
=== Shard Allocation Awareness

When running nodes on multiple VMs on the same physical server, on multiple
racks, or across multiple awareness zones, two nodes on the same physical
server, in the same rack, or in the same awareness zone are more likely to
crash at the same time than two unrelated nodes are.

If Elasticsearch is _aware_ of the physical configuration of your hardware, it
can ensure that the primary shard and its replica shards are spread across
different physical servers, racks, or zones, to minimise the risk of losing
all shard copies at the same time.

The shard allocation awareness settings allow you to tell Elasticsearch about
your hardware configuration.

As an example, let's assume we have several racks. When we start a node, we
can tell it which rack it is in by assigning it an arbitrary metadata
attribute called `rack_id` -- we could use any attribute name. For example:

[source,sh]
----------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one <1>
----------------------
<1> This setting could also be specified in the `elasticsearch.yml` config file.

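The equivalent entry in `elasticsearch.yml` would be:

[source,yaml]
----------------------
node.attr.rack_id: rack_one
----------------------
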
Now, we need to set up _shard allocation awareness_ by telling Elasticsearch
which attributes to use. This can be configured in the `elasticsearch.yml`
file on *all* master-eligible nodes, or it can be set (and changed) with the
<<cluster-update-settings,cluster-update-settings>> API.

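The API route is equally valid; a minimal sketch using curl (assuming a node
listening on `localhost:9200`) would be:

[source,sh]
--------------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id"
  }
}'
--------------------------------------------------------
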
For our example, we'll set the value in the config file:

[source,yaml]
--------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id
--------------------------------------------------------

With this config in place, let's say we start two nodes with `node.attr.rack_id`
set to `rack_one`, and we create an index with 5 primary shards and 1 replica
of each primary. All primaries and replicas are allocated across the two
nodes.

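Such an index could be created like this (a sketch; the index name `test` is
arbitrary):

[source,sh]
--------------------------------------------------------
curl -XPUT 'localhost:9200/test' -H 'Content-Type: application/json' -d'
{
  "settings": {
    "index.number_of_shards": 5,
    "index.number_of_replicas": 1
  }
}'
--------------------------------------------------------
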
Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`,
Elasticsearch will move shards across to the new nodes, ensuring (if possible)
that no two copies of the same shard will be in the same rack. However, if
`rack_two` were to fail, taking down both of its nodes, Elasticsearch will
still allocate the lost shard copies to nodes in `rack_one`.

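To check which awareness attributes each node has registered, you can list the
node attributes (a sketch, assuming the default port):

[source,sh]
----------------------
curl -XGET 'localhost:9200/_cat/nodeattrs?v'
----------------------
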
.Prefer local shards
*********************************************

When executing search or GET requests, with shard awareness enabled,
Elasticsearch will prefer using local shards -- shards in the same awareness
group -- to execute the request. This is usually faster than crossing racks or
awareness zones.

*********************************************

Multiple awareness attributes can be specified, in which case the combination
of values from each attribute is considered to be a separate value.

[source,yaml]
-------------------------------------------------------------
cluster.routing.allocation.awareness.attributes: rack_id,zone
-------------------------------------------------------------

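A node participating in such a cluster would then supply a value for each
attribute, reusing the `-E` syntax from above:

[source,sh]
-------------------------------------------------------------
./bin/elasticsearch -Enode.attr.rack_id=rack_one -Enode.attr.zone=zone1
-------------------------------------------------------------
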
NOTE: When using awareness attributes, shards will not be allocated to
nodes that don't have values set for those attributes.

NOTE: The number of copies of each shard (primary or replica) that can be
allocated within a group of nodes sharing the same awareness attribute value
is determined by the number of distinct attribute values. For example, with
two `zone` values, at most half of each shard's copies (rounded up) can be
allocated to either zone. When the number of nodes per group is unbalanced
and there are many replicas, replica shards may be left unassigned.

[float]
[[forced-awareness]]
=== Forced Awareness

Imagine that you have two awareness zones and enough hardware across the two
zones to host all of your primary and replica shards. But perhaps the
hardware in a single zone, while sufficient to host half the shards, would be
unable to host *ALL* the shards.

With ordinary awareness, if one zone lost contact with the other zone,
Elasticsearch would assign all of the missing replica shards to a single zone.
But in this example, this sudden extra load would cause the hardware in the
remaining zone to be overloaded.

Forced awareness solves this problem by *NEVER* allowing copies of the same
shard to be allocated to the same zone.

For example, let's say we have an awareness attribute called `zone`, and
we know we are going to have two zones, `zone1` and `zone2`. Here is how
we can force awareness on a node:

[source,yaml]
-------------------------------------------------------------------
cluster.routing.allocation.awareness.force.zone.values: zone1,zone2 <1>
cluster.routing.allocation.awareness.attributes: zone
-------------------------------------------------------------------
<1> We must list all possible values that the `zone` attribute can have.

Now, if we start two nodes with `node.attr.zone` set to `zone1` and create an
index with 5 shards and 1 replica, the index will be created, but only the 5
primary shards will be allocated (with no replicas). Only when we start more
nodes with `node.attr.zone` set to `zone2` will the replicas be allocated.

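To see which shard copies remain unassigned, the `_cat/shards` API is handy
(a sketch, assuming the default port):

[source,sh]
----------------------
curl -XGET 'localhost:9200/_cat/shards?v'
----------------------
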
The `cluster.routing.allocation.awareness.*` settings can all be updated
dynamically on a live cluster with the
<<cluster-update-settings,cluster-update-settings>> API.

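For example, a transient update enabling forced zone awareness might look
like this (a sketch, assuming a node on `localhost:9200`):

[source,sh]
-------------------------------------------------------------------
curl -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.awareness.attributes": "zone",
    "cluster.routing.allocation.awareness.force.zone.values": "zone1,zone2"
  }
}'
-------------------------------------------------------------------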