[[restart-upgrade]]
=== Full cluster restart upgrade

Elasticsearch requires a full cluster restart when upgrading across major
versions. Rolling upgrades are not supported across major versions. Consult
this <<setup-upgrade,table>> to verify that a full cluster restart is
required.

The process to perform an upgrade with a full cluster restart is as follows:

. *Disable shard allocation*
+
--

When you shut down a node, the allocation process will immediately try to
replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
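
If you want to confirm that allocation has been disabled before you proceed,
one option is to read the setting back with the cluster settings API:

[source,sh]
--------------------------------------------------
GET _cluster/settings
--------------------------------------------------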
--

. *Perform a synced flush*
+
--

Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush, synced-flush>> request:

[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
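
Because reissuing is harmless, this step is easy to script. The following is
only a sketch: it assumes the node is reachable on `localhost:9200`, that
`curl` and `jq` are installed, and that the response reports the number of
failed shards under `_shards.failed`:

[source,sh]
--------------------------------------------------
# Retry the synced flush until no shard reports a failure (at most 5 attempts).
for attempt in 1 2 3 4 5; do
  failed=$(curl -s -X POST 'localhost:9200/_flush/synced' | jq '._shards.failed')
  if [ "$failed" = "0" ]; then
    echo "synced flush succeeded on attempt $attempt"
    break
  fi
  echo "synced flush reported $failed failed shard(s), retrying..."
  sleep 5
done
--------------------------------------------------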
--

. *Shut down and upgrade all nodes*
+
--

Stop all Elasticsearch services on all nodes in the cluster. Each node can be
upgraded following the same procedure described in <<upgrade-node>>.
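
How you stop the service depends on how Elasticsearch was installed. On a
systemd-based system using the Debian or RPM packages, for example, stopping a
node might look like this:

[source,sh]
--------------------------------------------------
# Stop the Elasticsearch service on this node (systemd package installs).
sudo systemctl stop elasticsearch.service
--------------------------------------------------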
--

. *Upgrade any plugins*
+
--

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
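
Plugins must match the Elasticsearch version exactly, so each one is removed
and then reinstalled. Purely as an illustration, for a node that uses the
`analysis-icu` plugin the commands might look like this, run from the
Elasticsearch home directory:

[source,sh]
--------------------------------------------------
# Remove the old plugin and install the version that matches the new Elasticsearch.
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------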
--

. *Start the cluster*
+
--

If you have dedicated master nodes -- nodes with `node.master` set to `true`
(the default) and `node.data` set to `false` -- then it is a good idea to
start them first. Wait for them to form a cluster and to elect a master
before proceeding with the data nodes. You can check progress by looking at
the logs.
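
Besides the logs, a quick way to check whether a master has been elected is
the `_cat/master` API, which lists the elected master node once one exists:

[source,sh]
--------------------------------------------------
GET _cat/master
--------------------------------------------------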

As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they will form a cluster and elect a master. From
that point on, the <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>>
APIs can be used to monitor nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------
// CONSOLE

Use these APIs to check that all nodes have successfully joined the cluster.
--

. *Wait for yellow*
+
--

As soon as each node has joined the cluster, it will start to recover any
primary shards that are stored locally. Initially, the
<<cat-health,`_cat/health`>> request will report a `status` of `red`, meaning
that not all primary shards have been allocated.

Once each node has recovered its local shards, the `status` will become
`yellow`, meaning all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because allocation is still
disabled.
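
Instead of polling `_cat/health`, you can ask the cluster health API to block
until the cluster reaches the desired state. A minimal example, assuming the
default 30 second timeout is long enough (otherwise add a `timeout`
parameter):

[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow
--------------------------------------------------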
--

. *Reenable allocation*
+
--

Delaying the allocation of replicas until all nodes have joined the cluster
allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
------------------------------------------------------
// CONSOLE

The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will
recover more quickly if you can delay indexing and searching until all shards
have recovered.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
// CONSOLE

Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
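
The cat APIs also accept a `v` flag that adds column headers, which makes the
recovery output easier to scan while you wait:

[source,sh]
--------------------------------------------------
GET _cat/recovery?v
--------------------------------------------------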
--