[[restart-upgrade]]
=== Full cluster restart upgrade

Elasticsearch requires a full cluster restart when upgrading across major
versions. Rolling upgrades are not supported across major versions. Consult
this <<setup-upgrade,table>> to verify that a full cluster restart is
required.

The process to perform an upgrade with a full cluster restart is as follows:

. *Disable shard allocation*
+
--

When you shut down a node, the allocation process will immediately try to
replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
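
If you want to confirm that allocation is now disabled, you can retrieve the
current cluster settings as an optional check:

[source,js]
--------------------------------------------------
GET _cluster/settings
--------------------------------------------------
// CONSOLE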
--

. *Perform a synced flush*
+
--

Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush, synced-flush>> request:

[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
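
For example, a short shell loop can reissue the request until it succeeds. This
is only a sketch; it assumes Elasticsearch is listening on `localhost:9200`
with no authentication:

[source,sh]
--------------------------------------------------
# Retry the synced flush until all shards report success (HTTP 200).
until [ "$(curl -s -o /dev/null -w '%{http_code}' -X POST 'localhost:9200/_flush/synced')" = "200" ]; do
  sleep 5
done
--------------------------------------------------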
--

. *Shut down and upgrade all nodes*
+
--

Stop all Elasticsearch services on all nodes in the cluster. Each node can be
upgraded following the same procedure described in <<upgrade-node>>.
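
How you stop each node depends on how Elasticsearch was installed. For example,
on a Debian or RPM installation that uses `systemd`, each node can be stopped
with:

[source,sh]
--------------------------------------------------
sudo systemctl stop elasticsearch.service
--------------------------------------------------

On older SysV init systems, `sudo service elasticsearch stop` does the same.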
--

. *Upgrade any plugins*
+
--

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
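
For example, to upgrade the ICU analysis plugin (used here purely as an
illustration), remove the old version and then install the version that
matches the new Elasticsearch release:

[source,sh]
--------------------------------------------------
sudo bin/elasticsearch-plugin remove analysis-icu
sudo bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------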
--

. *Start the cluster*
+
--

If you have dedicated master nodes -- nodes with `node.master` set to
`true` (the default) and `node.data` set to `false` -- then it is a good idea
to start them first. Wait for them to form a cluster and to elect a master
before proceeding with the data nodes. You can check progress by looking at the
logs.
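
For reference, a dedicated master node is one whose `elasticsearch.yml`
contains settings along these lines:

[source,yaml]
--------------------------------------------------
node.master: true   # eligible to be elected as master
node.data: false    # holds no shard data
--------------------------------------------------
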
As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they will form a cluster and elect a master. From
that point on, the <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>>
APIs can be used to monitor nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------
// CONSOLE

Use these APIs to check that all nodes have successfully joined the cluster.
--

. *Wait for yellow*
+
--

As soon as each node has joined the cluster, it will start to recover any
primary shards that are stored locally. Initially, the
<<cat-health,`_cat/health`>> request will report a `status` of `red`, meaning
that not all primary shards have been allocated.

Once each node has recovered its local shards, the `status` will become
`yellow`, meaning all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because allocation is still
disabled.
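
Rather than polling, you can also ask the `_cluster/health` API to block until
the cluster reaches at least `yellow`:

[source,js]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow&timeout=50s
--------------------------------------------------
// CONSOLE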
--

. *Reenable allocation*
+
--

Delaying the allocation of replicas until all nodes have joined the cluster
allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "all"
  }
}
------------------------------------------------------
// CONSOLE

The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will
recover more quickly if you can delay indexing and searching until all shards
have recovered.

You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
// CONSOLE

Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
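
As in the ``wait for yellow'' step, the `_cluster/health` API can instead block
until the cluster reaches `green`:

[source,js]
--------------------------------------------------
GET _cluster/health?wait_for_status=green&timeout=50s
--------------------------------------------------
// CONSOLE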
--