[[rolling-upgrades]]
== Rolling upgrades

A rolling upgrade allows an {es} cluster to be upgraded one node at
a time so upgrading does not interrupt service. Running multiple versions of
{es} in the same cluster beyond the duration of an upgrade is
not supported, as shards cannot be replicated from upgraded nodes to nodes
running the older version.

It is best to upgrade the master-eligible nodes in your cluster after all of
the other nodes. Once you have started to upgrade the master-eligible nodes
they may form a cluster that nodes of older versions cannot join. If you
upgrade the master-eligible nodes last then all the other nodes will not be
running an older version and so they will be able to join the cluster.
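
If you are unsure which of your nodes are master-eligible, one way to check is
the `_cat/nodes` API; the `master` and `node.role` columns identify the elected
master and each node's roles. A sketch of such a request:

[source,console]
--------------------------------------------------
GET _cat/nodes?v&h=name,master,node.role
--------------------------------------------------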

Rolling upgrades are supported:

* Between minor versions
* {stack-ref-68}/upgrading-elastic-stack.html[From 5.6 to 6.8]
* From 6.8 to {version}

Upgrading directly to {version} from 6.7 or earlier requires a
<<restart-upgrade, full cluster restart>>.

To perform a rolling upgrade from 6.8 to {version}:

. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
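
The included instructions amount to a cluster settings update along these lines
(a sketch; setting the value to `primaries` keeps primary shards allocatable
while the node is down):

[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": "primaries"
  }
}
--------------------------------------------------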
--

. *Stop non-essential indexing and perform a synced flush.* (Optional)
+
--
While you can continue indexing during the upgrade, shard recovery
is much faster if you temporarily stop non-essential indexing and perform a
<<indices-synced-flush-api, synced-flush>>.
include::synced-flush.asciidoc[]
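
The synced flush itself boils down to a single request, sketched here:

[source,console]
--------------------------------------------------
POST /_flush/synced
--------------------------------------------------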
--

. *Temporarily stop the tasks associated with active {ml} jobs and {dfeeds}.* (Optional)
+
--
include::close-ml.asciidoc[]
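
One approach covered by the included instructions is the set upgrade mode API,
which temporarily halts the tasks associated with {ml} jobs and {dfeeds}
cluster-wide (sketch):

[source,console]
--------------------------------------------------
POST _ml/set_upgrade_mode?enabled=true
--------------------------------------------------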
--

. [[upgrade-node]] *Shut down a single node.*
+
--
include::shut-down-node.asciidoc[]
--

. *Upgrade the node you shut down.*
+
--
include::upgrade-node.asciidoc[]
include::set-paths-tip.asciidoc[]

[[rolling-upgrades-bootstrapping]]
NOTE: You should leave `cluster.initial_master_nodes` unset while performing a
rolling upgrade. Each upgraded node is joining an existing cluster so there is
no need for <<modules-discovery-bootstrap-cluster,cluster bootstrapping>>.
--

. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed {es} plugin. All plugins must be upgraded when you upgrade
a node.
. If you use {es} {security-features} to define realms, verify that your realm
settings are up-to-date. The format of realm settings changed in version 7.0;
in particular, the placement of the realm type changed. See
<<realm-settings,Realm settings>>.
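+
--
For example, a realm that was configured in 6.x with the type as a setting value
moves the type into the setting key in 7.0 (a sketch using a hypothetical LDAP
realm named `ldap1`):

[source,yaml]
--------------------------------------------------
# 6.x format: the realm type is a setting value
xpack.security.authc.realms.ldap1:
  type: ldap
  order: 0

# 7.0 format: the realm type is part of the setting key
xpack.security.authc.realms.ldap.ldap1:
  order: 0
--------------------------------------------------
--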
. *Start the upgraded node.*
+
--
Start the newly-upgraded node and confirm that it joins the cluster by checking
the log file or by submitting a `_cat/nodes` request:

[source,console]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
--

. *Reenable shard allocation.*
+
--
Once the node has joined the cluster, remove the `cluster.routing.allocation.enable`
setting to enable shard allocation and start using the node:
[source,console]
--------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
--------------------------------------------------
--

. *Wait for the node to recover.*
+
--
Before upgrading the next node, wait for the cluster to finish shard allocation.
You can check progress by submitting a <<cat-health,`_cat/health`>> request:

[source,console]
--------------------------------------------------
GET _cat/health?v
--------------------------------------------------

Wait for the `status` column to switch from `yellow` to `green`. Once the
node is `green`, all primary and replica shards have been allocated.

[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node running the new
version cannot have their replicas assigned to a node with the old
version. The new version might have a different data format that is
not understood by the old version.
If it is not possible to assign the replica shards to another node
(there is only one upgraded node in the cluster), the replica
shards remain unassigned and status stays `yellow`.

In this case, you can proceed once there are no initializing or relocating shards
(check the `init` and `relo` columns).

As soon as another node is upgraded, the replicas can be assigned and the
status will change to `green`.
====================================================

Shards that were not <<indices-synced-flush-api,sync-flushed>> might take longer to
recover. You can monitor the recovery status of individual shards by
submitting a <<cat-recovery,`_cat/recovery`>> request:

[source,console]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------

If you stopped indexing, it is safe to resume indexing as soon as
recovery completes.
--

. *Repeat*
+
--
When the node has recovered and the cluster is stable, repeat these steps
for each node that needs to be updated.
--

. *Restart machine learning jobs.*
+
--
include::open-ml.asciidoc[]
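
If {ml} upgrade mode was enabled while upgrading, the included instructions
disable it again with the mirror-image request (sketch):

[source,console]
--------------------------------------------------
POST _ml/set_upgrade_mode?enabled=false
--------------------------------------------------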
--

[IMPORTANT]
====================================================
During a rolling upgrade, the cluster continues to operate normally. However,
any new functionality is disabled or operates in a backward compatible mode
until all nodes in the cluster are upgraded. New functionality becomes
operational once the upgrade is complete and all nodes are running the new
version. Once that has happened, there's no way to return to operating in a
backward compatible mode. Nodes running the previous major version will not be
allowed to join the fully-updated cluster.
In the unlikely case of a network malfunction during the upgrade process that
isolates all remaining old nodes from the cluster, you must take the old nodes
offline and upgrade them to enable them to join the cluster.
If you stop half or more of the master-eligible nodes all at once during the
upgrade then the cluster will become unavailable, meaning that the upgrade is
no longer a _rolling_ upgrade. If this happens, you should upgrade and restart
all of the stopped master-eligible nodes to allow the cluster to form again, as
if performing a <<restart-upgrade,full-cluster restart upgrade>>. It may also
be necessary to upgrade all of the remaining old nodes before they can join the
cluster after it re-forms.
====================================================
|