[[rolling-upgrades]]
=== Rolling upgrades

A rolling upgrade allows the Elasticsearch cluster to be upgraded one node at
a time, with no downtime for end users. Running multiple versions of
Elasticsearch in the same cluster for any length of time beyond that required
for an upgrade is not supported, as shards will not be replicated from the
more recent version to the older version.

Consult this <<setup-upgrade,table>> to verify that rolling upgrades are
supported for your version of Elasticsearch.

To perform a rolling upgrade:

. *Disable shard allocation*
+
--

When you shut down a node, the allocation process will wait for one minute
before starting to replicate the shards that were on that node to other nodes
in the cluster, causing a lot of wasted I/O. This can be avoided by disabling
allocation before shutting down a node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
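
The one-minute wait mentioned above corresponds to the
`index.unassigned.node_left.delayed_timeout` index setting. As an
illustrative alternative -- not part of the standard procedure -- the delay
could be lengthened rather than disabling allocation outright; the `5m`
value here is arbitrary:

[source,js]
--------------------------------------------------
PUT _all/_settings
{
  "settings": {
    "index.unassigned.node_left.delayed_timeout": "5m"
  }
}
--------------------------------------------------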
--

. *Stop non-essential indexing and perform a synced flush (Optional)*
+
--

You may happily continue indexing during the upgrade. However, shard recovery
will be much faster if you temporarily stop non-essential indexing and issue a
<<indices-synced-flush, synced-flush>> request:

[source,js]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE

A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
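
The response summarizes, per index, how many shards were successfully
sync-flushed. A sketch of the response shape (the index name and counts are
hypothetical):

[source,js]
--------------------------------------------------
{
  "_shards": {
    "total": 10,
    "successful": 10,
    "failed": 0
  },
  "my_index": {
    "total": 10,
    "successful": 10,
    "failed": 0
  }
}
--------------------------------------------------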
--

. [[upgrade-node]] *Stop and upgrade a single node*
+
--

Shut down one of the nodes in the cluster *before* starting the upgrade.
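
How you shut the node down depends on how Elasticsearch is run; as a sketch
(the service name and `pid` file location depend on your installation):

[source,sh]
--------------------------------------------------
# Debian/RPM package on a systemd-based system:
sudo systemctl stop elasticsearch.service

# Debian/RPM package with SysV init:
sudo service elasticsearch stop

# zip/tarball installation started as a daemon with -d -p pid:
kill $(cat pid)
--------------------------------------------------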

[TIP]
================================================

When using the zip or tarball packages, the `config`, `data`, `logs` and
`plugins` directories are placed within the Elasticsearch home directory by
default.

It is a good idea to place these directories in a different location so that
there is no chance of deleting them when upgrading Elasticsearch. These custom
paths can be <<path-settings,configured>> with the `CONF_DIR` environment
variable and the `path.logs` and `path.data` settings.

The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
appropriate place for each operating system.

================================================

To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:

* Use `rpm` or `dpkg` to install the new package. All files should be
placed in their proper locations, and config files should not be
overwritten.
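+
For example (the package file name is a placeholder):
+
[source,sh]
--------------------------------------------------
# Debian-based systems:
sudo dpkg -i elasticsearch-<version>.deb

# RPM-based systems:
sudo rpm --upgrade elasticsearch-<version>.rpm
--------------------------------------------------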

To upgrade using a zip or compressed tarball:

* Extract the zip or tarball to a new directory, to be sure that you don't
overwrite the `config` or `data` directories.

* Either copy the files in the `config` directory from your old installation
to your new installation, or set the environment variable
<<config-files-location,`CONF_DIR`>> to point to a custom config directory.

* Either copy the files in the `data` directory from your old installation
to your new installation, or configure the location of the data directory
in the `config/elasticsearch.yml` file, with the `path.data` setting, as
sketched below.
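+
A sketch of such a `config/elasticsearch.yml` entry (the paths are
hypothetical):
+
[source,yaml]
--------------------------------------------------
path:
  data: /var/data/elasticsearch
  logs: /var/log/elasticsearch
--------------------------------------------------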
--

. *Upgrade any plugins*
+
--

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
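
For example, assuming (for illustration) that the node uses the
`analysis-icu` plugin:

[source,sh]
--------------------------------------------------
# list the plugins currently installed on this node
bin/elasticsearch-plugin list

# remove the old version and install the one matching the new node version
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------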
--

. *Start the upgraded node*
+
--
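
Start the now upgraded node, using whatever mechanism normally runs it on
your system; as a sketch (mirroring the stop commands earlier, with the same
installation-dependent assumptions):

[source,sh]
--------------------------------------------------
# Debian/RPM package on a systemd-based system:
sudo systemctl start elasticsearch.service

# zip/tarball installation, started as a daemon:
./bin/elasticsearch -d -p pid
--------------------------------------------------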

Confirm that the node joins the cluster by checking the log file or by
checking the output of this request:

[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// CONSOLE
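
To verify which version each node is running, the version column can be
requested explicitly (the columns chosen here are just an example):

[source,sh]
--------------------------------------------------
GET _cat/nodes?v&h=name,version
--------------------------------------------------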
--

. *Reenable shard allocation*
+
--

Once the node has joined the cluster, reenable shard allocation to start using
the node:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
--------------------------------------------------
// CONSOLE
--

. *Wait for the node to recover*
+
--

You should wait for the cluster to finish shard allocation before upgrading
the next node. You can check on progress with the <<cat-health,`_cat/health`>>
request:

[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// CONSOLE

Wait for the `status` column to move from `yellow` to `green`. Status `green`
means that all primary and replica shards have been allocated.
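
Instead of polling, the `_cluster/health` API can block until a given status
is reached; a sketch (the `50s` timeout is arbitrary, and during a
mixed-version upgrade the call may time out at `yellow`, as explained below):

[source,js]
--------------------------------------------------
GET _cluster/health?wait_for_status=green&timeout=50s
--------------------------------------------------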

[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node with the higher
version will never have their replicas assigned to a node with the lower
version, because the newer version may have a different data format which is
not understood by the older version.

If it is not possible to assign the replica shards to another node with the
higher version -- e.g. if there is only one node with the higher version in
the cluster -- then the replica shards will remain unassigned and the
cluster health will remain status `yellow`.

In this case, check that there are no initializing or relocating shards (the
`init` and `relo` columns) before proceeding.

As soon as another node is upgraded, the replicas should be assigned and the
cluster health will reach status `green`.

====================================================

Shards that have not been <<indices-synced-flush,sync-flushed>> may take some
time to recover. The recovery status of individual shards can be monitored
with the <<cat-recovery,`_cat/recovery`>> request:

[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// CONSOLE
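
To focus on shards that are still recovering, the request also accepts an
`active_only` flag (verify that your version supports it):

[source,sh]
--------------------------------------------------
GET _cat/recovery?v&active_only=true
--------------------------------------------------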

If you stopped indexing, then it is safe to resume indexing as soon as
recovery has completed.
--

. *Repeat*
+
--

When the cluster is stable and the node has recovered, repeat the above steps
for all remaining nodes.
--

[IMPORTANT]
====================================================

During a rolling upgrade the cluster will continue to operate as normal. Any
new functionality will be disabled or work in a backward compatible manner
until all nodes of the cluster have been upgraded. Once the upgrade is
completed and all nodes are on the new version, the new functionality will
become operational. Once that has happened, it is practically impossible to
go back to operating in a backward compatible mode. To protect against such a
scenario, nodes from the previous major version (e.g. 5.x) will not be allowed
to join a cluster where all nodes are of a higher major version (e.g. 6.x).

In the unlikely case of a network malfunction during upgrades, where all
remaining old nodes are isolated from the cluster, you will have to take all
old nodes offline and upgrade them before they can rejoin the cluster.

====================================================