[[rolling-upgrades]]
=== Rolling upgrades

A rolling upgrade allows the Elasticsearch cluster to be upgraded one node
at a time, with no downtime for end users. Running multiple versions of
Elasticsearch in the same cluster for any length of time beyond that
required for an upgrade is not supported, as shards will not be replicated
from the more recent version to the older version.

Consult this <<setup-upgrade,table>> to verify that rolling upgrades are
supported for your version of Elasticsearch.

To perform a rolling upgrade:

==== Step 1: Disable shard allocation

When you shut down a node, the allocation process will wait for one minute
before starting to replicate the shards that were on that node to other
nodes in the cluster, causing a lot of wasted I/O. This can be avoided by
disabling allocation before shutting down a node:

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "none"
  }
}
--------------------------------------------------
// AUTOSENSE

==== Step 2: Stop non-essential indexing and perform a synced flush (Optional)

You may happily continue indexing during the upgrade. However, shard
recovery will be much faster if you temporarily stop non-essential indexing
and issue a <<indices-synced-flush,synced-flush>> request:

[source,js]
--------------------------------------------------
POST /_flush/synced
--------------------------------------------------
// AUTOSENSE

A synced flush request is a ``best effort'' operation. It will fail if
there are any pending indexing operations, but it is safe to reissue the
request multiple times if necessary.
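
One hedged way to reissue it is to retry from the shell until the request
succeeds. A minimal sketch, assuming `curl` is available and a node is
listening on `localhost:9200`:

[source,sh]
--------------------------------------------------
# Retry the synced flush a few times; it can fail while indexing
# operations are still in flight. -f makes curl fail on HTTP errors.
for attempt in 1 2 3; do
  curl -sf -XPOST 'localhost:9200/_flush/synced?pretty' && break
  sleep 5
done
--------------------------------------------------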

[[upgrade-node]]
==== Step 3: Stop and upgrade a single node

Shut down one of the nodes in the cluster *before* starting the upgrade.
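
How you stop the node depends on how Elasticsearch was installed. As a
sketch (service names and init systems vary by platform, so treat these as
illustrations rather than exact commands):

[source,sh]
--------------------------------------------------
# Debian/RPM package on a systemd-based system:
sudo systemctl stop elasticsearch

# Debian/RPM package on a SysV-init-based system:
sudo service elasticsearch stop

# Zip/tarball install: stop the process directly (the PID file path is
# illustrative and depends on how the node was started):
kill $(cat elasticsearch.pid)
--------------------------------------------------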

[TIP]
================================================

When using the zip or tarball packages, the `config`, `data`, `logs` and
`plugins` directories are placed within the Elasticsearch home directory by
default.

It is a good idea to place these directories in a different location so that
there is no chance of deleting them when upgrading Elasticsearch. These
custom paths can be <<path-settings,configured>> with the `path.conf`,
`path.logs`, and `path.data` settings.

The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
appropriate place for each operating system.

================================================

To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:

* Use `rpm` or `dpkg` to install the new package. All files should be
  placed in their proper locations, and config files should not be
  overwritten.
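
For example (the package filename is illustrative; use the package for the
version you are upgrading to):

[source,sh]
--------------------------------------------------
# Debian-based systems:
sudo dpkg -i elasticsearch-<version>.deb

# RPM-based systems:
sudo rpm -U elasticsearch-<version>.rpm
--------------------------------------------------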

To upgrade using a zip or compressed tarball:

* Extract the zip or tarball to a new directory, to be sure that you don't
  overwrite the `config` or `data` directories.

* Either copy the files in the `config` directory from your old installation
  to your new installation, or use the `-E path.conf=` option on the command
  line to point to an external config directory.

* Either copy the files in the `data` directory from your old installation
  to your new installation, or configure the location of the data directory
  in the `config/elasticsearch.yml` file, with the `path.data` setting.
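
Put together, the tarball route might look like the following sketch, where
all paths and filenames are illustrative:

[source,sh]
--------------------------------------------------
# Extract the new version into its own directory so the old
# installation's config and data directories are untouched:
tar -xzf elasticsearch-<version>.tar.gz -C /opt

# Either carry the old config over to the new installation...
cp -r /opt/elasticsearch-old/config /opt/elasticsearch-<version>/

# ...or plan to start the node with -E path.conf= pointing at an
# external config directory instead.
--------------------------------------------------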

==== Step 4: Upgrade any plugins

Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
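
For example, to reinstall the ICU analysis plugin so that its version
matches the upgraded node (shown as an illustration; substitute the plugins
you actually use):

[source,sh]
--------------------------------------------------
# Run from the Elasticsearch home directory:
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------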

==== Step 5: Start the upgraded node

Start the now upgraded node and confirm that it joins the cluster by
checking the log file or by checking the output of this request:

[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// AUTOSENSE

==== Step 6: Reenable shard allocation

Once the node has joined the cluster, reenable shard allocation to start
using the node:

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
--------------------------------------------------
// AUTOSENSE

==== Step 7: Wait for the node to recover

You should wait for the cluster to finish shard allocation before upgrading
the next node. You can check on progress with the <<cat-health,`_cat/health`>>
request:

[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// AUTOSENSE

Wait for the `status` column to move from `yellow` to `green`. Status `green`
means that all primary and replica shards have been allocated.

[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node with the higher
version will never have their replicas assigned to a node with the lower
version, because the newer version may have a different data format which is
not understood by the older version.

If it is not possible to assign the replica shards to another node with the
higher version -- e.g. if there is only one node with the higher version in
the cluster -- then the replica shards will remain unassigned and the
cluster health will remain status `yellow`.

In this case, check that there are no initializing or relocating shards (the
`init` and `relo` columns) before proceeding.

As soon as another node is upgraded, the replicas should be assigned and the
cluster health will reach status `green`.
====================================================

Shards that have not been <<indices-synced-flush,sync-flushed>> may take
some time to recover. The recovery status of individual shards can be
monitored with the <<cat-recovery,`_cat/recovery`>> request:

[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// AUTOSENSE

If you stopped indexing, then it is safe to resume indexing as soon as
recovery has completed.

==== Step 8: Repeat

When the cluster is stable and the node has recovered, repeat the above
steps for all remaining nodes.