WIP: Edits to upgrade docs (#26155)

* [DOCS] Updated and edited upgrade information.

* Incorporated Nik's feedback.
debadair 2017-08-23 14:03:14 -07:00 committed by Deb Adair
parent bb5b771098
commit 220212dd69
12 changed files with 581 additions and 496 deletions


@@ -1,147 +0,0 @@
[[restart-upgrade]]
=== Full cluster restart upgrade
Elasticsearch requires a full cluster restart when upgrading across major
versions. Rolling upgrades are not supported across major versions. Consult
this <<setup-upgrade,table>> to verify that a full cluster restart is
required.
The process to perform an upgrade with a full cluster restart is as follows:
. *Disable shard allocation*
+
--
When you shut down a node, the allocation process will immediately try to
replicate the shards that were on that node to other nodes in the cluster,
causing a lot of wasted I/O. This can be avoided by disabling allocation
before shutting down a node:
[source,js]
--------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "none"
}
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
--
. *Perform a synced flush*
+
--
Shard recovery will be much faster if you stop indexing and issue a
<<indices-synced-flush, synced-flush>> request:
[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE
A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
--
. *Shut down and upgrade all nodes*
+
--
Stop all Elasticsearch services on all nodes in the cluster. Each node can be
upgraded following the same procedure described in <<upgrade-node>>.
--
. *Upgrade any plugins*
+
--
Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
--
. *Start the cluster*
+
--
If you have dedicated master nodes -- nodes with `node.master` set to
`true` (the default) and `node.data` set to `false` -- then it is a good idea
to start them first. Wait for them to form a cluster and to elect a master
before proceeding with the data nodes. You can check progress by looking at the
logs.
As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they will form a cluster and elect a master. From
that point on, the <<cat-health,`_cat/health`>> and <<cat-nodes,`_cat/nodes`>>
APIs can be used to monitor nodes joining the cluster:
[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/nodes
--------------------------------------------------
// CONSOLE
Use these APIs to check that all nodes have successfully joined the cluster.
--
. *Wait for yellow*
+
--
As soon as each node has joined the cluster, it will start to recover any
primary shards that are stored locally. Initially, the
<<cat-health,`_cat/health`>> request will report a `status` of `red`, meaning
that not all primary shards have been allocated.
Once each node has recovered its local shards, the `status` will become
`yellow`, meaning all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because allocation is still
disabled.
--
. *Reenable allocation*
+
--
Delaying the allocation of replicas until all nodes have joined the cluster
allows the master to allocate replicas to nodes which already have local shard
copies. At this point, with all the nodes in the cluster, it is safe to
reenable shard allocation:
[source,js]
------------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}
------------------------------------------------------
// CONSOLE
The cluster will now start allocating replica shards to all data nodes. At this
point it is safe to resume indexing and searching, but your cluster will
recover more quickly if you can delay indexing and searching until all shards
have recovered.
You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:
[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/recovery
--------------------------------------------------
// CONSOLE
Once the `status` column in the `_cat/health` output has reached `green`, all
primary and replica shards have been successfully allocated.
--


@@ -1,106 +0,0 @@
[[reindex-upgrade]]
=== Reindex to upgrade
Elasticsearch is able to use indices created in the previous major version
only. For instance, Elasticsearch 6.x can use indices created in
Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before.
NOTE: Elasticsearch 6.x nodes will fail to start if incompatible indices are present.
If you are running an Elasticsearch 5.x cluster that contains indices created
before 5.x, you must either delete or reindex those old indices before
upgrading to 6.x. See <<reindex-upgrade-inplace>>.
If you are running an Elasticsearch 2.x cluster or older, you have two options:
* First upgrade to Elasticsearch 5.x, reindex the old indices, then upgrade
to 6.x. See <<reindex-upgrade-inplace>>.
* Create a new 6.x cluster and use reindex-from-remote to import indices
directly from the 2.x cluster. See <<reindex-upgrade-remote>>.
.Time-based indices and retention periods
*******************************************
For many use cases with time-based indices, you will not need to worry about
carrying old 2.x indices with you to 6.x. Data in time-based indices usually
becomes less interesting as time passes. Old indices can be deleted once they
fall outside of your retention period.
Users in this position can continue to use 5.x until all old 2.x indices have
been deleted, then upgrade to 6.x directly.
*******************************************
[[reindex-upgrade-inplace]]
==== Reindex in place
If you are running a 5.x cluster which contains indices created in
Elasticsearch 2.x, you will need to reindex (or delete) those indices before
upgrading to Elasticsearch 6.x.
The reindex process works as follows:
* Create a new index, copying the mappings and settings from the old index.
Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for
efficient reindexing.
* Reindex all documents from the old index to the new index using the
<<docs-reindex,reindex API>>.
* Reset the `refresh_interval` and `number_of_replicas` to the values
used in the old index, and wait for the index to become green.
* In a single <<indices-aliases,update aliases>> request:
* Delete the old index.
* Add an alias with the old index name to the new index.
* Add any aliases that existed on the old index to the new index.
At the end of this process, you will have a new 5.x index which can be used
by an Elasticsearch 6.x cluster.
[[reindex-upgrade-remote]]
==== Upgrading with reindex-from-remote
If you are running a 1.x or 2.x cluster and would like to migrate directly to 6.x
without first migrating to 5.x, you can do so using
<<reindex-from-remote,reindex-from-remote>>.
[WARNING]
=============================================
Elasticsearch includes backwards compatibility code that allows indices from
the previous major version to be upgraded to the current major version. By
moving directly from Elasticsearch 2.x or before to 6.x, you will have to solve any
backwards compatibility issues yourself.
=============================================
You will need to set up a 6.x cluster alongside your existing old cluster.
The 6.x cluster needs to have access to the REST API of the old cluster.
For each old index that you want to transfer to the 6.x cluster, you will need
to:
* Create a new index in 6.x with the appropriate mappings and settings. Set
the `refresh_interval` to `-1` and set `number_of_replicas` to `0` for
faster reindexing.
* Use <<reindex-from-remote,reindex-from-remote>> to pull documents from the
old index into the new 6.x index.
* If you run the reindex job in the background (with `wait_for_completion` set
to `false`), the reindex request will return a `task_id` which can be used to
monitor progress of the reindex job in the <<tasks,task API>>:
`GET _tasks/TASK_ID`.
* Once reindex has completed, set the `refresh_interval` and
`number_of_replicas` to the desired values (the defaults are `1s` and `1`
respectively).
* Once the new index has finished replication, you can delete the old index.
The 6.x cluster can start out small, and you can gradually move nodes from the
old cluster to the 6.x cluster as you migrate indices across.


@@ -1,217 +0,0 @@
[[rolling-upgrades]]
=== Rolling upgrades
A rolling upgrade allows the Elasticsearch cluster to be upgraded one node at
a time, with no downtime for end users. Running multiple versions of
Elasticsearch in the same cluster for any length of time beyond that required
for an upgrade is not supported, as shards will not be replicated from the
more recent version to the older version.
Consult this <<setup-upgrade,table>> to verify that rolling upgrades are
supported for your version of Elasticsearch.
To perform a rolling upgrade:
. *Disable shard allocation*
+
--
When you shut down a node, the allocation process will wait for one minute
before starting to replicate the shards that were on that node to other nodes
in the cluster, causing a lot of wasted I/O. This can be avoided by disabling
allocation before shutting down a node:
[source,js]
--------------------------------------------------
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "none"
}
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]
--
. *Stop non-essential indexing and perform a synced flush (Optional)*
+
--
You may happily continue indexing during the upgrade. However, shard recovery
will be much faster if you temporarily stop non-essential indexing and issue a
<<indices-synced-flush, synced-flush>> request:
[source,js]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE
A synced flush request is a ``best effort'' operation. It will fail if there
are any pending indexing operations, but it is safe to reissue the request
multiple times if necessary.
--
. [[upgrade-node]] *Stop and upgrade a single node*
+
--
Shut down one of the nodes in the cluster *before* starting the upgrade.
[TIP]
================================================
When using the zip or tarball packages, the `config`, `data`, `logs` and
`plugins` directories are placed within the Elasticsearch home directory by
default.
It is a good idea to place these directories in a different location so that
there is no chance of deleting them when upgrading Elasticsearch. These custom
paths can be <<path-settings,configured>> with the `ES_PATH_CONF` environment
variable and the `path.logs` and `path.data` settings.
The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
appropriate place for each operating system.
================================================
To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:
* Use `rpm` or `dpkg` to install the new package. All files should be
placed in their proper locations, and config files should not be
overwritten.
To upgrade using a zip or compressed tarball:
* Extract the zip or tarball to a new directory, to be sure that you don't
overwrite the `config` or `data` directories.
* Either copy the files in the `config` directory from your old installation
to your new installation, or set the environment variable
<<config-files-location,`ES_PATH_CONF`>> to point to a custom config
directory.
* Either copy the files in the `data` directory from your old installation
to your new installation, or configure the location of the data directory
in the `config/elasticsearch.yml` file, with the `path.data` setting.
--
. *Upgrade any plugins*
+
--
Elasticsearch plugins must be upgraded when upgrading a node. Use the
`elasticsearch-plugin` script to install the correct version of any plugins
that you need.
--
. *Start the upgraded node*
+
--
Start the now upgraded node and confirm that it joins the cluster by checking
the log file or by checking the output of this request:
[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// CONSOLE
--
. *Reenable shard allocation*
+
--
Once the node has joined the cluster, reenable shard allocation to start using
the node:
[source,js]
--------------------------------------------------
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "all"
}
}
--------------------------------------------------
// CONSOLE
--
. *Wait for the node to recover*
+
--
You should wait for the cluster to finish shard allocation before upgrading
the next node. You can check on progress with the <<cat-health,`_cat/health`>>
request:
[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// CONSOLE
Wait for the `status` column to move from `yellow` to `green`. Status `green`
means that all primary and replica shards have been allocated.
[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node with the higher
version will never have their replicas assigned to a node with the lower
version, because the newer version may have a different data format which is
not understood by the older version.
If it is not possible to assign the replica shards to another node with the
higher version -- e.g. if there is only one node with the higher version in
the cluster -- then the replica shards will remain unassigned and the
cluster health will remain status `yellow`.
In this case, check that there are no initializing or relocating shards (the
`init` and `relo` columns) before proceeding.
As soon as another node is upgraded, the replicas should be assigned and the
cluster health will reach status `green`.
====================================================
Shards that have not been <<indices-synced-flush,sync-flushed>> may take some time to
recover. The recovery status of individual shards can be monitored with the
<<cat-recovery,`_cat/recovery`>> request:
[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// CONSOLE
If you stopped indexing, then it is safe to resume indexing as soon as
recovery has completed.
--
. *Repeat*
+
--
When the cluster is stable and the node has recovered, repeat the above steps
for all remaining nodes.
--
[IMPORTANT]
====================================================
During a rolling upgrade the cluster will continue to operate as normal. Any
new functionality will be disabled or work in a backward compatible manner
until all nodes of the cluster have been upgraded. Once the upgrade is
completed and all nodes are on the new version, the new functionality will
become operational. Once that has happened, it is practically impossible to
go back to operating in a backward compatible mode. To protect against such a
scenario, nodes from the previous major version (e.g. 5.x) will not be allowed
to join a cluster where all nodes are of a higher major version (e.g. 6.x).
In the unlikely case of a network malfunction during upgrades, where all
remaining old nodes are isolated from the cluster, you will have to take all
old nodes offline and upgrade them before they can rejoin the cluster.
====================================================


@@ -5,51 +5,61 @@
===========================================
Before upgrading Elasticsearch:
* Consult the <<breaking-changes,breaking changes>> docs.
* Review the <<breaking-changes,breaking changes>> for changes that
affect your application.
* Check the <<deprecation-logging, deprecation log>> to see if you are using
any deprecated features.
* If you use custom plugins, make sure compatible versions are available.
* Test upgrades in a dev environment before upgrading your production cluster.
* Always <<modules-snapshots,back up your data>> before upgrading.
You **cannot roll back** to an earlier version unless you have a backup of your data.
* If you are using custom plugins, check that a compatible version is available.
* <<modules-snapshots,Back up your data>> before upgrading.
You **cannot roll back** to an earlier version unless you have a backup of
your data.
===========================================
Elasticsearch can usually be upgraded using a rolling upgrade process,
resulting in no interruption of service. This section details how to perform
both rolling upgrades and upgrades with full cluster restarts.
Elasticsearch can usually be upgraded using a <<rolling-upgrades,Rolling upgrade>>
process so upgrading does not interrupt service. However, you might
need to <<reindex-upgrade,Reindex to upgrade>> indices created in older versions.
Upgrades across major versions prior to 6.0 require a <<restart-upgrade,Full cluster restart>>.
To determine whether a rolling upgrade is supported for your release, please
consult this table:
The following table shows when you can perform a rolling upgrade, when you
need to reindex or delete old indices, and when a full cluster restart is
required.
[[upgrade-paths]]
[cols="1<m,1<m,3",options="header",]
|=======================================================================
|Upgrade From |Upgrade To |Supported Upgrade Type
|< 5.x |6.x |<<reindex-upgrade,Reindex to upgrade>>
|5.x |5.y |<<rolling-upgrades,Rolling upgrade>> (where `y > x`)
|5.x |6.x |<<restart-upgrade,Full cluster restart>>
|6.0.0 pre GA |6.x |<<restart-upgrade,Full cluster restart>>
|6.x |6.y |<<rolling-upgrades,Rolling upgrade>> (where `y > x`)
|5.6 |6.x |<<rolling-upgrades,Rolling upgrade>> footnoteref:[reindexfn, You must delete or reindex any indices created in 2.x before upgrading.]
|5.0-5.5 |6.x |<<restart-upgrade,Full cluster restart>> footnoteref:[reindexfn]
|<5.x |6.x |<<reindex-upgrade,Reindex to upgrade>>
|6.x |6.y |<<rolling-upgrades,Rolling upgrade>> (where `y > x`) footnote:[Upgrading from a 6.0.0 pre GA version requires a full cluster restart.]
|=======================================================================
[IMPORTANT]
.Indices created in Elasticsearch 2.x or before
===============================================
Elasticsearch is able to read indices created in the *previous major version
only*. For instance, Elasticsearch 6.x can use indices created in
Elasticsearch 5.x, but not those created in Elasticsearch 2.x or before.
Elasticsearch can read indices created in the *previous major version*.
Older indices must be reindexed or deleted. Elasticsearch 6.x
can use indices created in Elasticsearch 5.x, but not those created in
Elasticsearch 2.x or before. Elasticsearch 5.x can use indices created in
Elasticsearch 2.x, but not those created in 1.x or before.
This condition also applies to indices backed up with
<<modules-snapshots,snapshot and restore>>. If an index was originally
created in 2.x, it cannot be restored into a 6.x cluster even if the
snapshot was made by a 5.x cluster.
This also applies to indices backed up with <<modules-snapshots,snapshot
and restore>>. If an index was originally created in 2.x, it cannot be
restored to a 6.x cluster even if the snapshot was created by a 5.x cluster.
Elasticsearch 6.x nodes will fail to start in the presence of too old indices.
Elasticsearch nodes will fail to start if incompatible indices are present.
For information about how to upgrade old indices, see <<reindex-upgrade,
Reindex to upgrade>>.
See <<reindex-upgrade>> for more information about how to upgrade old indices.
===============================================
include::rolling_upgrade.asciidoc[]
include::upgrade/rolling_upgrade.asciidoc[]
include::cluster_restart.asciidoc[]
include::upgrade/cluster_restart.asciidoc[]
include::reindex_upgrade.asciidoc[]
include::upgrade/reindex_upgrade.asciidoc[]


@@ -0,0 +1,120 @@
[[restart-upgrade]]
=== Full cluster restart upgrade
A full cluster restart upgrade requires that you shut down all nodes in the
cluster, upgrade them, and restart the cluster. A full cluster restart was required
when upgrading to major versions prior to 6.x. Elasticsearch 6.x supports
<<rolling-upgrades, rolling upgrades>> from *Elasticsearch 5.6*. Upgrading to
6.x from earlier versions requires a full cluster restart. See the
<<upgrade-paths,Upgrade paths table>> to verify the type of upgrade you need
to perform.
To perform a full cluster restart upgrade:
. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
--
. *Stop indexing and perform a synced flush.*
+
--
Performing a <<indices-synced-flush, synced-flush>> speeds up shard
recovery.
include::synced-flush.asciidoc[]
--
. *Shut down all nodes.*
+
--
include::shut-down-node.asciidoc[]
--
. *Upgrade all nodes.*
+
--
include::upgrade-node.asciidoc[]
include::set-paths-tip.asciidoc[]
--
. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed Elasticsearch plugin. All plugins must be upgraded when you upgrade
a node.
. *Start each upgraded node.*
+
--
If you have dedicated master nodes, start them first and wait for them to
form a cluster and elect a master before proceeding with your data nodes.
You can check progress by looking at the logs.
As soon as the <<master-election,minimum number of master-eligible nodes>>
have discovered each other, they form a cluster and elect a master. At
that point, you can use <<cat-health,`_cat/health`>> and
<<cat-nodes,`_cat/nodes`>> to monitor nodes joining the cluster:
[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/nodes
--------------------------------------------------
// CONSOLE
The `status` column returned by `_cat/health` shows the health of the
cluster: `red`, `yellow`, or `green`.
--
. *Wait for all nodes to join the cluster and report a status of yellow.*
+
--
When a node joins the cluster, it begins to recover any primary shards that
are stored locally. The <<cat-health,`_cat/health`>> API initially reports
a `status` of `red`, indicating that not all primary shards have been allocated.
Once a node recovers its local shards, the cluster `status` switches to `yellow`,
indicating that all primary shards have been recovered, but not all replica
shards are allocated. This is to be expected because you have not yet
reenabled allocation. Delaying the allocation of replicas until all nodes
are `yellow` allows the master to allocate replicas to nodes that
already have local shard copies.
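Instead of polling, you can block until the cluster reaches at least `yellow`.
A minimal sketch, assuming a 60 second timeout is acceptable:
[source,sh]
--------------------------------------------------
GET _cluster/health?wait_for_status=yellow&timeout=60s
--------------------------------------------------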
--
. *Reenable allocation.*
+
--
When all nodes have joined the cluster and recovered their primary shards,
reenable allocation.
[source,js]
------------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "all"
}
}
------------------------------------------------------
// CONSOLE
Once allocation is reenabled, the cluster starts allocating replica shards to
the data nodes. At this point it is safe to resume indexing and searching,
but your cluster will recover more quickly if you can wait until all primary
and replica shards have been successfully allocated and the status of all nodes
is `green`.
You can monitor progress with the <<cat-health,`_cat/health`>> and
<<cat-recovery,`_cat/recovery`>> APIs:
[source,sh]
--------------------------------------------------
GET _cat/health
GET _cat/recovery
--------------------------------------------------
// CONSOLE
--


@@ -0,0 +1,17 @@
When you shut down a node, the allocation process waits for one minute
before starting to replicate the shards on that node to other nodes
in the cluster, causing a lot of wasted I/O. You can avoid racing the clock
by disabling allocation before shutting down the node:
[source,js]
--------------------------------------------------
PUT _cluster/settings
{
"persistent": {
"cluster.routing.allocation.enable": "none"
}
}
--------------------------------------------------
// CONSOLE
// TEST[skip:indexes don't assign]


@@ -0,0 +1,177 @@
[[reindex-upgrade]]
=== Reindex before upgrading
Elasticsearch can read indices created in the *previous major version*.
Older indices must be reindexed or deleted. Elasticsearch 6.x
can use indices created in Elasticsearch 5.x, but not those created in
Elasticsearch 2.x or before. Elasticsearch 5.x can use indices created in
Elasticsearch 2.x, but not those created in 1.x or before.
Elasticsearch nodes will fail to start if incompatible indices are present.
To upgrade an Elasticsearch 5.x cluster that contains indices created in 2.x,
you must reindex or delete them before upgrading to 6.x.
For more information, see <<reindex-upgrade-inplace, Reindex in place>>.
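One way to identify old indices is to check the `index.version.created` setting,
which records the version each index was created with as a numeric ID (values
starting with `2` indicate an index created in 2.x). A minimal sketch using
response filtering:
[source,sh]
--------------------------------------------------
GET _all/_settings?filter_path=*.settings.index.version.created
--------------------------------------------------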
To upgrade an Elasticsearch cluster running 2.x, you have two options:
* Perform a <<restart-upgrade, full cluster restart upgrade>> to 5.6,
<<reindex-upgrade-inplace, reindex>> the 2.x indices, then perform a
<<rolling-upgrades, rolling upgrade>> to 6.x. If your Elasticsearch 2.x
cluster contains indices that were created before 2.x, you must either
delete or reindex them before upgrading to 5.6. For more information about
upgrading from 2.x to 5.6, see https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html[
Upgrading Elasticsearch] in the Elasticsearch 5.6 Reference.
* Create a new 6.x cluster and <<reindex-upgrade-remote, reindex from
remote>> to import indices directly from the 2.x cluster.
To upgrade an Elasticsearch 1.x cluster, you have two options:
* Perform a <<restart-upgrade, full cluster restart upgrade>> to Elasticsearch
2.4.x and <<reindex-upgrade-inplace, reindex>> or delete the 1.x indices.
Then, perform a full cluster restart upgrade to 5.6 and reindex or delete
the 2.x indices. Finally, perform a <<rolling-upgrades, rolling upgrade>>
to 6.x. For more information about upgrading from 1.x to 2.4, see https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-upgrade.html[
Upgrading Elasticsearch] in the Elasticsearch 2.4 Reference.
For more information about upgrading from 2.4 to 5.6, see https://www.elastic.co/guide/en/elasticsearch/reference/5.6/setup-upgrade.html[
Upgrading Elasticsearch] in the Elasticsearch 5.6 Reference.
* Create a new 6.x cluster and <<reindex-upgrade-remote, reindex from
remote>> to import indices directly from the 1.x cluster.
.Upgrading time-based indices
*******************************************
If you use time-based indices, you likely won't need to carry
pre-5.x indices forward to 6.x. Data in time-based indices
generally becomes less useful as time passes, and old indices are
deleted as they age past your retention period.
Unless you have an unusually long retention period, you can just
wait to upgrade to 6.x until all of your pre-5.x indices have
been deleted.
*******************************************
[[reindex-upgrade-inplace]]
==== Reindex in place
To manually reindex your old indices with the <<docs-reindex,`reindex` API>>:
. Create a new index and copy the mappings and settings from the old index.
. Set the `refresh_interval` to `-1` and the `number_of_replicas` to `0` for
efficient reindexing.
. Reindex all documents from the old index into the new index using the
<<docs-reindex,reindex API>>.
. Reset the `refresh_interval` and `number_of_replicas` to the values
used in the old index.
. Wait for the index status to change to `green`.
. In a single <<indices-aliases,update aliases>> request:
.. Delete the old index.
.. Add an alias with the old index name to the new index.
.. Add any aliases that existed on the old index to the new index.
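A consolidated sketch of these steps, using placeholder index names
(`old_index`, `new_index`). The mappings you would copy from the old index are
omitted, and the reset values in the third request are examples only; use the
values from your old index:
[source,js]
--------------------------------------------------
PUT new_index
{
  "settings": {
    "index.refresh_interval": "-1",
    "index.number_of_replicas": 0
  }
}

POST _reindex
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" }
}

PUT new_index/_settings
{
  "index.refresh_interval": "1s",
  "index.number_of_replicas": 1
}

POST _aliases
{
  "actions": [
    { "remove_index": { "index": "old_index" } },
    { "add": { "index": "new_index", "alias": "old_index" } }
  ]
}
--------------------------------------------------
The `remove_index` action deletes the old index in the same aliases update that
adds the alias, which keeps the swap to the single request described in the
last step.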
// Flag this as X-Pack and conditionally include at GA.
// Need to update the CSS to override sidebar titles.
[role="xpack"]
.Migration assistance and upgrade tools
*******************************************
{xpack} 5.6 provides migration assistance and upgrade tools that simplify
reindexing and upgrading to 6.x. These tools are free with the X-Pack trial
and Basic licenses and you can use them to upgrade whether or not X-Pack is a
regular part of your Elastic Stack. For more information, see
{stack-guide}/upgrading-elastic-stack.html.
*******************************************
[[reindex-upgrade-remote]]
==== Reindex from a remote cluster
You can use <<reindex-from-remote,reindex from remote>> to migrate indices from
your old cluster to a new 6.x cluster. This enables you to move to 6.x from a
pre-5.6 cluster without interrupting service.
[WARNING]
=============================================
Elasticsearch provides backwards compatibility support that enables
indices from the previous major version to be upgraded to the
current major version. Skipping a major version means that you must
resolve any backward compatibility issues yourself.
=============================================
To migrate your indices:
. Set up a new 6.x cluster alongside your old cluster. Enable it to access
your old cluster by adding your old cluster to the `reindex.remote.whitelist` in `elasticsearch.yml`:
+
--
[source,yaml]
--------------------------------------------------
reindex.remote.whitelist: oldhost:9200
--------------------------------------------------
[NOTE]
=============================================
The new cluster doesn't have to start fully-scaled out. As you migrate
indices and shift the load to the new cluster, you can add nodes to the new
cluster and remove nodes from the old one.
=============================================
--
. For each index that you need to migrate to the 6.x cluster:
.. Create a new index in 6.x with the appropriate mappings and settings. Set the
`refresh_interval` to `-1` and set `number_of_replicas` to `0` for
faster reindexing.
.. <<reindex-from-remote,Reindex from remote>> to pull documents from the
old index into the new 6.x index:
+
--
[source,js]
--------------------------------------------------
POST _reindex
{
"source": {
"remote": {
"host": "http://oldhost:9200",
"username": "user",
"password": "pass"
},
"index": "source",
"query": {
"match": {
"test": "data"
}
}
},
"dest": {
"index": "dest"
}
}
--------------------------------------------------
// CONSOLE
// TEST[setup:host]
// TEST[s/^/PUT source\n/]
// TEST[s/oldhost:9200",/\${host}"/]
// TEST[s/"username": "user",//]
// TEST[s/"password": "pass"//]
If you run the reindex job in the background by setting `wait_for_completion`
to `false`, the reindex request returns a `task_id` you can use to
monitor progress of the reindex job with the <<tasks,task API>>:
`GET _tasks/TASK_ID`.
--
.. When the reindex job completes, set the `refresh_interval` and
`number_of_replicas` to the desired values (the default settings are
`1s` and `1`).
.. Once replication is complete and the status of the new index is `green`,
you can delete the old index.
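For example, a minimal sketch that waits for the new index (named `dest`, as in
the example above) to reach `green` and then deletes the old index; the timeout
value is arbitrary:
[source,sh]
--------------------------------------------------
GET _cluster/health/dest?wait_for_status=green&timeout=60s

DELETE source
--------------------------------------------------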


@@ -0,0 +1,159 @@
[[rolling-upgrades]]
=== Rolling upgrades
A rolling upgrade allows an Elasticsearch cluster to be upgraded one node at
a time so upgrading does not interrupt service. Running multiple versions of
Elasticsearch in the same cluster beyond the duration of an upgrade is
not supported, as shards cannot be replicated from upgraded nodes to nodes
running the older version.
Rolling upgrades can be performed between minor versions. Elasticsearch
6.x supports rolling upgrades from *Elasticsearch 5.6*.
Upgrading from earlier 5.x versions requires a <<restart-upgrade,
full cluster restart>>. You must <<reindex-upgrade,reindex to upgrade>> from
versions prior to 5.x.
To perform a rolling upgrade:
. *Disable shard allocation*.
+
--
include::disable-shard-alloc.asciidoc[]
--
. *Stop non-essential indexing and perform a synced flush.* (Optional)
+
--
While you can continue indexing during the upgrade, shard recovery
is much faster if you temporarily stop non-essential indexing and perform a
<<indices-synced-flush, synced-flush>>.
include::synced-flush.asciidoc[]
--
. [[upgrade-node]] *Shut down a single node*.
+
--
include::shut-down-node.asciidoc[]
--
. *Upgrade the node you shut down.*
+
--
include::upgrade-node.asciidoc[]
include::set-paths-tip.asciidoc[]
--
. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed Elasticsearch plugin. All plugins must be upgraded when you upgrade
a node.
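+
For example, a minimal sketch assuming the `analysis-icu` plugin is installed;
substitute the plugins you actually use:
+
[source,sh]
--------------------------------------------------
# analysis-icu is an example; remove and reinstall each of your plugins
bin/elasticsearch-plugin remove analysis-icu
bin/elasticsearch-plugin install analysis-icu
--------------------------------------------------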
. *Start the upgraded node.*
+
--
Start the newly-upgraded node and confirm that it joins the cluster by checking
the log file or by submitting a `_cat/nodes` request:
[source,sh]
--------------------------------------------------
GET _cat/nodes
--------------------------------------------------
// CONSOLE
--
. *Reenable shard allocation.*
+
--
Once the node has joined the cluster, reenable shard allocation to start using
the node:
[source,js]
--------------------------------------------------
PUT _cluster/settings
{
"transient": {
"cluster.routing.allocation.enable": "all"
}
}
--------------------------------------------------
// CONSOLE
--
. *Wait for the node to recover.*
+
--
Before upgrading the next node, wait for the cluster to finish shard allocation.
You can check progress by submitting a <<cat-health,`_cat/health`>> request:
[source,sh]
--------------------------------------------------
GET _cat/health
--------------------------------------------------
// CONSOLE
Wait for the `status` column to switch from `yellow` to `green`. Once the
status is `green`, all primary and replica shards have been allocated.
[IMPORTANT]
====================================================
During a rolling upgrade, primary shards assigned to a node running the new
version cannot have their replicas assigned to a node with the old
version. The new version might have a different data format that is
not understood by the old version.
If it is not possible to assign the replica shards to another node
(there is only one upgraded node in the cluster), the replica
shards remain unassigned and status stays `yellow`.
In this case, you can proceed once there are no initializing or relocating shards
(check the `init` and `relo` columns).
As soon as another node is upgraded, the replicas can be assigned and the
status will change to `green`.
====================================================
Shards that were not <<indices-synced-flush,sync-flushed>> might take longer to
recover. You can monitor the recovery status of individual shards by
submitting a <<cat-recovery,`_cat/recovery`>> request:
[source,sh]
--------------------------------------------------
GET _cat/recovery
--------------------------------------------------
// CONSOLE
If you stopped indexing, it is safe to resume indexing as soon as
recovery completes.
--
. *Repeat*
+
--
When the node has recovered and the cluster is stable, repeat these steps
for each node that needs to be upgraded.
--
[IMPORTANT]
====================================================
During a rolling upgrade, the cluster continues to operate normally. However,
any new functionality is disabled or operates in a backward compatible mode
until all nodes in the cluster are upgraded. New functionality
becomes operational once the upgrade is complete and all nodes are running the
new version. Once that has happened, there's no way to return to operating
in a backward compatible mode. Nodes running the previous major version will
not be allowed to join the fully-updated cluster.
In the unlikely case of a network malfunction during the upgrade process that
isolates all remaining old nodes from the cluster, you must take the
old nodes offline and upgrade them to enable them to join the cluster.
====================================================


@@ -0,0 +1,18 @@
[TIP]
================================================
When you extract the zip or tarball packages, the `elasticsearch-n.n.n`
directory contains the Elasticsearch `config`, `data`, `logs` and
`plugins` directories.
We recommend moving these directories out of the Elasticsearch directory
so that there is no chance of deleting them when you upgrade Elasticsearch.
To specify the new locations, use the `ES_PATH_CONF` environment
variable and the `path.data` and `path.logs` settings. For more information,
see <<important-settings,Important Elasticsearch configuration>>.
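For example, a minimal sketch using a placeholder path:
[source,sh]
--------------------------------------------------
# placeholder location; point this at your external config directory
export ES_PATH_CONF=/etc/elasticsearch
--------------------------------------------------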
The <<deb,Debian>> and <<rpm,RPM>> packages place these directories in the
appropriate place for each operating system. In production, we recommend
installing using the deb or rpm package.
================================================


@@ -0,0 +1,20 @@
* If you are running Elasticsearch with `systemd`:
+
[source,sh]
--------------------------------------------------
sudo systemctl stop elasticsearch.service
--------------------------------------------------
* If you are running Elasticsearch with SysV `init`:
+
[source,sh]
--------------------------------------------------
sudo -i service elasticsearch stop
--------------------------------------------------
* If you are running Elasticsearch as a daemon:
+
[source,sh]
--------------------------------------------------
kill $(cat pid)
--------------------------------------------------


@@ -0,0 +1,11 @@
[source,sh]
--------------------------------------------------
POST _flush/synced
--------------------------------------------------
// CONSOLE
When you perform a synced flush, check the response to make sure there are
no failures. Synced flush operations that fail due to pending indexing
operations are listed in the response body, although the request itself
still returns a 200 OK status. If there are failures, reissue the request.
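For illustration, an abbreviated sketch of a response that reports failures;
the index name, counts, and reason string are placeholders:
[source,js]
--------------------------------------------------
{
  "_shards": { "total": 4, "successful": 2, "failed": 2 },
  "my_index": {
    "total": 4,
    "successful": 2,
    "failed": 2,
    "failures": [
      { "shard": 1, "reason": "[1] ongoing operations on primary" }
    ]
  }
}
--------------------------------------------------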


@@ -0,0 +1,23 @@
To upgrade using a <<deb,Debian>> or <<rpm,RPM>> package:
* Use `rpm` or `dpkg` to install the new package. All files are
installed in the appropriate location for the operating system
and Elasticsearch config files are not overwritten.
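For example, the package file names below are placeholders for the version you
downloaded:
[source,sh]
--------------------------------------------------
# Debian/Ubuntu (placeholder file name)
sudo dpkg -i elasticsearch-6.0.0.deb
# RHEL/CentOS (placeholder file name)
sudo rpm --upgrade elasticsearch-6.0.0.rpm
--------------------------------------------------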
To upgrade using a zip or compressed tarball:
.. Extract the zip or tarball to a _new_ directory. This is critical if you
are not using external `config` and `data` directories.
.. Set the `ES_PATH_CONF` environment variable to specify the location of
your external `config` directory and `jvm.options` file. If you are not
using an external `config` directory, copy your old configuration
over to the new installation.
.. Set `path.data` in `config/elasticsearch.yml` to point to your external
data directory. If you are not using an external `data` directory, copy
your old data directory over to the new installation.
.. Set `path.logs` in `config/elasticsearch.yml` to point to the location
where you want to store your logs. If you do not specify this setting,
logs are stored in the directory you extracted the archive to.
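A minimal sketch of the corresponding `config/elasticsearch.yml` entries, using
placeholder paths:
[source,yaml]
--------------------------------------------------
# placeholder locations; use your own external directories
path:
  data: /var/lib/elasticsearch
  logs: /var/log/elasticsearch
--------------------------------------------------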