[[restart-upgrade]]
== Full cluster restart upgrade

To upgrade directly to {es} {version} from versions 6.0-6.6, you must shut down
all nodes in the cluster, upgrade each node to {version}, and restart the
cluster.

NOTE: If you are running a version prior to 6.0,
https://www.elastic.co/guide/en/elastic-stack/6.7/upgrading-elastic-stack.html[upgrade to 6.7]
and reindex your old indices or bring up a new {version} cluster and <>.

To perform a full cluster restart upgrade to {version}:

. *Disable shard allocation.*
+
--
include::disable-shard-alloc.asciidoc[]
--

. *Stop indexing and perform a synced flush.*
+
--
Performing a <> speeds up shard recovery.

include::synced-flush.asciidoc[]
--

. *Stop any machine learning jobs that are running.*
+
--
include::close-ml.asciidoc[]
--

. *Shut down all nodes.*
+
--
include::shut-down-node.asciidoc[]
--

. *Upgrade all nodes.*
+
--
include::remove-xpack.asciidoc[]
--
+
--
include::upgrade-node.asciidoc[]

include::set-paths-tip.asciidoc[]
--

. *Upgrade any plugins.*
+
Use the `elasticsearch-plugin` script to install the upgraded version of each
installed {es} plugin. All plugins must be upgraded when you upgrade a node.

. If you use {es} {security-features} to define realms, verify that your realm
settings are up to date. The format of realm settings changed in version 7.0;
in particular, the placement of the realm type changed. See <>.

. *Start each upgraded node.*
+
--
If you have dedicated master nodes, start them first and wait for them to form
a cluster and elect a master before proceeding with your data nodes. You can
check progress by looking at the logs.

If upgrading from a 6.x cluster, you must <> by setting the
`cluster.initial_master_nodes` setting.

As soon as enough master-eligible nodes have discovered each other, they form a
cluster and elect a master. At that point, you can use <> and <> to monitor
nodes joining the cluster:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/nodes
--------------------------------------------------
// CONSOLE

The `status` column returned by `_cat/health` shows the health of each node in
the cluster: `red`, `yellow`, or `green`.
--

. *Wait for all nodes to join the cluster and report a status of yellow.*
+
--
When a node joins the cluster, it begins to recover any primary shards that are
stored locally. The <> API initially reports a `status` of `red`, indicating
that not all primary shards have been allocated.

Once a node recovers its local shards, the cluster `status` switches to
`yellow`, indicating that all primary shards have been recovered, but not all
replica shards are allocated. This is to be expected because you have not yet
reenabled allocation. Delaying the allocation of replicas until all nodes are
`yellow` allows the master to allocate replicas to nodes that already have
local shard copies.
--

. *Reenable allocation.*
+
--
When all nodes have joined the cluster and recovered their primary shards,
reenable allocation by restoring `cluster.routing.allocation.enable` to its
default:

[source,js]
------------------------------------------------------
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
------------------------------------------------------
// CONSOLE

Once allocation is reenabled, the cluster starts allocating replica shards to
the data nodes.
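As a quick sanity check (a sketch, not a required step), you can retrieve the
cluster settings and confirm that the `cluster.routing.allocation.enable`
override no longer appears among the persistent settings:

[source,sh]
--------------------------------------------------
# Sanity check only: after setting the value to null, the override
# should be absent from the "persistent" section of the response.
GET _cluster/settings
--------------------------------------------------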
At this point it is safe to resume indexing and searching, but your cluster
will recover more quickly if you can wait until all primary and replica shards
have been successfully allocated and the status of all nodes is `green`.

You can monitor progress with the <> and <> APIs:

[source,sh]
--------------------------------------------------
GET _cat/health

GET _cat/recovery
--------------------------------------------------
// CONSOLE
--

. *Restart machine learning jobs.*
+
--
include::open-ml.asciidoc[]
--
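The include above contains the specific steps. As a rough sketch, assuming you
paused machine learning with the set upgrade mode API before shutting down the
cluster (rather than closing each job individually), resuming would look like
this:

[source,sh]
--------------------------------------------------
# Assumes ML was paused with set_upgrade_mode before the upgrade;
# if jobs were closed individually, reopen them instead.
POST _ml/set_upgrade_mode?enabled=false
--------------------------------------------------

Once upgrade mode is disabled, jobs and datafeeds that were active before the
upgrade should resume from their persisted state.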