[[modules-snapshots]]
== Snapshot And Restore
The snapshot and restore module allows the creation of snapshots of individual indices or of an entire cluster into a remote
repository. At the time of the initial release only the shared file system repository was supported, but now a range of
backends is available via officially supported repository plugins.
[float]
=== Repositories
Before any snapshot or restore operation can be performed, a snapshot repository must be registered in
Elasticsearch. The following command registers a shared file system repository with the name `my_backup` that
will use the location `/mount/backups/my_backup` to store snapshots.
[source,js]
-----------------------------------
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/mount/backups/my_backup",
        "compress": true
    }
}'
-----------------------------------
Once a repository is registered, its information can be obtained using the following command:
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot/my_backup?pretty'
-----------------------------------
[source,js]
-----------------------------------
{
  "my_backup" : {
    "type" : "fs",
    "settings" : {
      "compress" : "true",
      "location" : "/mount/backups/my_backup"
    }
  }
}
-----------------------------------
If a repository name is not specified, or `_all` is used as the repository name, Elasticsearch will return information
about all repositories currently registered in the cluster:
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot'
-----------------------------------
or
[source,js]
-----------------------------------
$ curl -XGET 'http://localhost:9200/_snapshot/_all'
-----------------------------------
[float]
===== Shared File System Repository
The shared file system repository (`"type": "fs"`) uses a shared file system to store snapshots. The path
specified in the `location` parameter must point to the same location in the shared filesystem and be accessible
on all data and master nodes. The following settings are supported:
[horizontal]
`location`:: Location of the snapshots. Mandatory.
`compress`:: Turns on compression of the snapshot files. Compression is applied only to metadata files (index mappings and settings). Data files are not compressed. Defaults to `true`.
`chunk_size`:: Big files can be broken down into chunks during snapshotting if needed. The chunk size can be specified in bytes or by
using size value notation, i.e. `1g`, `10m`, `5k`. Defaults to `null` (unlimited chunk size).
`max_restore_bytes_per_sec`:: Throttles the per node restore rate. Defaults to `20mb` per second.
`max_snapshot_bytes_per_sec`:: Throttles the per node snapshot rate. Defaults to `20mb` per second.
`verify`:: Verify the repository upon creation. Defaults to `true`.
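
These settings are provided at registration time. As an illustrative sketch (the values shown are examples, not recommendations), a repository with chunking and snapshot throttling enabled could be registered like this:

[source,js]
-----------------------------------
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
    "type": "fs",
    "settings": {
        "location": "/mount/backups/my_backup",
        "chunk_size": "100m",
        "max_snapshot_bytes_per_sec": "40mb"
    }
}'
-----------------------------------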
[float]
===== Read-only URL Repository
The URL repository (`"type": "url"`) can be used as an alternative read-only way to access data created by the shared file
system repository. The URL specified in the `url` parameter should
point to the root of the shared filesystem repository. The following settings are supported:
[horizontal]
`url`:: Location of the snapshots. Mandatory.
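
For illustration only (the repository name and `file:` URL below are assumptions, not part of the original example), a read-only view of the shared file system repository could be registered like this:

[source,js]
-----------------------------------
$ curl -XPUT 'http://localhost:9200/_snapshot/my_backup_readonly' -d '{
    "type": "url",
    "settings": {
        "url": "file:///mount/backups/my_backup"
    }
}'
-----------------------------------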
[float]
===== Repository plugins
Other repository backends are available in these official plugins:
* https://github.com/elasticsearch/elasticsearch-cloud-aws#s3-repository[AWS Cloud Plugin] for S3 repositories
* https://github.com/elasticsearch/elasticsearch-hadoop/tree/master/repository-hdfs[HDFS Plugin] for Hadoop environments
* https://github.com/elasticsearch/elasticsearch-cloud-azure#azure-repository[Azure Cloud Plugin] for Azure storage repositories

[float]
===== Repository Verification
added[1.4.0]
When a repository is registered, it is immediately verified on all master and data nodes to make sure that it is functional
on all nodes currently present in the cluster. The verification process can also be executed manually by running the
following command:
[source,js]
-----------------------------------
$ curl -XPOST 'http://localhost:9200/_snapshot/my_backup/_verify'
-----------------------------------
It returns a list of nodes where the repository was successfully verified, or an error message if the verification process failed.
[float]
=== Snapshot
A repository can contain multiple snapshots of the same cluster. Snapshots are identified by unique names within the
cluster. A snapshot with the name `snapshot_1` in the repository `my_backup` can be created by executing the following
command:
[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true"
-----------------------------------
The `wait_for_completion` parameter specifies whether the request should return immediately after snapshot
initialization (the default) or wait for snapshot completion. During snapshot initialization, information about all
previous snapshots is loaded into memory, which means that in large repositories it may take several seconds (or
even minutes) for this command to return even if the `wait_for_completion` parameter is set to `false`.

By default, a snapshot of all open and started indices in the cluster is created. This behavior can be changed by
specifying the list of indices in the body of the snapshot request.
[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_1" -d '{
    "indices": "index_1,index_2",
    "ignore_unavailable": "true",
    "include_global_state": false
}'
-----------------------------------
The list of indices that should be included in the snapshot can be specified using the `indices` parameter, which
supports <<search-multi-index-type,multi index syntax>>. The snapshot request also supports the
`ignore_unavailable` option. Setting it to `true` will cause indices that do not exist to be ignored during snapshot
creation. By default, when the `ignore_unavailable` option is not set and an index is missing, the snapshot request will fail.
By setting `include_global_state` to `false` it's possible to prevent the cluster global state from being stored as part of
the snapshot. By default, the entire snapshot will fail if one or more indices participating in the snapshot don't have
all primary shards available. This behaviour can be changed by setting `partial` to `true`.
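Combining these options, a sketch of a snapshot request that tolerates missing primary shards might look like this (the snapshot name `snapshot_2` is illustrative):

[source,js]
-----------------------------------
$ curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot_2?wait_for_completion=true" -d '{
    "indices": "index_1,index_2",
    "partial": true
}'
-----------------------------------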
The index snapshot process is incremental. When making an index snapshot, Elasticsearch analyses
the list of index files that are already stored in the repository and copies only the files that were created or
changed since the last snapshot. This allows multiple snapshots to be preserved in the repository in a compact form.

The snapshotting process is executed in a non-blocking fashion. All indexing and searching operations can continue to be
executed against the index that is being snapshotted. However, a snapshot represents a point-in-time view of the index
at the moment the snapshot was created, so no records that were added to the index after the snapshot process started
will be present in the snapshot. The snapshot process starts immediately for the primary shards that have been started
and are not relocating at the moment. Before version 1.2.0, the snapshot operation fails if the cluster has any relocating or
initializing primaries of indices participating in the snapshot. Starting with version 1.2.0, Elasticsearch waits for
relocation or initialization of shards to complete before snapshotting them.
Besides creating a copy of each index the snapshot process can also store global cluster metadata, which includes persistent
cluster settings and templates. The transient settings and registered snapshot repositories are not stored as part of
the snapshot.
Only one snapshot process can be executed in the cluster at any time. While a snapshot of a particular shard is being
created, this shard cannot be moved to another node, which can interfere with the rebalancing process and allocation
filtering. Once the snapshot of the shard is finished, Elasticsearch will be able to move the shard to another node according
to the current allocation filtering settings and rebalancing algorithm.
Once a snapshot is created, information about it can be obtained using the following command:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1"
-----------------------------------
All snapshots currently stored in the repository can be listed using the following command:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/_all"
-----------------------------------
A snapshot can be deleted from the repository using the following command:
[source,shell]
-----------------------------------
$ curl -XDELETE "localhost:9200/_snapshot/my_backup/snapshot_1"
-----------------------------------
When a snapshot is deleted from a repository, Elasticsearch deletes all files that are associated with the deleted
snapshot and not used by any other snapshots. If the delete snapshot operation is executed while the snapshot is being
created, the snapshotting process will be aborted and all files created as part of it will be
cleaned up. Therefore, the delete snapshot operation can be used to cancel long-running snapshot operations that were
started by mistake.
[float]
=== Restore
A snapshot can be restored using the following command:
[source,shell]
-----------------------------------
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore"
-----------------------------------
By default, all indices in the snapshot as well as the cluster state are restored. It's possible to select which indices
should be restored, as well as to prevent the global cluster state from being restored, by using the `indices` and
`include_global_state` options in the restore request body. The list of indices supports
<<search-multi-index-type,multi index syntax>>. The `rename_pattern` and `rename_replacement` options can also be used to
rename indices on restore using a regular expression that supports referencing the original text, as explained
http://docs.oracle.com/javase/6/docs/api/java/util/regex/Matcher.html#appendReplacement(java.lang.StringBuffer,%20java.lang.String)[here].
Set `include_aliases` to `false` to prevent aliases from being restored together with their associated indices.
[source,js]
-----------------------------------
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -d '{
    "indices": "index_1,index_2",
    "ignore_unavailable": "true",
    "include_global_state": false,
    "rename_pattern": "index_(.+)",
    "rename_replacement": "restored_index_$1"
}'
-----------------------------------
The restore operation can be performed on a functioning cluster. However, an existing index can only be restored if it's
<<indices-open-close,closed>>. The restore operation automatically opens restored indices if they were closed and creates new indices if they
didn't exist in the cluster. If the cluster state is restored, the restored templates that don't currently exist in the
cluster are added and existing templates with the same name are replaced by the restored templates. The restored
persistent settings are added to the existing persistent settings.

[float]
=== Partial restore
By default, the entire restore operation will fail if one or more indices participating in the operation don't have
snapshots of all shards available. This can occur if some shards failed to snapshot, for example. It is still possible to
restore such indices by setting `partial` to `true`. Please note that only successfully snapshotted shards will be
restored in this case; all missing shards will be recreated empty.
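As a sketch, a restore request that accepts partially snapshotted indices might look like this:

[source,js]
-----------------------------------
$ curl -XPOST "localhost:9200/_snapshot/my_backup/snapshot_1/_restore" -d '{
    "partial": true
}'
-----------------------------------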
[float]
=== Snapshot status
A list of currently running snapshots with their detailed status information can be obtained using the following command:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/_status"
-----------------------------------
In this format, the command will return information about all currently running snapshots. By specifying a repository name, it's possible
to limit the results to a particular repository:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/_status"
-----------------------------------
If both repository name and snapshot id are specified, this command will return detailed status information for the given snapshot even
if it's not currently running:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1/_status"
-----------------------------------
Multiple ids are also supported:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1,snapshot_2/_status"
-----------------------------------
[float]
=== Monitoring snapshot/restore progress
There are several ways to monitor the progress of the snapshot and restore processes while they are running. Both
operations support the `wait_for_completion` parameter, which blocks the client until the operation is completed. This is
the simplest method of getting notified about operation completion.

The snapshot operation can also be monitored by periodic calls to the snapshot info API:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1"
-----------------------------------
Please note that the snapshot info operation uses the same resources and thread pool as the snapshot operation. So,
executing the snapshot info operation while large shards are being snapshotted can cause it to wait
for available resources before returning the result. On very large shards the wait time can be significant.

To get more immediate and complete information about snapshots, the snapshot status command can be used instead:
[source,shell]
-----------------------------------
$ curl -XGET "localhost:9200/_snapshot/my_backup/snapshot_1/_status"
-----------------------------------
While the snapshot info method returns only basic information about the snapshot in progress, the snapshot status method
returns a complete breakdown of the current state for each shard participating in the snapshot.

The restore process piggybacks on the standard recovery mechanism of Elasticsearch. As a result, standard recovery
monitoring services can be used to monitor the state of restore. When a restore operation is executed, the cluster
typically goes into the `red` state. This happens because the restore operation starts with "recovering" the primary shards of the
restored indices. During this operation the primary shards become unavailable, which manifests itself in the `red` cluster
state. Once recovery of the primary shards is completed, Elasticsearch switches to the standard replication process that
creates the required number of replicas; at this moment the cluster switches to the `yellow` state. Once all required replicas
are created, the cluster switches to the `green` state.

The cluster health operation provides only a high level status of the restore process. It's possible to get more
detailed insight into the current state of the recovery process by using the <<indices-recovery, indices recovery>> and
<<cat-recovery, cat recovery>> APIs.
[float]
=== Stopping currently running snapshot and restore operations
The snapshot and restore framework allows running only one snapshot or one restore operation at a time. If a currently
running snapshot was executed by mistake, or takes unusually long, it can be terminated using the snapshot delete operation.
The snapshot delete operation checks whether the deleted snapshot is currently running and, if it is, stops that
snapshot before deleting the snapshot data from the repository.

The restore operation uses the standard shard recovery mechanism. Therefore, any currently running restore operation can
be canceled by deleting the indices that are being restored. Please note that data for all deleted indices will be removed
from the cluster as a result of this operation.