OpenSearch/docs/reference/cat/recovery.asciidoc

2013-11-14 20:14:39 -05:00
[[cat-recovery]]
== cat recovery
The `recovery` command is a view of index shard recoveries, both ongoing and previously
completed. It is a more compact view of the JSON <<indices-recovery,recovery>> API.
A recovery event occurs anytime an index shard moves to a different node in the cluster.
This can happen during a snapshot recovery, a change in replication level, node failure, or
on node startup. This last type is called a local store recovery and is the normal
way for shards to be loaded from disk when a node starts up.

As an example, here is what the recovery state of a cluster may look like when there
are no shards in transit from one node to another:
[source,sh]
----------------------------------------------------------------------------
> curl -XGET 'localhost:9200/_cat/recovery?v'
index shard time type stage source_host target_host repository snapshot files files_percent bytes bytes_percent total_files total_bytes translog translog_percent total_translog
index 0 87ms store done 127.0.0.1 127.0.0.1 n/a n/a 0 0.0% 0 0.0% 0 0 0 100.0% 0
index 1 97ms store done 127.0.0.1 127.0.0.1 n/a n/a 0 0.0% 0 0.0% 0 0 0 100.0% 0
index 2 93ms store done 127.0.0.1 127.0.0.1 n/a n/a 0 0.0% 0 0.0% 0 0 0 100.0% 0
index 3 90ms store done 127.0.0.1 127.0.0.1 n/a n/a 0 0.0% 0 0.0% 0 0 0 100.0% 0
index 4 9ms store done 127.0.0.1 127.0.0.1 n/a n/a 0 0.0% 0 0.0% 0 0 0 100.0% 0
----------------------------------------------------------------------------

In the above case, the source and target nodes are the same because the recovery
type was `store`, i.e. the shards were read from local storage on node startup.
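
Because the listing is plain columnar text, it is easy to post-process with
standard shell tools. As a rough sketch (the sample rows here are ours, copied
from listings on this page, not live API output), we could filter a captured
listing down to the shards whose recovery has not yet reached the `done` stage,
which is the fifth column in the default output:

[source,sh]
----------------------------------------------------------------------------
# Sample rows in the default column order shown above; on a live cluster,
# pipe in:  curl -s 'localhost:9200/_cat/recovery'
printf '%s\n' \
    'wiki 0 1672ms replica index hostA hostB' \
    'wiki 1 4812ms store done hostA hostA' |
awk '$5 != "done"'    # keep only recoveries still in progress
# -> wiki 0 1672ms replica index hostA hostB
----------------------------------------------------------------------------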

Now let's see what a live recovery looks like. By increasing the replica count
of our index and bringing another node online to host the replicas, we can
watch the shard data stream from one node to the other.

[source,sh]
----------------------------------------------------------------------------
> curl -XPUT 'localhost:9200/wiki/_settings' -d'{"number_of_replicas":1}'
{"acknowledged":true}
> curl -XGET 'localhost:9200/_cat/recovery?v&h=i,s,t,ty,st,shost,thost,f,fp,b,bp'
i s t ty st shost thost f fp b bp
wiki 0 1252ms store done hostA hostA 4 100.0% 23638870 100.0%
wiki 0 1672ms replica index hostA hostB 4 75.0% 23638870 48.8%
wiki 1 1698ms replica index hostA hostB 4 75.0% 23348540 49.4%
wiki 1 4812ms store done hostA hostA 33 100.0% 24501912 100.0%
wiki 2 1689ms replica index hostA hostB 4 75.0% 28681851 40.2%
wiki 2 5317ms store done hostA hostA 36 100.0% 30267222 100.0%
----------------------------------------------------------------------------
We can see in the above listing that our 3 initial shards are in various stages
of being replicated from one node to another. Notice that the recovery type is
shown as `replica`. The files and bytes copied are real-time measurements.
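
Note that `b` is identical across the `store` and `replica` rows for each
shard while `bp` differs, which suggests that here `b` is the shard's total
byte size and `bp` the fraction copied so far. Under that assumption (it is
an inference from the listing, not documented behavior), a small awk sketch
can estimate how many bytes each in-flight replica still has to transfer:

[source,sh]
----------------------------------------------------------------------------
# i, shard, b, bp -- values copied from the listing above; on a live
# cluster, pipe in:  curl -s 'localhost:9200/_cat/recovery?h=i,s,b,bp'
printf '%s\n' \
    'wiki 0 23638870 48.8%' \
    'wiki 1 23348540 49.4%' \
    'wiki 2 28681851 40.2%' |
awk '{ pct = $4 + 0                          # "48.8%" -> 48.8
       printf "%s %s remaining=%d\n", $1, $2, $3 * (1 - pct / 100) }'
----------------------------------------------------------------------------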

Finally, let's see what a snapshot recovery looks like. Assuming we have
previously made a backup of our index, we can restore it using the
<<modules-snapshots,snapshot and restore>> API.
[source,sh]
--------------------------------------------------------------------------------
> curl -XPOST 'localhost:9200/_snapshot/imdb/snapshot_2/_restore'
{"acknowledged":true}
> curl -XGET 'localhost:9200/_cat/recovery?v&h=i,s,t,ty,st,rep,snap,f,fp,b,bp'
i s t ty st rep snap f fp b bp
imdb 0 1978ms snapshot done imdb snap_1 79 8.0% 12086 9.0%
imdb 1 2790ms snapshot index imdb snap_1 88 7.7% 11025 8.1%
imdb 2 2790ms snapshot index imdb snap_1 85 0.0% 12072 0.0%
imdb 3 2796ms snapshot index imdb snap_1 85 2.4% 12048 7.2%
imdb 4 819ms snapshot init imdb snap_1 0 0.0% 0 0.0%
--------------------------------------------------------------------------------
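
A restore script will often want to block until every recovery has finished.
One hypothetical way to do that (the `active_recoveries` helper below is ours,
not part of the API) is to count the rows whose stage column is not `done` and
poll until that count reaches zero:

[source,sh]
--------------------------------------------------------------------------------
# Count _cat/recovery rows (read from stdin) whose stage -- the fifth
# column with h=i,s,t,ty,st,... as above -- is not "done".
active_recoveries() {
    awk '$5 != "done" { n++ } END { print n + 0 }'
}

# Demonstrated on sample rows from the listing above; shard 0 is done.
printf '%s\n' \
    'imdb 0 1978ms snapshot done' \
    'imdb 1 2790ms snapshot index' \
    'imdb 4 819ms snapshot init' | active_recoveries   # -> 2

# On a live cluster we might poll:
#   while [ "$(curl -s 'localhost:9200/_cat/recovery?h=i,s,t,ty,st' \
#              | active_recoveries)" -gt 0 ]; do sleep 5; done
--------------------------------------------------------------------------------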