[[cat-recovery]]
=== cat recovery
The `recovery` command is a view of index shard recoveries, both ongoing and previously
completed. It is a more compact view of the JSON <<indices-recovery,recovery>> API.
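For comparison, the full JSON form of the same information can be fetched from the
indices recovery API; a brief illustration, run against the same `twitter` index used
in the examples below:
[source,js]
----------------------------------------------------------------------------
GET twitter/_recovery?human
----------------------------------------------------------------------------
// CONSOLE
// TEST[setup:twitter]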
A recovery event occurs anytime an index shard moves to a different node in the cluster.
This can happen during a snapshot recovery, a change in replication level, node failure, or
on node startup. This last type is called a local store recovery and is the normal
way for shards to be loaded from disk when a node starts up.
As an example, here is what the recovery state of a cluster may look like when there
are no shards in transit from one node to another:
[source,js]
----------------------------------------------------------------------------
GET _cat/recovery?v
----------------------------------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
The response of this request will be something like:
[source,txt]
---------------------------------------------------------------------------
index shard time type stage source_host source_node target_host target_node repository snapshot files files_recovered files_percent files_total bytes bytes_recovered bytes_percent bytes_total translog_ops translog_ops_recovered translog_ops_percent
twitter 0 13ms store done n/a n/a 127.0.0.1 node-0 n/a n/a 0 0 100% 13 0 0 100% 9928 0 0 100.0%
---------------------------------------------------------------------------
// TESTRESPONSE[s/store/empty_store/]
// TESTRESPONSE[s/100%/0.0%/]
// TESTRESPONSE[s/9928/0/]
// TESTRESPONSE[s/13ms/[0-9.]+m?s/]
// TESTRESPONSE[s/13/\\d+/ non_json]
In the above case, the source and target nodes are the same because the recovery
type was `store`, i.e. the shard data was read from local storage on node start.
Now let's see what a live shard recovery looks like. We can trigger one by increasing
the replica count of our index and bringing another node online to host the new
replicas.
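The replica count can be raised through the index settings API; a minimal sketch,
assuming the same `twitter` index as above:
[source,js]
----------------------------------------------------------------------------
PUT /twitter/_settings
{
  "index": {
    "number_of_replicas": 1
  }
}
----------------------------------------------------------------------------
// CONSOLE
// TEST[skip:illustrative only]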
[source,js]
----------------------------------------------------------------------------
GET _cat/recovery?v&h=i,s,t,ty,st,shost,thost,f,fp,b,bp
----------------------------------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
This will return a line like:
[source,txt]
----------------------------------------------------------------------------
i s t ty st shost thost f fp b bp
twitter 0 1252ms peer done 192.168.1.1 192.168.1.2 0 100.0% 0 100.0%
----------------------------------------------------------------------------
// TESTRESPONSE[s/peer/empty_store/]
// TESTRESPONSE[s/192.168.1.2/127.0.0.1/]
// TESTRESPONSE[s/192.168.1.1/n\/a/]
// TESTRESPONSE[s/100.0%/0.0%/]
// TESTRESPONSE[s/1252ms/[0-9.]+m?s/ non_json]
We can see in the above listing that our twitter shard was recovered from another node.
Notice that the recovery type is shown as `peer`. The files and bytes copied are
real-time measurements.
Finally, let's see what a snapshot recovery looks like. Assuming we have previously
made a backup of our index, we can restore it using the <<modules-snapshots,snapshot and restore>>
API.
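A restore request might look like the following sketch, using the repository and
snapshot names that appear in the example response below (note that an open index
generally has to be closed or deleted before it can be restored):
[source,js]
--------------------------------------------------------------------------------
POST /_snapshot/twitter/snap_1/_restore
--------------------------------------------------------------------------------
// CONSOLE
// TEST[skip:no snapshot repository is registered in this setup]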
[source,js]
--------------------------------------------------------------------------------
GET _cat/recovery?v&h=i,s,t,ty,st,rep,snap,f,fp,b,bp
--------------------------------------------------------------------------------
// CONSOLE
// TEST[skip:no need to execute snapshot/restore here]
This will show a recovery of type `snapshot` in the response:
[source,txt]
--------------------------------------------------------------------------------
i s t ty st rep snap f fp b bp
twitter 0 1978ms snapshot done twitter snap_1 79 8.0% 12086 9.0%
--------------------------------------------------------------------------------
// TESTRESPONSE[non_json]