Minor edits to text in Reindex API doc (#39137)

This commit is contained in:
Darren Meiss 2019-02-25 10:38:10 -05:00 committed by Luca Cavanna
parent dc23be5a9d
commit 8f0d864ae1
1 changed file with 13 additions and 13 deletions


@@ -118,8 +118,8 @@ POST _reindex
// CONSOLE
// TEST[setup:twitter]
-By default version conflicts abort the `_reindex` process but you can just
-count them by settings `"conflicts": "proceed"` in the request body:
+By default, version conflicts abort the `_reindex` process, but you can just
+count them by setting `"conflicts": "proceed"` in the request body:
[source,js]
--------------------------------------------------
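A minimal sketch of such a request body, built in Python (the `twitter`/`new_twitter` index names follow the doc's test setup; the endpoint call itself is omitted):

```python
import json

# With "conflicts": "proceed", _reindex counts version conflicts
# instead of aborting on the first one.
body = {
    "conflicts": "proceed",
    "source": {"index": "twitter"},
    "dest": {"index": "new_twitter", "op_type": "create"},
}

print(json.dumps(body, indent=2))
```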
@@ -423,7 +423,7 @@ POST _reindex
// TEST[s/"password": "pass"//]
The `host` parameter must contain a scheme, host, port (e.g.
-`https://otherhost:9200`) and optional path (e.g. `https://otherhost:9200/proxy`).
+`https://otherhost:9200`), and optional path (e.g. `https://otherhost:9200/proxy`).
The `username` and `password` parameters are optional, and when they are present `_reindex`
will connect to the remote Elasticsearch node using basic auth. Be sure to use `https` when
using basic auth or the password will be sent in plain text.
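A sketch of a reindex-from-remote body under these rules (host, credentials, and index names are placeholders):

```python
import json

# With username/password present, _reindex connects using basic auth,
# so the scheme should be https or the password is sent in plain text.
body = {
    "source": {
        "remote": {
            "host": "https://otherhost:9200",
            "username": "user",
            "password": "pass",
        },
        "index": "source",
    },
    "dest": {"index": "dest"},
}

print(json.dumps(body, indent=2))
```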
@@ -434,7 +434,7 @@ Remote hosts have to be explicitly whitelisted in elasticsearch.yml using the
`reindex.remote.whitelist` property. It can be set to a comma delimited list
of allowed remote `host` and `port` combinations (e.g.
`otherhost:9200, another:9200, 127.0.10.*:9200, localhost:*`). Scheme is
-ignored by the whitelist - only host and port are used, for example:
+ignored by the whitelist -- only host and port are used, for example:
[source,yaml]
@@ -618,7 +618,7 @@ Defaults to the keystore password. This setting cannot be used with
In addition to the standard parameters like `pretty`, the Reindex API also
supports `refresh`, `wait_for_completion`, `wait_for_active_shards`, `timeout`,
-`scroll` and `requests_per_second`.
+`scroll`, and `requests_per_second`.
Sending the `refresh` url parameter will cause all indexes to which the request
wrote to be refreshed. This is different than the Index API's `refresh`
@@ -642,7 +642,7 @@ the `scroll` parameter to control how long it keeps the "search context" alive,
(e.g. `?scroll=10m`). The default value is 5 minutes.
`requests_per_second` can be set to any positive decimal number (`1.4`, `6`,
-`1000`, etc) and throttles the rate at which `_reindex` issues batches of index
+`1000`, etc.) and throttles the rate at which `_reindex` issues batches of index
operations by padding each batch with a wait time. The throttling can be
disabled by setting `requests_per_second` to `-1`.
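A hedged sketch of the throttling arithmetic described above: each batch is padded so that, on average, no more than `requests_per_second` operations are issued. The numbers are illustrative, not taken from the API:

```python
# Padding = (batch_size / requests_per_second) minus the time already
# spent writing the batch, floored at zero. -1 disables throttling.
def padding_seconds(batch_size, requests_per_second, write_time):
    if requests_per_second <= 0:
        return 0.0
    target = batch_size / requests_per_second
    return max(target - write_time, 0.0)

# A 1000-doc batch at 500 req/s should take 2s; if writing took 0.5s,
# _reindex would wait roughly 1.5s before issuing the next batch.
print(padding_seconds(1000, 500, 0.5))
```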
@@ -839,7 +839,7 @@ The response looks like:
}
--------------------------------------------------
// TESTRESPONSE
-<1> this object contains the actual status. It is identical to the response JSON
+<1> This object contains the actual status. It is identical to the response JSON
except for the important addition of the `total` field. `total` is the total number
of operations that the `_reindex` expects to perform. You can estimate the
progress by adding the `updated`, `created`, and `deleted` fields. The request
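The progress estimate described above can be sketched as completed operations (`updated` + `created` + `deleted`) over the expected `total`; the status values below are made up for illustration:

```python
# Hypothetical task-status fields, shaped like the response above.
status = {"total": 6154, "updated": 3500, "created": 500, "deleted": 0}

completed = status["updated"] + status["created"] + status["deleted"]
progress = completed / status["total"]
print(f"{progress:.1%}")
```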
@@ -867,7 +867,7 @@ you to delete that document.
[[docs-reindex-cancel-task-api]]
=== Works with the Cancel Task API
-Any Reindex can be canceled using the <<task-cancellation,Task Cancel API>>. For
+Any reindex can be canceled using the <<task-cancellation,Task Cancel API>>. For
example:
[source,js]
@@ -900,8 +900,8 @@ The task ID can be found using the <<tasks,tasks API>>.
Just like when setting it on the Reindex API, `requests_per_second`
can be either `-1` to disable throttling or any decimal number
like `1.7` or `12` to throttle to that level. Rethrottling that speeds up the
-query takes effect immediately but rethrotting that slows down the query will
-take effect on after completing the current batch. This prevents scroll
+query takes effect immediately, but rethrottling that slows down the query will
+take effect after completing the current batch. This prevents scroll
timeouts.
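A sketch of building the rethrottle call for a running task (the task ID is a placeholder in the format the tasks API returns):

```python
# requests_per_second may be -1 (disable throttling) or any positive
# decimal such as 1.7 or 12.
def rethrottle_path(task_id, requests_per_second):
    return f"_reindex/{task_id}/_rethrottle?requests_per_second={requests_per_second}"

print(rethrottle_path("r1A2WoRbTwKZ516z6NEs5A:36619", -1))
```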
[float]
@@ -1112,7 +1112,7 @@ be larger than others. Expect larger slices to have a more even distribution.
are distributed proportionally to each sub-request. Combine that with the point
above about distribution being uneven and you should conclude that using
`size` with `slices` might not result in exactly `size` documents being
-`_reindex`ed.
+reindexed.
* Each sub-request gets a slightly different snapshot of the source index,
though these are all taken at approximately the same time.
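A toy illustration of why `size` with `slices` may fall short: the budget is split across sub-requests, but slices hold uneven shares of the documents, so some slices exhaust their share early. The numbers are invented:

```python
size, slices = 100, 4
per_slice = size // slices               # each sub-request gets ~25
docs_in_slice = [10, 40, 30, 20]         # uneven slice distribution
fetched = sum(min(per_slice, n) for n in docs_in_slice)
print(fetched)  # fewer than the requested 100
```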
@@ -1145,7 +1145,7 @@ partially completed index and starting over at that index. It also makes
parallelizing the process fairly simple: split the list of indices to reindex
and run each list in parallel.
-One off bash scripts seem to work nicely for this:
+One-off bash scripts seem to work nicely for this:
[source,bash]
----------------------------------------------------------------
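The per-index loop can also be sketched in Python (a dry run that only prints the requests; index names are hypothetical and the naming scheme appends `-1`, not necessarily the doc's):

```python
# One request per daily index, so each index can be migrated -- and
# parallelized -- independently.
indices = ["metricbeat-2016.05.30", "metricbeat-2016.05.31"]
for index in indices:
    body = {"source": {"index": index}, "dest": {"index": f"{index}-1"}}
    print("POST _reindex", body)
```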
@@ -1217,7 +1217,7 @@ GET metricbeat-2016.05.31-1/_doc/1
// CONSOLE
// TEST[continued]
-The previous method can also be used in conjunction with <<docs-reindex-change-name, change the name of a field>>
+The previous method can also be used in conjunction with <<docs-reindex-change-name, changing a field name>>
to load only the existing data into the new index and rename any fields if needed.
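A sketch of the combined body, assuming the field rename follows the docs' `flag` -> `tag` example (index names are placeholders):

```python
import json

# Reindex existing data into the new index while renaming a field
# with an inline Painless script.
body = {
    "source": {"index": "test"},
    "dest": {"index": "test2"},
    "script": {
        "source": 'ctx._source.tag = ctx._source.remove("flag")',
        "lang": "painless",
    },
}
print(json.dumps(body, indent=2))
```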
[float]