Docs: Document `failures` on reindex and friends
We already had *some* documentation of the batch nature of `reindex` and friends, but it wasn't very obvious how it interacted with the `failures` element in the response. This adds some more documentation for the `failures` element.
parent: c08daf2589, commit: a7e69b07a1
@@ -284,9 +284,12 @@ executed again in order to conform to `requests_per_second`.
 
 `failures`::
 
-Array of all indexing failures. If this is non-empty then the request aborted
-because of those failures. See `conflicts` for how to prevent version conflicts
-from aborting the operation.
+Array of failures if there were any unrecoverable errors during the process. If
+this is non-empty then the request aborted because of those failures.
+Delete-by-query is implemented using batches and any failure causes the entire
+process to abort but all failures in the current batch are collected into the
+array. You can use the `conflicts` option to prevent reindex from aborting on
+version conflicts.
 
 
 [float]
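For reference, this is roughly what opting in to the `conflicts` behaviour described in the new text looks like; a minimal sketch, with the `twitter` index name and the `match_all` query chosen for illustration rather than taken from the patch:

[source,js]
----
POST twitter/_delete_by_query?conflicts=proceed
{
  "query": {
    "match_all": {}
  }
}
----

With `conflicts=proceed`, version conflicts are counted in the response's `version_conflicts` field instead of being collected into `failures` and aborting the request.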
@@ -161,12 +161,12 @@ POST _reindex
 
 `index` and `type` in `source` can both be lists, allowing you to copy from
 lots of sources in one request. This will copy documents from the `_doc` and
 `post` types in the `twitter` and `blog` index. The copied documents would include the
 `post` type in the `twitter` index and the `_doc` type in the `blog` index. For more
 specific parameters, you can use `query`.
 
 The Reindex API makes no effort to handle ID collisions. For such issues, the target index
 will remain valid, but it's not easy to predict which document will survive because
 the iteration order isn't well defined.
 
 [source,js]
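The `[source,js]` snippet that follows this paragraph in the file is cut off by the hunk; the request it introduces should look roughly like the sketch below, where the `all_together` destination index name is an assumption for illustration:

[source,js]
----
POST _reindex
{
  "source": {
    "index": ["twitter", "blog"],
    "type": ["_doc", "post"]
  },
  "dest": {
    "index": "all_together"
  }
}
----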
@@ -666,9 +666,11 @@ executed again in order to conform to `requests_per_second`.
 
 `failures`::
 
-Array of all indexing failures. If this is non-empty then the request aborted
-because of those failures. See `conflicts` for how to prevent version conflicts
-from aborting the operation.
+Array of failures if there were any unrecoverable errors during the process. If
+this is non-empty then the request aborted because of those failures. Reindex
+is implemented using batches and any failure causes the entire process to abort
+but all failures in the current batch are collected into the array. You can use
+the `conflicts` option to prevent reindex from aborting on version conflicts.
 
 [float]
 [[docs-reindex-task-api]]
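A minimal sketch of the `conflicts` option the new text points to, with illustrative index names:

[source,js]
----
POST _reindex
{
  "conflicts": "proceed",
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter",
    "op_type": "create"
  }
}
----

Setting `"conflicts": "proceed"` together with `"op_type": "create"` makes reindex count documents that already exist in the destination as `version_conflicts` instead of reporting them in `failures` and aborting.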
@@ -1004,7 +1006,7 @@ number for most indices. If slicing manually or otherwise tuning
 automatic slicing, use these guidelines.
 
 Query performance is most efficient when the number of `slices` is equal to the
 number of shards in the index. If that number is large (e.g. 500),
 choose a lower number as too many `slices` will hurt performance. Setting
 `slices` higher than the number of shards generally does not improve efficiency
 and adds overhead.
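As a concrete illustration of the guideline above, automatic slicing is requested with the `slices` query parameter; the index names and the value `5` (which assumes an index with five shards) are illustrative:

[source,js]
----
POST _reindex?slices=5&refresh
{
  "source": {
    "index": "twitter"
  },
  "dest": {
    "index": "new_twitter"
  }
}
----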
@@ -1018,7 +1020,7 @@ documents being reindexed and cluster resources.
 [float]
 === Reindex daily indices
 
 You can use `_reindex` in combination with <<modules-scripting-painless, Painless>>
 to reindex daily indices to apply a new template to the existing documents.
 
 Assuming you have indices consisting of documents as follows:
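The concrete example sits just below this hunk in the file; the general shape of such a request is sketched here, assuming `metricbeat-*` style daily indices (the index pattern and the `-1` suffix are assumptions, not taken from the patch):

[source,js]
----
POST _reindex
{
  "source": {
    "index": "metricbeat-*"
  },
  "dest": {
    "index": "metricbeat"
  },
  "script": {
    "lang": "painless",
    "source": "ctx._index = 'metricbeat-' + (ctx._index.substring('metricbeat-'.length(), ctx._index.length())) + '-1'"
  }
}
----

The script rewrites `ctx._index`, so each document is routed to a new daily index with a `-1` suffix; `dest.index` only applies when the script leaves `ctx._index` unchanged.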
@@ -338,9 +338,13 @@ executed again in order to conform to `requests_per_second`.
 
 `failures`::
 
-Array of all indexing failures. If this is non-empty then the request aborted
-because of those failures. See `conflicts` for how to prevent version conflicts
-from aborting the operation.
+Array of failures if there were any unrecoverable errors during the process. If
+this is non-empty then the request aborted because of those failures.
+Update-by-query is implemented using batches and any failure causes the entire
+process to abort but all failures in the current batch are collected into the
+array. You can use the `conflicts` option to prevent reindex from aborting on
+version conflicts.
+
 
 
 [float]
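And the equivalent opt-out for update-by-query; a minimal sketch in which the `twitter` index and the `user` query are illustrative:

[source,js]
----
POST twitter/_update_by_query?conflicts=proceed
{
  "query": {
    "term": {
      "user": "kimchy"
    }
  }
}
----

Documents whose version changed between the snapshot and the update are then counted as `version_conflicts` rather than aborting the request through `failures`.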