Added docs for batched_reduce_size

Relates to #23288
Clinton Gormley 2017-05-02 14:25:03 +02:00
parent 7b7cc488da
commit 582b3c06b6
2 changed files with 13 additions and 0 deletions


@@ -93,6 +93,14 @@ And here is a sample response:
the query execution has actually terminated_early. Defaults to no
terminate_after.
`batched_reduce_size`::
The number of shard results that should be reduced at once on the
coordinating node. This value should be used as a protection mechanism to
reduce the memory overhead per search request if the potential number of
shards in the request can be large.
Out of the above, the `search_type` and the `request_cache` must be passed as
query-string parameters. The rest of the search request should be passed
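Not part of the commit, but as a minimal sketch of how the parameter described in this hunk might be exercised: the request below sends a search with a smaller `batched_reduce_size`, so the coordinating node reduces shard results in smaller batches. The node address, the index pattern "logs-*", and the value 256 are all assumptions for illustration, not taken from the change.

# Hypothetical example: lower batched_reduce_size for a search that may hit
# many shards. Assumes a local node on http://localhost:9200 and an index
# pattern "logs-*"; adjust both for a real cluster.
import requests

resp = requests.post(
    "http://localhost:9200/logs-*/_search",
    params={"batched_reduce_size": 256},   # reduce 256 shard results at a time
    json={"query": {"match_all": {}}, "size": 0},
)
resp.raise_for_status()
print(resp.json()["_shards"])  # shows how many shards the request touched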


@@ -67,6 +67,11 @@ query.
|`analyze_wildcard` |Should wildcard and prefix queries be analyzed or
not. Defaults to `false`.
|`batched_reduce_size` | The number of shard results that should be reduced
at once on the coordinating node. This value should be used as a protection
mechanism to reduce the memory overhead per search request if the potential
number of shards in the request can be large.
|`default_operator` |The default operator to be used, can be `AND` or
`OR`. Defaults to `OR`.
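Again not from the commit itself, a small sketch of the same setting on a URI search, combined with the `q` and `default_operator` parameters from the surrounding table. The node address, the index name "tweets", and the query string are hypothetical.

# Hypothetical example: pass batched_reduce_size as a query-string parameter
# on a URI search. Assumes a local node on http://localhost:9200 and an
# index named "tweets".
import requests

resp = requests.get(
    "http://localhost:9200/tweets/_search",
    params={
        "q": "user:kimchy",
        "default_operator": "AND",
        "batched_reduce_size": 256,
    },
)
resp.raise_for_status()
print(resp.json()["hits"]["total"])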