Move search concurrency and parallelism paragraphs

These paragraphs should be on the top-level search page for visibility
so this commit moves them, and puts them under a clear heading.
Jason Tedor 2018-02-26 07:47:57 -08:00
parent beb8b10556
commit fb073216b1
2 changed files with 20 additions and 15 deletions

@ -137,6 +137,26 @@ However, it comes with an additional overhead of more frequent cancellation
checks, which can be noticeable on large, fast-running search queries. Changing this
setting only affects searches that start after the change is made.
[float]
[[search-concurrency-and-parallelism]]
== Search concurrency and parallelism
By default, Elasticsearch doesn't reject any search requests based on the number
of shards the request hits. While Elasticsearch will optimize the search
execution on the coordinating node, a large number of shards can have a
significant impact in terms of CPU and memory. It is usually a better idea to
organize data in such a way that there are fewer, larger shards. If you would
like to configure a soft limit, you can update the `action.search.shard_count.limit`
cluster setting to reject search requests that hit too many shards.
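For example, a cluster settings update along the following lines would put such
a soft limit in place (the value `1000` is illustrative, not a recommendation):

[source,js]
--------------------------------------------------
PUT /_cluster/settings
{
    "transient" : {
        "action.search.shard_count.limit" : 1000
    }
}
--------------------------------------------------
// CONSOLE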
The `max_concurrent_shard_requests` request parameter can be used to control the
maximum number of concurrent shard requests the search API will execute for the
request. This parameter should be used to protect a single request from
overloading a cluster (e.g., a default request will hit all indices in a
cluster, which could cause shard request rejections if the number of shards per
node is high). The default for this parameter is based on the number of data
nodes in the cluster, but it is at most `256`.
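For instance, a search request along these lines would cap the number of
concurrent shard requests for this request at `5` (the value is illustrative):

[source,js]
--------------------------------------------------
GET /_search?max_concurrent_shard_requests=5
{
    "query" : {
        "match_all" : {}
    }
}
--------------------------------------------------
// CONSOLE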
--
include::search/search.asciidoc[]

@ -59,18 +59,3 @@ GET /_search?q=tag:wow
--------------------------------------------------
// CONSOLE
// TEST[setup:twitter]
By default, Elasticsearch doesn't reject any search requests based on the number
of shards the request hits. While Elasticsearch will optimize the search execution
on the coordinating node, a large number of shards can have a significant impact
in terms of CPU and memory. It is usually a better idea to organize data in such
a way that there are fewer, larger shards. If you would like to configure a soft
limit, you can update the `action.search.shard_count.limit` cluster setting to
reject search requests that hit too many shards.
The search's `max_concurrent_shard_requests` request parameter can be used to control
the maximum number of concurrent shard requests the search API will execute for this
request. This parameter should be used to protect a single request from overloading a
cluster (i.e., a default request will hit all indices in a cluster, which could cause
shard request rejections if the number of shards per node is high). The default is
based on the number of data nodes in the cluster, but it is at most `256`.