Set shard count limit to unlimited (#24012)

Now that we have incremental reduce functions for topN and aggregations
we can set the default for `action.search.shard_count.limit` to unlimited.
Users can still restrict the shard count via this setting, but by default we
execute the search across all shards matching the search request's index pattern.
Simon Willnauer 2017-04-10 17:09:21 +02:00 committed by GitHub
parent 8cfb9e446c
commit 040b86a76b
2 changed files with 8 additions and 7 deletions


@@ -60,7 +60,7 @@ public class TransportSearchAction extends HandledTransportAction<SearchRequest,
     /** The maximum number of shards for a single search request. */
     public static final Setting<Long> SHARD_COUNT_LIMIT_SETTING = Setting.longSetting(
-            "action.search.shard_count.limit", 1000L, 1L, Property.Dynamic, Property.NodeScope);
+            "action.search.shard_count.limit", Long.MAX_VALUE, 1L, Property.Dynamic, Property.NodeScope);
     private final ClusterService clusterService;
     private final SearchTransportService searchTransportService;
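
The setting is registered with `Property.Dynamic`, so its value can be updated at runtime and has to be read from the cluster settings on every request. A minimal sketch of how the coordinating node could enforce the limit; the helper name and error message are illustrative, not taken from this commit:

    import org.elasticsearch.cluster.service.ClusterService;

    // Illustrative sketch only: this helper is not part of the diff above.
    // Because the setting is Dynamic, the current value is read from the
    // cluster settings on each request instead of being cached at startup.
    private static void failIfOverShardCountLimit(ClusterService clusterService, int shardCount) {
        final long shardCountLimit = clusterService.getClusterSettings().get(SHARD_COUNT_LIMIT_SETTING);
        if (shardCount > shardCountLimit) {
            throw new IllegalArgumentException("Trying to query " + shardCount
                + " shards, which is over the limit of " + shardCountLimit);
        }
    }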


@@ -60,9 +60,10 @@ GET /_search?q=tag:wow
 // CONSOLE
 // TEST[setup:twitter]
-By default elasticsearch rejects search requests that would query more than
-1000 shards. The reason is that such large numbers of shards make the job of
-the coordinating node very CPU and memory intensive. It is usually a better
-idea to organize data in such a way that there are fewer larger shards. In
-case you would like to bypass this limit, which is discouraged, you can update
-the `action.search.shard_count.limit` cluster setting to a greater value.
+By default elasticsearch doesn't reject any search requests based on the number
+of shards the request hits. While elasticsearch will optimize the search execution
+on the coordinating node, a large number of shards can have a significant impact
+on CPU and memory. It is usually a better idea to organize data in such a way
+that there are fewer larger shards. In case you would like to configure a soft
+limit, you can update the `action.search.shard_count.limit` cluster setting in order
+to reject search requests that hit too many shards.
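
For example, to restore the previous behaviour of rejecting searches that would hit more than 1000 shards, the limit can be updated at runtime through the cluster settings API (a transient update is shown here; a persistent one works the same way):

    PUT /_cluster/settings
    {
        "transient": {
            "action.search.shard_count.limit": 1000
        }
    }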